
Stable Diffusion taking too long (Reddit discussion roundup)



 

How long is too long? I've made a few models but there was little information on how long I should train or at what learning rates. The "gold standard" seems to be [Images] * 100, so that's what I've been using.

My process (and desired results) are maybe a bit sideways of what others prefer, but for me, training on humans that are meant to be used in variable costumes/hairstyles etc., I find that between 3-5k steps you get something that matches the subject and is pretty flexible.

I've made about 50 Instagram model LoRAs. It usually takes maybe 15 minutes or so to go through and select images, maybe 5 or 10 minutes to tag them, another ~45 minutes to train, and maybe half an hour to an hour to generate a showcase.

You can be "laying" in all directions, so training data probably contains various angles: left to right, right to left, on stomach, on back. Sitting and standing are usually more straightforward, since they are simpler and the top of the body is somewhat similar in both cases.

There are even official statements from Adobe, as their Stable Diffusion model has well-known art in the training data.

If prompting for something like "brad pitt" is enough to get Brad Pitt's likeness in Stable Diffusion 1.5, and it only uses 2 tokens (words), then it should be possible to capture another person's likeness with only 2 vectors per token. Each vector adds 4KB to the final size of the embedding file.

With refiner and no upscale I take between 18-20 seconds per 1024x1024 image. The progress bar is nowhere near accurate.

(With simple tags) I once made a 1080 image and it took the whole night, from 6 pm to 3 am.

For me it's just very inconsistent. I'm 100% sure this is something I've done, but I'm not sure what. I feel like I haven't seen anyone else experiencing delays this long.

So I've been testing out AnimateDiff and its output videos, but I'm noticing something odd. It also runs out of memory if I use the default scripts, so I have to use the optimizedSD ones. I stopped using it, and I had updated my Python in the meantime.

But without any further details, it's hard to give proper advice.

Also, low system RAM and a slow HDD add more time loading checkpoints.

The post above was assuming 512x512, since that's what the model was trained on; below that you can get artifacting.

I understand that AMD isn't the best when it comes to Stable Diffusion, but it seems like it shouldn't be this slow. I'm sorry, I don't know much about this; I've tried looking it up but didn't find anything very clear, so could you please explain a bit more?

Probably playing a game for a long period of time can be more damaging to a graphics card than using Stable Diffusion, since it will take more resources and generate more heat for a longer period of time.

I don't use --medvram for SD1.5 because I don't need it, so using both SDXL and SD1.5 models in the same A1111 instance wasn't practical: I ran one instance with --medvram just for SDXL and one without for SD1.5. Now I can just use the same one with --medvram-sdxl without having to swap.
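Flags like --medvram and --medvram-sdxl are normally set in the webui launch script rather than typed each time. A minimal webui-user.bat sketch, assuming the AUTOMATIC1111 webui on Windows; flag availability depends on your build (--medvram-sdxl needs a fairly recent one):

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem --xformers speeds up attention on NVIDIA cards; --medvram-sdxl only applies
    rem the low-VRAM code path when an SDXL checkpoint is loaded.
    set COMMANDLINE_ARGS=--xformers --medvram-sdxl
    rem Load .safetensors checkpoints straight to the GPU, skipping the CPU copy.
    set SAFETENSORS_FAST_GPU=1
    call webui.bat

On Linux the same flags go into webui-user.sh as an exported COMMANDLINE_ARGS variable.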
UPDATE: In the most recent version (9/22), this button is gone. There's a setting to save the first half of the process: Settings > Saving images/grids > Save a copy of image before applying highres fix.

I would also turn on the hires. fix, so the image generated would be 1024 x 1024 from the initial 512 x 512.

If your best sample image is happening sooner, then it's training too fast.

Right-click your boot drive, select "Properties", then "Disk Clean-up" on the General tab, and make sure "Temporary files" is ticked. Try running Disk Clean-up to see if the space is reclaimed.

If you go to Stable Diffusion Webui on GitHub and check Issue 11063 you'll see it all discussed there. There were some other suggestions, such as downgrading pytorch; most seemed to have success with driver 531.68, so you'd probably want to try that. But again, you can just read what people have said there and see if anything works.

It's a bit annoying to have to reload the ControlNet model every time I send the picture to be made high-res in img2img.

ControlNet Reference-only v1.1.168 adds the Style Fidelity slider for 'Balanced' to adjust the fidelity of the style being referenced.

It depends on your free RAM.

I found that the use of negative embeddings like easynegative tends to "modelize" people a lot; it makes them all supermodel Photoshop-type images.

Things like "looking away" and "serious eyes" help get the details correct. In addition, adding a facial expression description is also helpful for generating different angles.

To generate realistic images of people, I found that adding "portrait photo" at the beginning of the prompt is extremely effective.

Great for graphic design and photography.

Thanks so much, and I'm glad that you've picked up on my intent.

So basically, I have had SD set up with Automatic1111 for months.

Here is my first 45 days of wanting to make an AI Influencer and Fanvue/OF model with no prior Stable Diffusion experience.

I Solved Hands (for now).

Relaunching SD fixed it, as the default is automatic!! For those curious, my base image speed is now 21 seconds.

An SSD would help, although I use an HDD and just leave cmdr2/a1111 running 24/7 on my media server.

Around 1.5 s/it, but the Refiner goes up to 30 s/it.

Without a powerful video card this PC is useless for Stable Diffusion.

Stable Diffusion is too slow today.

Has anyone experimented much with different parameter workflows?

SDXL model, 1024x1024 picture: around 7 seconds for me, but sometimes, I don't know why, the model has to be reloaded, resulting in a much longer generation time.

You can just place the new checkpoint in the models directory and switch between them in UI settings.
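For the "just drop the checkpoint in the models directory" tip, a hedged sketch assuming a default AUTOMATIC1111 folder layout; the URL and filename are placeholders, so copy the real download link from the model page:

    # Assumes a stock AUTOMATIC1111 install; <VERSION_ID> is a placeholder.
    cd stable-diffusion-webui/models/Stable-diffusion
    wget -O my-checkpoint.safetensors "https://civitai.com/api/download/models/<VERSION_ID>"
    # Refresh the checkpoint dropdown in the UI and pick the new file; no restart needed.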
I'm at 1 it/s on my puny 1060.

Stable Diffusion normally runs fine and without issue on my server, unless the server is also hosting a console-only Minecraft server (which does not use VRAM). If Minecraft is using even 1GB of the 16GB of RAM available, Stable fails to finish.

Go to Settings > Stable Diffusion: "Maximum number of checkpoints loaded at the same time" should be set to 2, and "Only keep one model on device" should be UNCHECKED. These settings will keep both the refiner and the base model you are using in VRAM, increasing image generation speed drastically.

@edgartaor That's odd, I'm always testing the latest dev version and I don't have any issue on my 2070S 8GB; generation times are ~30 sec for 1024x1024, Euler A, 25 steps (with or without refiner in use).

The only caveat is that it is NVIDIA only.

A few sets I use: "two person, painting, drawing, anime, CGI, unreal engine, 3d, render, deformed iris".

I believe this resulted in it overwhelming my RAM.

Images are generated normally and quickly; Stable Diffusion is on the SSD; RTX 2060 Super; 16GB RAM; AMD Ryzen 7 2700 eight-core; I put these commands on webui start: set COMMANDLINE_ARGS=--xformers and set SAFETENSORS_FAST_GPU=1.

You need to use SAFETENSORS_FAST_GPU=1 when loading on GPU. This skips the CPU tensor allocation, and it takes less than a sec. But it's not 100% certain to be safe (still miles better than torch pickle, but it does use some trickery to bypass torch, which allocates on CPU first, and this trickery hasn't been verified externally).

If you're running things from the command line then it's maybe easier to run a webUI like a daemon and send commands.

If that's not it then please provide actual generation parameters.

If you don't have a GPU, then 5+ minutes is totally normal.

Nope, but because you have an AMD card, it's using the GPU to do the heavy lifting.

Steal liberally.

I have an older Mac at home and a slightly newer Mac at work.

I've created a 1-Click launcher for SDXL 1.0 + Automatic1111 Stable Diffusion webui.

I would expect a 3090 to do much better than 10 seconds.

I had this after doing a dist upgrade on OpenSUSE Tumbleweed. It can easily be fixed by running python3 -m venv ./stable-diffusion-webui/venv/ --upgrade; you may also have to update pyenv.cfg to match your new python3 version if it did not do so automatically.

It feels really random, because it should just start the process of loading the model.

Resolution matters a lot: 512 is a lot faster than training at 768.

The more noise you set, the faster it'll be.

All you gotta do is open up Command Prompt and set a limit for the folder, like so: vssadmin resize shadowstorage /on=c …
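The vssadmin command in the comment above is cut off. The general form, run from an elevated Command Prompt, looks like this; the drive letters and the 10GB cap are only examples:

    rem Check how much space Volume Shadow Copy (System Volume Information) is using, then cap it.
    vssadmin list shadowstorage
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB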
Switching ControlNet models takes a good 40 seconds to a whole minute sometimes, even though both inputs are static and use no preprocessing.

I use a sanity prompt of "with blue hair" to identify when it becomes overtrained (loses the blue).

About that huge long negative prompt list: a comparison.

I didn't use the step_by_step.py provided by the website.

Generally, you can download the regular model, rename it to whatever your waifu-diffusion model file is named, and replace it in the folder. Otherwise the process is the same. You select the Stable Diffusion checkpoint PFG instead of SD 1.5.

I don't have the money, and I use Stable Diffusion mostly for work now, but there is no budget for new hardware.

[Bug]: taking too long loading weights #9117: when starting Stable Diffusion, or when changing the model, it takes a long time, from 15 to 20 minutes.

If I go into Stable Diffusion after booting up my PC, generations will take roughly 40-50 seconds for a 512x768. Say I stop and do other things for a few hours and come back later on; the same generations will now take nearly twice as long.

Today I arrive at the same computer, using the exact same setup, but the generation speed has slowed to a crawl, estimating 18 hours for the same 1800 images. Is it just me, or is someone else experiencing the same thing?

Stable Diffusion gets slower every iteration; for example, the last run was Batch 1: 51.21s/it, Batch 2: 64.67s/it, Batch 3: 68.39s/it, Batch 4: 70.32s/it, etc. I can understand why the first few iterations are fast for a single image, because it's just starting to generate, but why does it keep slowing down?

I expect it's because when SD is loaded, your model is also loaded and the LoRA has to be content with whatever VRAM you have left over.

My specs are an AMD Radeon RX 580 and 16 gigs of RAM. I'm running on the latest drivers, Windows 10, and followed the topmost tutorial on the wiki for AMD GPUs.

Processor: Intel(R) Core(TM) i7-4510U CPU @ 2.00GHz.

I mean, I use Colab, and an 80-step image would take maybe a little over 1 minute.

Never heard of RAM disks, but I'm willing to give it a go.

Paolo Eleuteri Serpieri is the artist who drew Druuna for Heavy Metal magazine.

Yeah, and there's a bigger problem with the limbs bending where there are no joints, or a woman having three legs.

Avyn - a search engine with 9.6 million images generated by Stable Diffusion; it also allows you to select an image and generate a new image based on its prompt. Now offers CLIP image searching, masked inpainting, as well as text-to-mask inpainting.

It seems like, with only a few exceptions, LDSR does the best job at upsampling, and it also takes a couple of orders of magnitude longer than the other upsamplers to do the job.

Upscale study, from 768px. Original -> x2 (512 tile) -> x2 (512 tile) -> x3 (512 tile) -> x1 (1200 tile). Latent upscaler = any upscaler with "latent" in the name; non-latent upscaler = any upscaler without "latent" in the name. Generally, non-latent upscalers had no lower limit for the denoising needed, whereas latent ones need at least 0.4-0.5 to get a decent result. Fractalization/twinning happened at lower denoising as upscaling increased. Upscaling using "None", x2, 20 steps and 0.5x noise took between 3-5 minutes.

It will just crop the firstpass image.

Use BLIP for captioning.

Roop unleashed taking too long for videos: I face-swapped a video and it took 4 hours for 20 seconds, which is too slow. Currently, Rope is the fastest swapper with the most features, and the easiest UI.

After rebuilding, always swap the cudnn files. This is necessary for 40xx cards with torch < 2.0.

Another trick I haven't seen mentioned, that I personally use: go to RunPod and get a server; open a Jupyter notebook; open a terminal; wget your models from civitai.com (right-click the "download latest" button to get the URL) into the stable-diffusion folder inside models; pip3 install --upgrade b2; get a key from B2; b2 authorize-account with the two keys; make a bucket.
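A rough shell sketch of that rented-GPU workflow; the bucket name, key placeholders and download URL are assumptions, and the command names follow the older b2 CLI (newer releases spell them differently):

    # Hypothetical session on a rented GPU box (e.g. RunPod).
    pip3 install --upgrade b2
    b2 authorize-account <applicationKeyId> <applicationKey>
    b2 create-bucket my-sd-backups allPrivate

    # Pull the checkpoint straight onto the box.
    cd stable-diffusion-webui/models/Stable-diffusion
    wget -O model.safetensors "<civitai download url>"

    # Later, push outputs to the bucket so they survive the pod being destroyed.
    b2 upload-file my-sd-backups outputs/archive.zip archive.zip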
You may want to try switching to the sd_xl_base_1.0_0.9vae.safetensors checkpoint, since the VAE baked into the 1.0 release has known issues.

SD Guide for Artists and Non-Artists - a highly detailed guide covering nearly every aspect of Stable Diffusion; it goes into depth on prompt building, SD's various samplers and more. Marked as NSFW because I talk about bj's and such.

OpenArt - search powered by OpenAI's CLIP model; provides prompt text with images. Includes the ability to add favorites.

The maximum usable length of a Stable Diffusion text prompt is purportedly 77 tokens. Here is what that means, and how to test how many tokens are in your text prompt. Towards the end of the Discord bot's run, they added a comment when the bot detects too many tokens.

Running Stable Diffusion in 260MB of RAM!

SargeZT has published the first batch of ControlNet and T2I models for XL. He published on HF: SD XL 1.0 Depth Vidit, Depth Faid Vidit, Depth, Zeed, Seg, Segmentation, Scribble. Although it is not yet perfect (his own words), you can use it and have fun.

Study on understanding Stable Diffusion with the Utah Teapot. Paper: "Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model". Researchers discover that Stable Diffusion v1 uses internal representations of 3D geometry when generating an image. This ability emerged during the training phase of the AI, and was not programmed by people.

And I'm constantly hanging at 95-100% completion.

I have similar issues, but it's so random: some generations at 1024x1440 take 40 seconds and others 20 minutes, and I can't seem to reproduce what makes it go fast or slow, since in a batch of 16 some images take 2 minutes while others take 10 times longer to finish.

Also, there are a ton of benchmarks like FurMark that do nothing but punish a GPU 24 hours a day, and folks can run those for a long time.

Select the "SD upscale" button at the top.

I have the latest driver, but it being an AMD driver (so the worst of the worst, basically), I also can't use xformers.

The server has an RTX 3090 with 24GB of VRAM, but it doesn't matter how small I make the image or the batch. I'm not using hires. fix, and I am making them 512x512.

I was having issues with the xyz script, so I decided to reinstall Stable Diffusion.

I'm not sure if I'm doing something wrong here, but rendering on my setup seems to be very slow and typically takes several minutes. I'm wondering if the CPU/mobo is the bottleneck.

Also, if you say the model "does nothing", then maybe your captioning was wrong, not necessarily the training settings.

I think it involves ONNX or something like that.

On the card itself, things like the time it takes for an electrical signal to cross the chip at the speed of light are non-trivial.

Interrogate CLIP used to take seconds. Now it takes 5-10 minutes.

Negatives: "in focus, professional, studio".

What is so fundamentally different about LDSR that it pegs my 3090 for a minute or two to upsample a single image, and yet produces such great results?

Ha, SDXL doesn't even work in Automatic1111 for me, so I uninstalled it all.

Good configuration for Stable Diffusion: 16GB RAM, an NVIDIA video card with 8GB of memory, and an SSD.

Let me give you my laptop specifics.

I have to use the following flags to get the webui to run at all with only 3 GB VRAM: --lowvram --xformers --always-batch-cond-uncond --opt-sub-quad-attention --opt-split-attention-v1. Image generation is painfully slow.
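Those low-VRAM flags are normally set once in the launch script rather than typed each time. A sketch of webui-user.sh for a 3 GB card, using exactly the flags quoted above (AUTOMATIC1111 webui on Linux assumed):

    # Expect it to be slow; these flags only keep the card from running out of memory.
    export COMMANDLINE_ARGS="--lowvram --xformers --always-batch-cond-uncond --opt-sub-quad-attention --opt-split-attention-v1"
    ./webui.sh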
This is especially painful since you need to switch to use the refiner.

It takes about 5-8 seconds to generate an SDXL image on my 4080, but loading the model is still very slow.

Dall-E 3 is so much better than Dall-E 2 at correctly rendering shadows (the previous version would often cast illogical shadows). And it now correctly renders different photo styles like infrared; it even understands that clothing and eyes render differently in infrared.

The longer the session goes, the slower SD gets for me. What usually was taking 4 hours was now 130 hours.

It takes over 4 seconds to do 1 iteration on 512x512 image generation.

I'll be sending out a new release soon that further improves the performance. I'm happy to answer any questions.

Do not use traditional negatives or positives for better quality.

It is a different workload profile (less VRAM-heavy, more clock-heavy, I would imagine), but people have been abusing GPUs for a long time. But for AI they are obsolete.

I also have a 3070; the base model generation is always at about 1-1.5 s/it as well.

Tick "save LoRA during training" and make your checkpoint from the best-looking sample once it's ~1000-1800 iterations.

IMG2IMG takes a long time to start. When I try to use the img2img method with ControlNet, for some reason it takes 3-4 minutes after pressing generate before it starts loading the ControlNet model and performing the steps.

I don't know if it's a problem with my internet, my location or something else.

Before these fixes it would infinitely hang my computer and even require complete restarts, and after them I have no guarantee it's still working, though usually it only takes a minute or two to actually develop now.

The body proportions are not incorrect. It is camera perspective: things closer to the camera will appear bigger, such as the head in the first picture and the legs in the second picture.

Take note of phrases used in prompts that generate good images. Steal their prompt verbatim and then take out an artist.

Putting the phrase "very detailed illustration" before the prompt, and including the phrase "in the style of Serpieri", gives me realistic hands 9 out of 10 times.

I'm running SD on a 2017 MacBook Pro running Ventura 13.4 with a 2.3 GHz dual-core Intel Core i5 processor and 8GB of RAM.

That's an unfortunate workaround, as it limits many prompts that are found online that apparently work on other models of Stable Diffusion.

For anyone unsure, here's how I'm using it now: generate a batch of 512x512's, find the one I like and enter its seed into the seed box, select hi-res fix and set the firstpass width and height to 512, then change the resolution to the desired output. But you're right that it's basically generating a lower-res image and then resizing it to your desired resolution using img2img.

Edit: nvm, it still takes a long time.

Some forks, like AUTOMATIC1111's, allow you to switch models on the fly. It makes the process more seamless though.

If you use any sampling method other than DDIM, halfway through the frames it suddenly changes the seed / image itself to something vastly different.

Running on a 4090, it takes ~20 seconds to generate 9 512x512 images.

But that hasn't been brought into the new website yet.

My image generation is waaaay too slow.

Rebuilding the venv is a vital step, so make sure you do that after reinstalling the drivers.
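One way to force that venv rebuild after a driver reinstall, assuming the AUTOMATIC1111 layout; renaming rather than deleting keeps a fallback:

    # Linux/macOS sketch; on Windows rename the venv folder in Explorer and run webui-user.bat.
    cd stable-diffusion-webui
    mv venv venv.bak
    ./webui.sh        # recreates the venv and reinstalls torch and friends on launch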
When I train a person LoRA with my 8GB GPU (~35 images, 1 epoch), it takes around 30 minutes. If I have SD running (not doing anything, just idling!), the LoRA takes 2 hours for 1000 steps; if I kill SD and then train the LoRA, it takes 6 minutes.

Buying anything new is not in the cards for a couple of years.

GPU is a GTX 3080 with 10GB VRAM, CPU is a 5960X.

This is entirely specific to ControlNet: I'm still getting 14-15 it/s in basic txt2img.

Yo, I downloaded Stable Diffusion last night but I've been having issues with the length of time it takes to generate images, anywhere from 30 minutes to around 2 hours.

You can find the training parameters, process, and sample datasets here.

And the time it takes for enough electrons to accumulate on a transistor and flip a bit is where the hard limit is. 32 GB/s is peanuts, and 64 ns is a long, long wait.

Maybe you need to check your negative prompt; add everything you don't want, like "stains, cartoon".

It's probably creating temporary files and not deleting them on shutdown. Yes, the System Volume Information folder was taking up loads of space. The reason why: it was set to unbound, which means it can create as many backups as it wants before eventually taking all your space and never giving it back.

First, I had to reinstall the venv folder a couple of times until it finally loaded, but image generation is absurdly slow.

I have used the positive prompt: marie_rose, 3d, bare_shoulders, barefoot, blonde_hair, blue_eyes, colored nails, freckles on the face, braided hair, pigtails. Note: the positive prompt can be anything related to hands or feet.

When I started playing around with SD and other AI image generators, I really struggled to understand what any of the setting parameters actually do, since the information about them was, and still is, really spread out all over the place and frequently incorrect.

As a tip, your output resolution doesn't have to be a square.

2 minutes does seem a bit long.

It would seem more time-efficient to me, due to the capability of a larger sample size, and it would also return a higher quality output, to use a modified fork meant to run on lower-VRAM hardware.

Your friend with 8 GB could have little to nothing running in the background on a fresh Windows install, versus you running a bunch of YouTube videos, Spotify and Discord on an old install with a lot of processes in the background.

Even at the default 512 by 512 and with 20 sampling steps, an image takes 6 minutes.

YMMV, of course.

But, in the end, it really depends on the usage and how well the card is taken care of. Keep in mind that these cards were often used for 24/7 bitcoin mining.

I have 16GB of RAM but I see it's constantly hitting a limit when trying to change models; on occasion everything freezes and I have to go away and come back like 20 minutes later (or restart).

So for example, if I have a 512x768 image with a full body and a smaller, zoomed-out face, I inpaint the face but change the resolution to 1024x1536, and it gives better detail and definition to the area I am inpainting.

Open Task Manager and look at performance. I'm guessing you'll see your GPU doing nothing at 0% and your CPU doing the work at a small fraction of the speed.
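If you'd rather watch utilization from a terminal than from Task Manager (NVIDIA cards only), nvidia-smi can poll it while a generation runs:

    # Refresh every second; GPU-Util stuck at 0% during generation means the work is on the CPU.
    nvidia-smi -l 1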
Dope, thanks. Hopefully I won't have an issue with safetensors files still, lol; I will try it out soon.

The cmdr2 webui I'm using simply leaves it loaded in memory, AFAICT.

Super long generation times.

However, if you run it on a very slow PC it might take a long time; it might even run on the CPU if no NVIDIA graphics card is available.

Then yep, that's way too much.

So I'm using an Automatic1111 AMD fork with a 5600 XT.

Flip through here and look for things similar to what you want. Or just let yourself be inspired.

The AI Diffusion plugin is fantastic, and the person who made it (who is on Reddit) deserves a lot of support.

I have only 2 GB of VRAM on my local machine, so I can't train textual inversions or Dreambooth models locally. I could buy some virtual machines on the cloud and run the training there. Since I pay per hour, I wanted to know how long training textual inversions and Dreambooth models usually takes. I tried SD 1.4, 1.5 or 2.0, doesn't matter.

The post just asked for the speed difference between having it on vs off.

Also change the optimisation in Settings > Stable Diffusion to SDP instead of Automatic and give it a go.

My GTX 1060 3 GB can output a single 512x512 image at 50 steps in 67 seconds with the latest Stable Diffusion.

The workflow would be like this: I would use txt2img to generate an image. However, when I want to do certain adjustments to an image…

I used two different yet similar prompts and did 4 A/B studies with each prompt, so 4 seeds per prompt, 8 total. Settings for all eight stayed the same: Steps: 20, Sampler: Euler a, CFG scale: 7, Face restoration: CodeFormer, Size: 512x768, Model hash: 7460a6fa.

Yep, and every single test I've seen has shown that a few good negatives are far better than a wall of useless ones.

There are so many people who failed school but are good at art, thinking AI steals art and having no clue at all.

Background: about a month and a half ago, I read an article about AI influencers raking in $3-$10k on Instagram and Fanvue.

My bet is that both models being loaded at the same time on 8GB VRAM causes this problem.

If you do 512x512 for SDXL then you'll get terrible results.

Very slow rendering.

Playing PC games doesn't run the GPU at 100% all the time.

I used the official huggingface example and replaced the model. You probably need to specify a floating point data type: image = (image / 2.0 + 0.5).clamp(0.0, 1.0).

If you're using torch 1.13 (the default), download this and put the contents of the bin folder in stable-diffusion-webui\venv\Lib\site-packages\torch\lib.
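The cuDNN swap above amounts to copying DLLs into torch's lib folder. A hedged Windows sketch, where <cudnn_download> stands for wherever you unpacked the cuDNN archive and the webui path should match your install:

    rem Overwrite the DLLs torch shipped with; keep a backup copy of the originals first.
    xcopy /y "<cudnn_download>\bin\*.dll" "stable-diffusion-webui\venv\Lib\site-packages\torch\lib\"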
I gathered from here that hires. fix produces more photorealistic results, and I find that to be true.

It's stuck on the "processing" step and it lasts forever.

Inpainting in Automatic1111 taking waaaay too long (randomly): I have been inpainting using Automatic1111, and for whatever reason it seems to be completely random how much time it takes. The size of the image is constant (1024 x 1024); sometimes it takes roughly 3-4 minutes, while other times it nearly crashes my PC. (At first I thought this was normal, until I saw videos of people generating an image in 5 seconds.)

Or maybe you are using many high weights, like (perfect face:1.8), (perfect hands:1.8), (something else:1.8); try decreasing them as much as possible. You can also try lowering your CFG scale, or decreasing the steps.

If you don't have one, then it runs on the CPU and takes about 200 times longer.

It is limited to 253 characters. One method I use quite frequently is to use GPT-4 / ChatGPT / BingChat etc. to summarise my prompt to 253 characters.

Generating images in the web UI is painfully slow; it can take 15-20 minutes to generate a simple 512x512 image.

When inpainting, you can raise the resolution higher than the original image, and the results are more detailed.

When you merge a checkpoint, it takes those same things on the other checkpoint and averages their math out: if the dog model has a point at 7 and the cat model has it at 3, the merge averages it to 5. So when you merge a cat model and a dog model at 50%, you've watered down each by about 50%, except for the things they have in common.

Instead, you need to go down to "Scripts" at the bottom and select the "SD Upscale" script.

You shouldn't stray too far from 1024x1024: basically never less than 768 or more than 1280.

Did you also try "shot on iPhone" in your prompt? Put something like "highly detailed" in the prompt box.

8GB VRAM is absolutely OK and works well, but using --medvram is mandatory. It all depends on your hardware.

Share your workflow and the tool you are using, along with a snapshot taken while the image is generating that shows the VRAM usage.

Hello everyone. I have a laptop with an RTX 3060 6GB (laptop version, obviously), which should average 6 to 7 it/s. Yesterday I decided to uninstall everything and do a complete clean installation of stable-diffusion-webui by AUTOMATIC1111 and all the extensions I had previously.
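For the clean-reinstall route, a minimal sketch on Linux; the repository URL is the official AUTOMATIC1111 one, and on Windows you would clone the same repo and run webui-user.bat instead:

    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    ./webui.sh    # first launch creates the venv and downloads dependencies

Any launch flags (for example the low-VRAM ones quoted earlier) go into webui-user.sh before the first run.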