ComfyUI SDXL upscale not working (v4). Not sure about the other file formats, as I've not had to use them. I'm on 0c since it seems less aggressive with the mask, but I'll switch to the other if it's not working for me.

SDXL examples. Ultimate SD Upscale settings: upscaler R-ESRGAN 4x+, tile_width 512, tile_height 512, mask_blur 8, padding 32. I saw people making high-quality upscales while adding details, but in my case it's just messing the image up.

An example might be using a latent upscale: it works fine, but it adds a ton of noise that can cause your image to change after going through the second sampler.

Install custom nodes into the "/custom_nodes/" directory inside ComfyUI. (Cache settings are found in the config file 'node_settings'.)

Every time I try to create an image at 512x512, it is very slow but eventually finishes, giving me a corrupted mess like this.

ComfyUI SDXL-Turbo extension with upscale nodes (YouTube). I'm pretty sure it's either in by default or in one of those two. Both are quick and dirty tutorials without too much rambling; no workflows included because of how basic they are.

I love to go with an SDXL model for the initial image and a good SD1.5 model for the upscale. Set the tiles to 1024x1024 (or your SDXL resolution) and set the tile padding to 128. I made a preview of each step to see how the image changes going from SDXL to SD1.5 at 0.5 denoise.
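Tiled upscalers such as Ultimate SD Upscale work by slicing the image into fixed-size tiles with extra padding so neighbouring tiles overlap — that is what the tile_width/tile_height/padding settings above control. A minimal sketch of that tile geometry (a hypothetical helper, not the node's actual code):

```python
def tile_boxes(width, height, tile=512, pad=32):
    """Compute overlapping tile rectangles the way a tiled upscaler
    slices an image: each `tile`-sized cell is expanded by `pad` px of
    overlap on every side, clamped to the image borders."""
    boxes = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            boxes.append((max(0, x - pad), max(0, y - pad),
                          min(width, x + tile + pad),
                          min(height, y + tile + pad)))
    return boxes

# A 1024x1024 image with 512 px tiles and 32 px padding yields 4 tiles.
print(len(tile_boxes(1024, 1024)))  # 4
```

Each tile is diffused separately and the overlapping borders are blended (feathered by mask_blur), which is why padding that is too small produces visible seams.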
I don't know if there is any other upscaler node that works, but the basic upscale methods all don't do the job (nearest-exact, bilinear, area, bicubic, bislerp). So yes, to your question: up to this moment, all I can see working is putting the upscaler node right after the refiner sampler, once the leftover noise is cleared and the latent is clean.

ControlNet isn't working with SDXL, right? Anyway, how about being more bold and using something like 0.51 denoising?

Features: Hires Fix, Perturbed-Attention Guidance (PAG), Perp-Neg, ControlNet, Face Detailer, Refiner, object masking, and more.

Ultimate SD Upscale works fine with SDXL, but you should probably tweak the settings a little. Each upscale model has a specific scaling factor (2x, 3x, 4x, ...) that it is optimized to work with. If you want a fully latent upscale, make sure the second sampler after your latent upscale is above 0.5 denoise. It works more like DLSS.

The textual inversions are working when I simply leave them in the folder where they are (and have always been, back when they used to be recognized and working properly in the UI); I can include them in my prompt, with the parenthesis and naming parameters intact, when I generate images.

I'm creating some cool images with some SD1.5 models. Sadly, I only have a V100 for training this checkpoint, which can only train with a batch size of 1 at a slow speed.

Just type in a positive+negative prompt, and the bot will generate an image that matches your text.

Barebones TurboXL (not an XL finetune merged with TurboXL) can produce decent quality in just 3 steps, which makes a latent-upscale refinement pass cheap.

Help with hands in SDXL/ComfyUI.

SDXL 1.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer). The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Follow the installation guides for each, and then you can find my workflow here.
Taking a look at that stack trace, I would guess it's passing the wrong type to tile width/height in Ultimate Upscale — but again, showing or linking the workflow will make this a lot easier.

The pixel upscalers are OK, but they don't hold a candle to a latent upscale for adding detail. This AI-based upscaling is a game-changer for all sorts of visual work. Not such a great way to upscale images, IMO.

SDXL + SD1.5 Refine + Upscale (without ControlNet) — waiting for your advice. Download ae.safetensors. Upscale your output and pass it through a hand detailer in your SDXL workflow. I don't suppose you know a good way to get a latent upscale (HighRes Fix) working in ComfyUI with SDXL? I have been trying for ages with no luck.

Is there any way I can iterate on the output of SDXL Turbo using ComfyUI? Upscale while feeding a "detailed faces" positive prompt to the upscaler as input? I'm new to ComfyUI; some help would be greatly appreciated.

90% of workflows not working: I did Install Missing Custom Nodes, Update All, and so on, but there are many issues every time I load the workflows, and it looks pretty complicated to solve.

The only important thing is that for optimal performance the resolution should be set to 1024x1024, or other resolutions with the same number of pixels but a different aspect ratio.

ThinkDiffusion - Img2Img.

I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow.

u/wolowhatever: we set 5 as the default, but it really depends on the image and image style tbh — I tend to find that most images work well around a Freedom of 3.

To reproduce the preview not launching: launch ComfyUI using run_nvidia_gpu.bat.

Upscale to unlimited resolution using SDXL Tile with no VRAM limitations.
I'm new to ComfyUI — does the sample image work as a "workflow save", as if it were a JSON with all the nodes? Tried with a standard SDXL LoRA; didn't work. Could you post a screenshot of your ComfyUI workflow?

Not the upscale — then just basic-scale it to fit the dimensions of the final image.

Parameters not found in the original repository: upscale_by, the number to multiply the width and height of the image by.

Couldn't make it work for the SDXL Base+Refiner flow. ComfyUI won't take as much time to set up as you might expect.

Created by Mad4BBQ. What this workflow does: an extremely easy-to-use upscaler/detailer that uses lightning-fast LCM and produces highly detailed results that remain faithful to the original image.

What you need is to either copy the model files there, or use a symbolic link (symlink) in Windows to basically shortcut the folder and link it to your other SD directories — mainly the Vladmandic, EasyDiffusion, or Automatic1111 directory.

10.3 GB VRAM via OneTrainer — both U-NET and Text Encoder 1 are trained — compared a 14 GB config vs a slower 10.3 GB config.

Put upscale models in ComfyUI_windows_portable\ComfyUI\models\upscale_models. Step 3: download Sytan's SDXL workflow.

I have good results with SDXL models, the SDXL refiner, and most 4x upscalers.

Created by Matt Weaver: simple image generation, then repeated 1.5x upscaling. I just can't get it to work in Comfy, even though I imported the workflow from the author. It works well with a standard SD1.5 model, but here it's not working.

Hi, I tried running your workflow, but the process stopped when it got to the ControlNet loader node. I use MTB nodes' Restore Face with the 4x-UltraSharp upscale model.

(Do not use SD 1.5 models.) You can't upscale an 832x1216 image to 1080x2800 without seriously stretching it.
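The upscale_by parameter described above just multiplies both dimensions. A small sketch (hypothetical helper, not the node's code) that also snaps the result to a multiple of 8, since SD/SDXL latents work on an 8-pixel grid:

```python
def upscaled_size(width, height, upscale_by, snap=8):
    """Scale both dimensions by `upscale_by`, rounding each to a
    multiple of `snap` (8 px = one latent cell in SD/SDXL VAEs)."""
    w = round(width * upscale_by / snap) * snap
    h = round(height * upscale_by / snap) * snap
    return w, h

print(upscaled_size(832, 1216, 1.5))  # (1248, 1824)
```

This also shows why 832x1216 to 1080x2800 is a bad target: the aspect ratio changes from roughly 0.68 to roughly 0.39, so no single upscale_by value can reach it without stretching.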
If you don't want that, then I will send you two workflows — one with upscale and the other with no upscale.

I don't get good results with the upscalers either when using SD1.5. This approach is only slightly slower than just SDXL (Refiner -> CCXL), but faster than SDXL (Refiner -> Base -> Refiner, or Base -> Refiner), and it gives me a massive improvement in scene setup, character-to-scene placement, and scale, while not losing out on final detail.

Text-to-image generation: convert your ideas into visuals. Use SD1.5 for the diffusion after scaling. Your math doesn't work.

SD1.5 models for archviz have better quality than SDXL. For SD1.5 there is ControlNet inpaint, but so far nothing for SDXL. We recommend using a mix between SD1.5 and SDXL. SDXL 1.0 with Automatic1111 and the refiner extension.

Took forever, and I might have made some simple misstep somewhere, like not unchecking the 'nightmare fuel' checkbox.

Simple SDXL Template: add a default image in each of the Load nodes.

ComfyUI — SDXL Advanced — Daemon + Meta.

For example, 896x1152 or 1536x640 are good resolutions. What is the recommended tile size for upscaling a 768x768 image by 2x?

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like a face in the bottom right instead of a teddy bear.

Join me as we embark on a journey to master the art. Load LoRA.

Hence, it appears necessary to apply FaceDetailer. I am looking for good upscaler models to be used for SDXL in ComfyUI. A few are obvious, but I'll list them anyway. I solved that by using only 1 step and adding multiple iterative upscale nodes.
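The resolutions mentioned above (896x1152, 1536x640, and so on) all share SDXL's roughly one-megapixel training budget. A quick check — the bucket list itself is the commonly circulated one, included here as an assumption:

```python
# Commonly used SDXL aspect-ratio buckets: roughly the same pixel count
# as 1024x1024, with both sides divisible by 64.
SDXL_RESOLUTIONS = [
    (1024, 1024), (896, 1152), (1152, 896), (832, 1216),
    (1216, 832), (768, 1344), (1344, 768), (640, 1536), (1536, 640),
]

BUDGET = 1024 * 1024
for w, h in SDXL_RESOLUTIONS:
    assert w % 64 == 0 and h % 64 == 0
    # each bucket stays within ~7% of the 1024x1024 pixel budget
    assert abs(w * h - BUDGET) / BUDGET < 0.07
print("all buckets OK")
```

Picking any pair from such a list keeps you inside the resolution space the model was trained on, which avoids the stretched or duplicated anatomy you get at off-budget sizes.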
ComfyUI is a node-based graphical user interface that allows you to visually construct image-generation processes by connecting modules that represent different workflow steps.

It has 5 parameters which allow you to easily change the prompt and experiment; toggle whether the seed should be included in the file name; upscale to 2x and 4x in multiple steps.

Custom nodes and workflows for SDXL in ComfyUI. We use SD1.5 and SDXL for the diffusion, but you are free to use whichever model you like.

I have heard the large ones (typically 5 to 6 GB each) should work, but is there a source with a more reasonable file size? The node can be found in "Add Node -> latent -> NNLatentUpscale".

Create and upscale images to 7168x9216 (or other sizes) using SDXL and Kohya High-Res Fix. DynamoXL-txt2img.

Hello! How are people upscaling SDXL? I'm looking to upscale to 4K and probably 8K even. Fooocus came up with a way that delivers pretty convincing results.

The current checkpoint is only trained for a small number of steps and thus doesn't perform well. Please stay tuned! Thanks to yuanhang for his effort!

Both did not solve this; all is separated now with SD1.5. I went back to a good working flow that I had this morning, and it seems to be working a lot better again — there must have been a wrong connection somewhere. Will now change it to 25/10 instead of 20/20.

Upscale not recognized? How do you fix this problem: the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it.

Hopefully A1111 will get sorted out, because that's the kind of layout I consider 'comfortable', lol.

AI animation using SDXL and Hotshot-XL! Full guide included!
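When a workflow fails because its author renamed an upscale model, the quickest fix is to repoint the loader node at whatever actually exists in models/upscale_models. A hypothetical helper (not part of ComfyUI) that suggests the closest installed filename:

```python
import difflib
import os

def find_upscale_model(requested, model_dir):
    """Return `requested` if it exists in `model_dir`; otherwise the
    closest-named .pth/.safetensors file, or None if nothing is close."""
    available = [f for f in os.listdir(model_dir)
                 if f.endswith((".pth", ".safetensors"))]
    if requested in available:
        return requested
    close = difflib.get_close_matches(requested, available, n=1, cutoff=0.4)
    return close[0] if close else None
```

For example, a workflow asking for "4x_ultrasharp.pth" would be matched to an installed "4x-UltraSharp.pth" — which is usually all the fix amounts to.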
The results speak for themselves: upscale to unlimited resolution using SDXL Tile with no VRAM limitations. Make sure to adjust prompts accordingly. This workflow creates two outputs with two different sets of settings.

Unveil the magic of SDXL 1.0 — weird color artifacts on all images with the ComfyUI default workflow. Contribute to SeargeDP/SeargeSDXL development by creating an account on GitHub.

OpenPose SDXL not working. This ComfyUI nodes setup lets you use Ultimate SD Upscale custom nodes in your ComfyUI AI generation routine.

Simple ComfyUI img2img upscale workflow: if you don't really care how it works and just want a workflow for upscaling, it's nice to have all the input boxes close together so you don't have to scroll around all the time. I tracked it down and downloaded it. The way I've done it is sort of like that, as latent upscale doesn't work brilliantly.

If you've heard of ComfyUI but aren't sure how it works with Stable Diffusion, especially SDXL workflows, this guide will help you get started. Updated: Dec 12, 2024.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger. Actually, Ultimate SD tiled upscale did a lot of heavy lifting on some of these images, such as nos. 4, 7, 8 & 10.

Depending on the workflow, swapping it in may or may not work if there are other nodes expecting an SDXL model. It is based on a workflow for SDXL 0.9 from https://civitai.com.

I am aware that the optimal resolution is 1024x1024, but whenever I try that, it seems to either freeze or take an inappropriate amount of time. CLIP Text Encode (Prompt). CLIP Vision Encode.
Explains the higher step count. SD1.5 has its own negative and positive CLIP that go to the pipe, but it still won't upscale the face with SD1.5.

Tutorial 7 - LoRA usage. I switched to ComfyUI not too long ago, but am falling more and more in love with it.

Download t5-v1_1-xxl-encoder-gguf and place the model files in the comfyui/models/clip directory.

It's working well with a standard SD 1.5 model. Finally made a workflow for ComfyUI to do img2img with SDXL (workflow included).

SDXL 1.0 with ComfyUI's Ultimate SD Upscale custom node in this illuminating tutorial. Loader SDXL. I also documented on my git that the hand fixing is not always working, especially when the picture is too zoomed out. It's not practical to use in your workflow for every generation.

A pixel upscale using a model like UltraSharp is a bit better — and slower — but it'll still be fake detail when examined closely. The best method, as said below, is to upscale the image with a model (then downscale to the desired size if necessary, because most upscalers do 4x and that's often too big to process), then send it back through VAE encode and sample it again.

Tutorial 6 - upscaling. I'm new to ComfyUI and struggling to get an upscale working well.

Merge two images together with this ComfyUI workflow.

A starting point to generate SDXL images at a resolution of 1024x1024 with txt2img, using the SDXL base model and the SDXL refiner.

In the SD Forge implementation, there is a stop_at parameter that determines when layer diffusion should stop in the denoising process.
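The "upscale with a fixed-factor model, then downscale" advice above is easy to put in numbers. A sketch (hypothetical helper, mirroring what an image-resize step after the model upscale would do):

```python
def model_upscale_then_resize(width, height, model_scale=4, target_scale=2):
    """Run a fixed-factor GAN upscaler (e.g. a 4x model), then resize
    the result down so the net effect is `target_scale`x."""
    up_w, up_h = width * model_scale, height * model_scale  # after the 4x model
    factor = target_scale / model_scale                     # e.g. 2 / 4 = 0.5
    return int(up_w * factor), int(up_h * factor)

print(model_upscale_then_resize(1024, 1024))  # (2048, 2048)
```

Downscaling a 4x result to an effective 2x keeps more of the model's synthesized detail than running at 2x directly, and the smaller final image is then cheap to VAE-encode and resample.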
This is the SDXL (Refiner -> CCXL) -> SD 1.5 approach (instead of using the VAE that's embedded in SDXL 1.0). Examples below are accompanied by a tutorial in my YouTube video.

I have been using 4x-UltraSharp for as long as I can remember, but I'm just wondering what everyone else is using, and for which use case? I tried searching the subreddit, but the other posts are from earlier this year or 2022, so I am looking for updated information.

However, the SDXL refiner obviously doesn't work with SD1.5. BrushNet_SDXL_upscale.

You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use an upscale node with 0.5 scale). Above that I get double mouths/noses. If you're really married to the tech-first approach, all the more reason.

SD1.5 "upscaling with model" and then denoising: this could be either because there's not enough precision to represent the picture, or because your video card does not support the half type. The default layout should work fine with SD 1.0 + Refiner.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0.
Dreamshaper is amazing, but the SDXL version of it is way behind, because there's just not as much to work with yet, plus the time it's going to take to train all the newer stuff. I could not find an example of how to use stable-diffusion-x4-upscaler.

Created by kodemon. What this workflow does: it aims to provide upscaling and face restoration with sharp results.

If you want to specify an exact width and height, use the "No Upscale" version of the node and perform the upscaling separately.

If it's fairly recent, it should 'just work', but it's always possible the download broke due to changes in ComfyUI, etc. It also has full inpainting support to make custom changes to your generations.

I can regenerate the image and use latent upscaling if that's the best way; I'm struggling to find it. It generates images without consistency because you are not connecting the nodes properly (especially with SDXL, which can work in plenty of aspect ratios). Tune this to get more realistic skin and faces during the upscale.

SDXL-Turbo Animation | Workflow and tutorial in the comments.

Multiple passes with optional upscales — 1st: sampling with Detail Daemon + metadata processing; 2nd: upscale with an upscale model; 3rd: Tiled Diffusion.

Learn about the UpscaleModelLoader node in ComfyUI, which is designed to load upscale models from specified paths. Hotshot-XL is a motion module used with SDXL that can make amazing animations.

Wildcard support. Now you can use the model in ComfyUI too! Workflow with an existing SDXL checkpoint patched on the fly to become an inpaint model.

You'll need to post either a picture of your workflow or a JSON (uploading to pastebin works fine) to really get some help. I have no clue what is going on; I don't want to use SDXL because it's not great with details, unlike some trained SD1.5 models.
Ultimate SD Upscaler — upscaling, just like the "EXTRA" tab in A1111 / Forge. Models go in \ComfyUI\models\upscale_models.

Hi! I have a very simple SDXL Lightning workflow with an OpenPose ControlNet, and the OpenPose doesn't seem to do anything.

Edit: I also wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Make sure the resolutions you're working at are the ones you want; trying to make 512x512 images with SDXL won't work well.

This is hard/risky to implement directly in ComfyUI, as it requires manually loading a model that has every change except the layer-diffusion change.

I'm having some issues with (as the title says) the HighRes-Fix Script. Moreover, it matters which sampler you use.

This node is meant to be used in a workflow where the initial image is generated at a lower resolution, the latent is upscaled, and the upscaled latent is fed onward. I have been using ComfyUI for quite a while now, and I've got some pretty decent workflows for SD1.5.

You guys have been very supportive, so I'm posting here first. SDXL to FLUX CN + Upscaler (ControlNet, Wildcards, LoRAs, Ultimate SD Upscaler); works with SDXL / PonyXL / SD1.5.

SDXL's refiner and HiResFix are just img2img at their core — so you can get this same result by taking the output from SDXL and running it through img2img. SD1.5 is trained on 512x512; a lot of models are OK if you stretch that to 768x512, but any bigger and you end up with results like this.

SDXL 1.0 Alpha + SDXL Refiner 1.0 Alpha. This workflow creates two outputs with two different sets of settings.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.
Searge-SDXL: EVOLVED v4. At the moment I generate my image with a detail LoRA at 512 or 768 to avoid weird generations; I then latent-upscale by 2 with nearest and run that with a moderate denoise. For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!).

I was just looking for an inpainting setup for SDXL in ComfyUI. Details about most of the parameters can be found here. Go to this link and download the JSON file by clicking the button labeled Download.

SDXL 1.0 and SD 1.5. In any event, 6 GB VRAM is pretty scrimpy for SDXL.

Comes with an Add SDXL Target Res JK🐉 node to fix SDXL Text Encode target resolution not working. Thanks. So not true barebones TurboXL, then.

Link: bit.ly/3r8AeQM. Huggingface upscaling model: https://huggingface.co/… Make sure to adjust prompts accordingly.

Contribute to kijai/ComfyUI-SUPIR development by creating an account on GitHub. I upscaled it to a resolution of 10240x6144 px for us to examine the results.

Add Image | Latent Crop by Mask, Resize, Crop by Mask and Resize, and Stitch nodes. SD1.5 models, sdxlfacedetail workflow.

Hi! I'm having problems with loading upscale models. UltraBasic txt2img SDXL.

Hello! I'm using SDXL base 1.0, and then after upscale and face-fix you'll be surprised how much it changes. Yeah, the latest one. I don't do much with SDXL, so I'm just guessing about that.

I get an empty list. EDIT: nvm — I deleted ComfyUI Manager and did a manual git pull; it's working. The result should ideally be in the resolution space of SDXL (1024x1024).
I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale). Mind the spaghetti — it doesn't bother me (it just feels like working with a wiring harness), and the workflow changes too often to organize it.

Then you send the result to img2img at 1.5x original size, with minimal changes to image content — denoising around 0.2 and resampling faces at 0.5. Now I use it only with SDXL (bigger tiles, 1024x1024), and I do it multiple times with decreasing denoise and CFG. Third pass: further upscaling.

Why has no one mentioned this?? Your math doesn't work. However, it affects the quality, not the Efficient Loader. In general, most work OK. SD1.5 models in ComfyUI are 512x768, and as such too small a resolution for my uses.

Please do send which revision helped solve the issue. Release checkpoint (SDXL). Works with SDXL and SDXL Turbo, as well as earlier versions like SD1.5. Download clip_l.safetensors.

ComfyUI updated and the memory issues plaguing me are gone. I am trying out using SDXL in ComfyUI. Nodes that can load & cache Checkpoint, VAE, & LoRA type models.

ComfyUI — SDXL Advanced — Daemon + Meta. Image sizes of 768x768 and 512x512 are also supported, but the results aren't as good — which is why it looks blurry and crappy.

Just checking — I saw the problem with the tiled sampler issue and am converging on the same.
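The multi-pass habit described above — repeating the tiled upscale with decreasing denoise and CFG — can be written down as a schedule. A hypothetical sketch, with the decay values chosen only for illustration:

```python
def upscale_schedule(start_denoise=0.5, start_cfg=7.0, passes=3, decay=0.6):
    """Produce (denoise, cfg) pairs for successive upscale passes:
    each pass lowers denoise multiplicatively and steps CFG down,
    so later passes refine detail instead of repainting the image."""
    steps, d, c = [], start_denoise, start_cfg
    for _ in range(passes):
        steps.append((round(d, 3), round(c, 1)))
        d *= decay
        c = max(c - 1.0, 3.0)
    return steps

print(upscale_schedule())  # [(0.5, 7.0), (0.3, 6.0), (0.18, 5.0)]
```

Each pair would be fed to the KSampler of one upscale pass; the exact starting values and decay are a matter of taste and model, not fixed rules.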
Haven't gotten it working yet. SDXL Ultimate Workflow is the best and most complete single workflow that exists for SDXL 1.0.

SUPIR to use FLUX instead of SDXL models? (#159, opened Sep 2, 2024.) I'm working on a basic SDXL workflow. It works even if your base model is SDXL or a fine-tuned SDXL; .pth and .safetensors upscale models both load.

Every time I try to use SDXL 1.0 in the img2img tab, it gives a NansException: "A tensor with all NaNs was produced in Unet."

You can't upscale an 832x1216 image to 1080x2800 without seriously stretching and distorting the image.

I have a much lighter assembly, without detailers, but it gives an awesome result. It seems to produce faces that don't blend well with the rest of the image when used after combining SDXL and SD1.5. If you go above or below that, we're not actually upscaling using HiResFix; we are just using it to add more detail.

I try to use ComfyUI to upscale (using SDXL 1.0). I wanted a flexible way to get good inpaint results with any SDXL model. I've already tried the stable-diffusion-x4-upscaler.

Most SDXL checkpoints work best with an image size of 1024x1024. I compared with SD Ultimate Upscale and StableSR — not perfect, but the best one I found. CodeFormer does work, but it's not the original model, so it'll be less good.

Conditioning Average. Run the KSampler, then feed that into FaceDetailer. Each of the ones below is a hit or miss in any specific situation, but one of them should work in any one case.

(Workflow included.) I tested with Ultimate SD Upscale and ImpactPack's FaceDetailer nodes. Contribute to runtime44/comfyui_upscale_workflow development by creating an account on GitHub.

Just a simple upscale using Kohya Deep Shrink. Additionally, I need to incorporate FaceDetailer into the process.
For example, a chain like ImageUpscaleWithModel -> ImageScale -> … works.

Thanks for the tips on Comfy! I'm enjoying it a lot so far. VAE model: I just use the normal SDXL VAE, or whatever is baked into the checkpoint models.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale, and an 8x upscaler for a 4x upscale. When you go high, the controls are relaxed and the denoising is increased intentionally.

Which are awesome for a 1-second generation, but they are not usable in my project because of the disfigured, deformed faces.

Appreciate it! If not, install either ComfyUI Manager or ComfyUI Custom Scripts by pythongosssss.

As you can see, I defined the upscale_by value to be 1.X. Upscale smaller images to at least 1024x1024 before you put them in to be inpainted. Img2Img ComfyUI workflow.

The upscale model loader throws an UnsupportedModel exception. It does not work as a final step, however. Then I VAE-encode back to a latent and pass that through the base/refiner again. Work on your prompting.

One guess is that the workflow is looking for the Control-LoRAs models in the cached directory (which is my directory on my computer). In the background, what this param does is unapply the LoRA and the c_concat cond after a certain step threshold.

Here is my current hacky way of getting a latent-type upscale, but it is slow. OK, so your checkpoint and VAE folders are probably empty in the main ComfyUI portable folder.
Every time I generate an image, it takes up more and more RAM (while GPU RAM utilization remains constant).

Honestly, you can probably just swap out the model and put in the turbo scheduler; I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and tbh it doesn't always help).

AP Workflow 6.0 for ComfyUI — now with support for SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and Control-LoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc.

I did some experiments and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself.
It doesn't turn out well with my hands, unluckily. Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

Now you can full fine-tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB of VRAM. (Workflow included.) I have ComfyUI Manager; it's just not working when I try to install missing models.

The "Upscale Image By" and "Upscale Latent By" nodes are great for this. Images are too blurry and lack details; it's like upscaling any regular image with traditional methods.

Improving your prompting not only gets you better results with less GPU time; you'll also find your ability to form concepts in your mind, and simply to think, improves. If you're really married to the tech-first approach, all the more reason. Indeed, SDXL is better, but it's not yet mature, as models are just appearing for it, and the same goes for LoRAs.

You can upscale your image — generated by the SDXL Base+Refiner models, the Base/Fine-Tuned SDXL model, or the ReVision model — with one or two upscalers in sequence.

512x512 from the Civitai on-site generator, upscaled 8x with added detail. Fooocus is also one of the easiest Stable Diffusion interfaces for starting to explore Stable Diffusion, and SDXL specifically.

While the preview is always shown for the KSampler (Efficient) node, these other nodes start each run not showing a preview.

Variations on outputs: not satisfied with the first image? The bot can produce multiple variations, giving you the freedom to choose the one that fits best.

OK, solved it. What am I doing wrong? 4K image with workflow: https://bit.ly/…
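The difference between the "Upscale Image By" and "Upscale Latent By" nodes mentioned above comes down to which grid you scale: pixels, or the 8x-smaller latent. A small sketch of the bookkeeping (hypothetical helper, not ComfyUI's code):

```python
def latent_size(width, height, factor=1.0):
    """SD/SDXL latents are 1/8 of the pixel resolution, so upscaling a
    latent by `factor` matches upscaling the decoded image by `factor` —
    just on an 8x-smaller grid, so new detail must be sampled back in."""
    return int(width * factor) // 8, int(height * factor) // 8

print(latent_size(1024, 1024))       # (128, 128)
print(latent_size(1024, 1024, 2.0))  # (256, 256)
```

This is why a latent upscale always needs another sampling pass afterward: the interpolated latent contains no real high-frequency information until the model re-denoises it.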
What's the best upscale model for SD 1.5?

Multiple passes with optional upscales: 1st, sampling with Detail Daemon + metadata processing; 2nd, upscale with an upscale model; 3rd, Tiled Diffusion.

There seem to be way more SDXL variants now, and although many if not all seem to work with A1111, most do not work with ComfyUI. Edit: you could try the workflow to see it for yourself.

Although we suggest keeping this one to get the best results, you can use any SDXL LoRA.

Custom nodes extension for ComfyUI, including a workflow to use SDXL 1.0.

Anyone figured out how to get a good 2x latent upscale working with SDXL? I just get weird artifacts in the image when I try it in ComfyUI.

Update: the img2mesh MV upscale method achieves better results: SD1.5 img2img + SDXL Refine + Ultimate Upscale. This may not be perfect, as I am a ComfyUI newbie, and I spent way too many hours making the lines look nice. ControlNet: always wanted to integrate one myself.

5520x4296 ComfyUI workflows for upscaling. For SD1.5 I upscaled the image 1.5 times so it's not too large, and left all other options at their default values.

You should try to click on each one of those model names in the ControlNet stacker node and choose the path.

Add Crop and Stitch operation for the Image Gen and Inpaint group nodes.

Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8x.

Install an upscaling node to create 4K images. Works well to generate a 6MP image in SDXL on 8 GB VRAM. The quality of the output is much better.

My modest contribution (the ComfyUI workflow I use): SDXL + FaceDetail + 2x SD1.5. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space.

I had the same problem, and those steps tank performance as well. I gave up on latent upscale.
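One way to think about the multi-pass idea is to split one big upscale into equal smaller steps, so each sampling pass only has to repair a modest jump. A small sketch of that arithmetic (hypothetical helper names, pure illustration):

```python
def pass_scales(total_scale, passes):
    """Split one big upscale into equal per-pass factors, e.g. for a
    multi-pass workflow that upscales a little on each sampling pass."""
    step = total_scale ** (1 / passes)
    return [round(step, 3)] * passes

def resolutions(start, total_scale, passes):
    """Resolution after each pass, snapped to multiples of 8 as latent
    sizes require."""
    out, size = [], start
    step = total_scale ** (1 / passes)
    for _ in range(passes):
        size = int(round(size * step / 8)) * 8
        out.append(size)
    return out

print(pass_scales(4, 2))          # two 2.0x passes instead of one 4x jump
print(resolutions(1024, 4, 2))    # [2048, 4096]
```

Two 2x passes with moderate denoise usually produce fewer artifacts than a single 4x latent jump, which matches the complaints above about 2x+ latent upscales in SDXL.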
Solution: click the node that calls the upscale model and pick one.

I'll create images at 1024 size and then will want to upscale them. For SDXL models (specifically Pony XL V6), the HighRes-Fix Script constantly distorts the image, even with the KSampler's denoise at 0.1. But I probably wouldn't upscale by 4x at all if fidelity is important.

Simply save, and then drag and drop the relevant image into your workflow. Ah, thanks for clarifying.

* Use Refiner. * Still not sure about all the values, but from here it should be tweakable.

Took hours of Googling to finally find a ComfyUI highres scale that works well. That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better. Based on the workflow from com/user/fitCorder, with some small changes, mostly to pull it all together.

Here's why you would want to use ComfyUI for SDXL: imagine that you follow a similar process for all your images: first, you do text-to-image. Here are a few things I've learned along the way, some through experimentation and others through tips found around the web.

I'm always amazed that people tell you the exact type of CPU they have (which typically matters very little for SD), but not the type of GPU (which is the heart and soul of SD).

You can easily utilize the schemes below for your custom setups. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI.

Since I started using ComfyUI, I have downloaded tons of workflows, but only around 10% of them work.

(25.5 GB RAM and 16 GB GPU RAM.) However, I still run out of memory when generating images.
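For reference, a minimal model-based upscale graph in ComfyUI's API ("prompt") format might look like the sketch below. The node class names and input wiring are assumptions based on the stock nodes, and the model filename is a placeholder -- verify against a workflow you export yourself via "Save (API Format)":

```python
import json

# Sketch of a minimal model-based upscale graph in ComfyUI's API format.
# Each key is a node id; ["2", 0] means "output 0 of node 2".
prompt = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "RealESRGAN_x4plus.pth"}},  # placeholder name
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}
payload = json.dumps({"prompt": prompt})  # body you would POST to a local /prompt endpoint
print(len(prompt))
```

The point is that a "just upscale it" workflow is only four nodes; everything beyond that (samplers, tiling, refiners) is optional refinement.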
Flux.1 Fill Workflow Step-by-Step Guide: Flux Fill is a powerful model specifically designed for image repair (inpainting) and image extension (outpainting). Conditioning (Combine). Conditioning (Concat). Flux High Res Fix.

I'm running ComfyUI + SDXL on Colab Pro.

Sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated at 832x1216.

CLIPTextEncode is not a custom node; it's one of the defaults, so you don't need to download anything. Thank you, community!

Nice, some of the refined images have a bit too much noise (like the background behind the orc), but the details are really good. Maybe it needs to be trained specifically for the turbo model.

This means that if you learn how ComfyUI works, you will end up learning how Stable Diffusion works.

SDXL most definitely doesn't work with the old ControlNet. The latent upscaler is okay-ish for XL, but in conjunction with Perlin noise injection the artifacts coming from upscaling get reinforced so much that the second sampler needs a lot of denoise. Ultimate SD Upscale is the best for me; you can use it with ControlNet Tile in SD 1.5.

-> You might have to resize your input picture first (upscale?). * You should use CLIPTextEncodeSDXL for your prompts.

It's doable, but if you are new and just want to play, it's difficult. Read the description of the checkpoint.

I wonder if I have been doing it wrong: right now, when I do latent upscaling with SDXL, I add an Upscale Latent node after the refiner's KSampler node and pass the result on. I try to use this model during upscale, or Photon v1. CLIP Set Last Layer.

The preview on the custom nodes I named does not work at each run.
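Going from 832x1216 to 1080x2800 is not just an upscale: the aspect ratios differ, so after scaling to cover the target you must crop (or outpaint) the excess. A quick sketch of that arithmetic (hypothetical helper, illustration only):

```python
def cover_and_crop(src_w, src_h, dst_w, dst_h):
    """Smallest scale factor that covers the target resolution, plus how
    many pixels must then be cropped (or outpainted) on each axis."""
    scale = max(dst_w / src_w, dst_h / src_h)
    scaled_w, scaled_h = round(src_w * scale), round(src_h * scale)
    return scale, (scaled_w - dst_w, scaled_h - dst_h)

scale, (crop_x, crop_y) = cover_and_crop(832, 1216, 1080, 2800)
print(round(scale, 3), crop_x, crop_y)   # ~2.303x scale, crop 836px of width
```

Here the height drives the scale (2800/1216 ≈ 2.3x), leaving roughly 836 horizontal pixels to crop away; the alternative is to scale less and outpaint the missing height instead.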
Personally, in my opinion, your setup is heavily overloaded with stages that are incomprehensible to me.

ComfyUI SDXL upscaler / hires fix: I VAE decode to an image and use UltraSharp-4x to pixel upscale.

Regarding the upscaling: in the A1111 implementation, tiled diffusion (Mixture of Diffusers) in combination with ControlNet Tile is the best-performing high-res upscaler. But fortunately, yuanhang volunteered to help train a better version.

It seems to be impossible to find a working img2img workspace for ComfyUI.

Turbo-SDXL 1-step results + 1-step hires-fix upscaler.

(Cache settings are found in the config file 'node_settings.json'.) Able to apply LoRA and ControlNet stacks via their lora_stack and cnet_stack inputs. But where do I put it in ComfyUI? I try to upscale SDXL output images and want to use the stable-diffusion-x4-upscaler.

AP Workflow v3 works with SD1.5 and SDXL, but I still think there is more that can be done in terms of detail. It's giving 'NoneType' object has no attribute 'copy' errors. I did not have the tiling safetensor.

SD1.5 was trained on low resolutions, so some tools like ResAdapter or Kohya Deep Shrink may be necessary.

Upscaling (how to upscale your images with ComfyUI). SDXL + ComfyUI + Luma.

Make sure you don't mix and match latent images from different models, or else the results will look deep-fried.

It's mostly an outcome of personal wants and an attempt to learn ComfyUI. That's practically instant but doesn't do much either.

It is not AnimateDiff but a different structure entirely; however, Kosinkadink, who makes the AnimateDiff ComfyUI nodes, got it working, and I worked with one of the creators to figure out the right settings to get good outputs.

Real-time prompting with SDXL Turbo and ComfyUI running locally.

I know this is an old thread (in the world of AI), but I thought I would add my thoughts here, since I have been working with Ultimate Upscale a lot lately, with very good results. After borrowing many ideas, and learning ComfyUI.
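Ultimate SD Upscale and the tiled-diffusion approaches all rest on the same idea: diffuse the image tile by tile, giving each tile some padding of surrounding context so the seams blend. A rough sketch of how such a tile grid can be computed (an illustration of the idea, not the node's actual code):

```python
def tile_boxes(width, height, tile=1024, pad=32):
    """Split an image into tile-sized regions, each expanded by `pad`
    pixels of context and clamped to the image bounds."""
    boxes = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            x0, y0 = max(0, x - pad), max(0, y - pad)
            x1 = min(width, x + tile + pad)
            y1 = min(height, y + tile + pad)
            boxes.append((x0, y0, x1, y1))
    return boxes

boxes = tile_boxes(2048, 2048)          # 2x2 grid of 1024px tiles
print(len(boxes), boxes[0])             # 4 tiles, first is (0, 0, 1056, 1056)
```

For SDXL you generally want 1024px tiles (its native resolution); the 512px defaults inherited from SD1.5 are one common reason Ultimate SD Upscale "messes the image up" on XL models.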
A 2x ComfyUI upscale workflow would just use a Load Upscale Model node: img2img upscale at 1.5x-2x with either SDXL Turbo or SD1.5. This is done after the refined image is upscaled and encoded into a latent.

SDXL 1.0 with both the base and refiner checkpoints: I think his idea was to implement hires fix using the SDXL Base model.

I then downscale it, as 4x is a little big.

3-Pass workflow: SD txt2img.
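The "downscale it, as 4x is a little big" pattern is simple arithmetic: run the fixed-factor model, then resize down to the scale you actually wanted. A small sketch (hypothetical helper):

```python
def upscale_then_downscale(w, h, model_factor=4, target_factor=2):
    """Run a fixed-factor upscale model (e.g. a 4x ESRGAN-style model),
    then resize down to the factor you actually wanted."""
    big_w, big_h = w * model_factor, h * model_factor
    shrink = target_factor / model_factor
    return (round(big_w * shrink), round(big_h * shrink))

print(upscale_then_downscale(1024, 1024))   # (2048, 2048): net 2x
```

Downscaling a 4x result to 2x often looks cleaner than using a native 2x model, since the shrink averages away some of the upscaler's artifacts.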