ComfyUI ControlNet Preprocessor Examples (a Reddit digest)

May 12, 2025 · There are many versions of ControlNet models for ComfyUI now, so the exact flow may differ; here the current ControlNet V1.1 models are used as the example, and the specific workflows will be covered in later related tutorials. The basic process involves supplying a reference image, using a preprocessor to convert the reference image into a usable "guide image", and then using the matching ControlNet model to guide the image generation alongside your prompt and generation model. This is the purpose of a preprocessor: it converts our reference image (such as a photo, line art, doodle, etc.) into a structured feature map so that the ControlNet model can understand and guide the generated result.

Choose a weight between 0.4 and 0.5. Make sure that you save your workflow by pressing Save in the main menu if you want to use it again. ComfyUI embeds the workflow in the images it saves, so if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you.

LATER EDIT: I noticed this myself when I wanted to use ControlNet for scribbling.

Done in ComfyUI with the lineart preprocessor, a ControlNet model, and DreamShaper 7.

I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully; it doesn't matter what tile size or image resolution I throw at it. But in ComfyUI I get this error. Is there something similar I could use? Thank you.

With ControlNet I can input an image and begin working on it. I'm trying to implement a reference-only "ControlNet preprocessor".

It's a preprocessor for a ControlNet model, like leres, midas, zoe, or marigold; I think code may be needed to support it.

Additional question: I've installed ComfyUI Manager, through which I installed ComfyUI's ControlNet Auxiliary Preprocessors. All the workflows for Comfy I've found start with a depth map that has already been generated, and its creation is not included in the workflow. What do I need to install? (I'm migrating from A1111, so ComfyUI is a bit complex.) I also get these errors when I load a workflow with ControlNet:

F:\##_ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly

I'm trying to use an OpenPose ControlNet with an OpenPose skeleton image, without preprocessing.

Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor.

The problems with the hands adetailer are that if you use a masked-only inpaint, the model lacks context for the rest of the body. So you'll end up with stuff like backwards hands, too big/small, and other kinds of bad positioning.

Normal map ControlNet preprocessor (e.g. control_normal-fp16). It is used with "normal" models. Normal maps are good for intricate details and outlines. Example normal map detectmap with the default settings.
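To make the preprocess-then-guide flow described at the top concrete, here is a minimal sketch done outside ComfyUI with the Hugging Face diffusers and controlnet_aux packages. The model IDs, file names, and the conditioning scale are illustrative assumptions, not taken from any of the posts above:

```python
# Sketch: reference image -> preprocessor -> guide image -> ControlNet-guided generation.
import torch
from controlnet_aux import CannyDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

reference = load_image("reference.png")     # the reference image
guide = CannyDetector()(reference)          # preprocessor output: the "guide image"

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The guide image steers generation alongside the prompt; a conditioning
# scale around 0.4-0.5 corresponds to the weight advice quoted above.
image = pipe(
    "a cottage in a flower field at sunset",
    image=guide,
    controlnet_conditioning_scale=0.45,
).images[0]
image.save("result.png")
```

ComfyUI wires the same three stages together as nodes (Load Image, a preprocessor node, Apply ControlNet, KSampler); the script is just the same pipeline written linearly.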
When trying to install the ControlNet Auxiliary Preprocessors in the latest version of ComfyUI, I get a note telling me to refrain from using it alongside this installation. I think the old repo isn't good enough to maintain. There is now an install.bat you can run to install to portable if detected. Install a Python package manager, for example micromamba (follow the installation instructions on the website).

For those who have problems with the ControlNet preprocessors and have been living with results like the image for some time (like me), check that the ComfyUI/custom_nodes directory doesn't have two similar "comfyui_controlnet_aux" folders. If so, rename the first one (adding a letter, for example) and restart ComfyUI.

I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (as I thought it was only needed for posing), and I was having trouble loading the example workflows.

EDIT: Nevermind, the update of the extension didn't actually work, but now it did.

There is one node for a preprocessor and one for loading an image. Select the size you want to resize it to.

Workflows are tough to include in Reddit posts (hence the "Workflow Not Included" flair).

May 12, 2025 · Then, in other ControlNet-related articles on ComfyUI-Wiki, we will specifically explain how to use individual ControlNet models with relevant examples. In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example.

One workflow shows an example of using ControlNet and img2img in a single process: colorizing an old photo. The ControlNet part is the lineart of the old photo, which tells SD the contours it shall draw. The img2img source is the same photo, but colorized manually and simply, which shows SD the colors it should approximately paint.

The ControlNet 1.1 series includes, among others: ControlNet 1.1 Instruct Pix2Pix, ControlNet 1.1 Lineart, ControlNet 1.1 Anime Lineart, ControlNet 1.1 Shuffle, ControlNet 1.1 Inpaint (not very sure about what exactly this one does), and ControlNet 1.1 Tile (Unfinished) (which seems very interesting).

Specifically, the padded image is sent to the ControlNet as pixels via the "image" input, and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

I get a bit better results with xinsir's tile compared to TTPlanet's.

When you click on the radio button for a model type, "inverted" will only appear in the preprocessor popup list for the line-type of models, i.e. Canny, Lineart, MLSD and Scribble. If you click the radio button "all" and then manually select your model from the model popup list, "inverted" will be at the very top of the list of all preprocessors.

Fake scribble ControlNet preprocessor. Fake scribble is just like regular scribble, but ControlNet is used to automatically create the scribble sketch from an uploaded image. Example fake scribble detectmap with the default settings.

Depth ControlNet preprocessor (e.g. control_depth-fp16). It is used with "depth" models. In a depth map (which is the actual name of the kind of detectmap image this preprocessor creates), lighter areas are "closer" and darker areas are "further away". It is also fairly good for positioning things, especially positioning things "near" and "far away". Depth_leres is almost identical to regular "Depth", but with more ability to fine-tune the options. Example depth map detectmap with the default settings.
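For reference, a depth guide image like the one just described can be generated in plain Python with the controlnet_aux package, which wraps the same MiDaS-style estimators the ComfyUI depth preprocessor nodes use. The weight repo and file names here are the usual defaults, stated as assumptions:

```python
# Sketch: produce a depth detectmap (lighter = closer, darker = further away).
from controlnet_aux import MidasDetector
from PIL import Image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
image = Image.open("input.png").convert("RGB")
depth_map = midas(image)          # a PIL image encoding estimated depth
depth_map.save("depth_guide.png")
```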
Not sure why the OpenPose ControlNet model seems to be slightly less temporally consistent than the DensePose one here.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json got prompt…

Make sure you set the resolution to match the ratio of the texture you want to synthesize. I went for half-resolution here, with 1024x512.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

May 12, 2025 · For example, in the image below, we used ComfyUI's Canny preprocessor, which extracts the contour edge features of the image. This is the input image that will be used in this example.

Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE.

If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.

I found that one of the better combinations is to pick the "canny" preprocessor and use Adapter XL Sketch, or the "t2ia_sketch_pidi" preprocessor and use a ControlLite model by kohya-ss in its "sdxl fake scribble anime" edition. Only select combinations work moderately alright.

ComfyUI is hard.

While Depth Anything does provide a new ControlNet model that's supposedly better trained for it, the project itself is for a depth estimation model.

Hey there, I'm trying to switch from A1111 to ComfyUI, as I am intrigued by the node-based approach. But now I can't find the preprocessors like HED, Canny, etc. in ComfyUI.

edit: nevermind, I think my installation of comfyui_controlnet_aux was somehow botched; I didn't have big parts of the source that I can see in the repo. I don't know why it didn't grab those on the update. I do see it in the other 2 repos though.

For extra_model_paths.yaml.example, I renamed it by removing the .example at the end of the filename, and placed my models path like so: d:/sd/models, replacing the one in the file.

Would you have even the beginning of a clue of why that is?

There's a preprocessor for DWPose in comfyui_controlnet_aux which makes batch-processing via DWPose pretty easy.

Brief Introduction to ControlNet: ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang and Maneesh Agrawala.

Can anyone please tell me if this is possible in ComfyUI at all, and where I can find an example workflow or tutorial? I am about to lose my mind :<

EDIT: I must warn people that some of my settings in several nodes are probably incorrect. Only the layout and connections are, to the best of my knowledge, correct.

I saw a tutorial, a long time ago, about the ControlNet preprocessor « reference only ». Reference Only is a ControlNet preprocessor that does not need any ControlNet model. For those who don't know, it is a technique that works by patching the unet function so it can make two passes during an inference loop: one to write data from the reference image, another one to read it during the normal input-image inference, so the output emulates the reference.
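A rough sketch of that write/read trick, assuming a diffusers-style UNet whose self-attention modules expose a forward we can wrap. This is conceptual pseudocode of the idea just described, not the actual extension's code; the names and the merge strategy are assumptions:

```python
# Conceptual "reference only" patch: a reference pass writes attention features
# into a bank, and the real generation pass reads them back so self-attention
# can also attend over the reference image's tokens.
import torch

feature_bank = []   # written during the reference pass, read during generation
mode = "write"      # flip to "read" before the real generation pass

def patch_attention(attn):
    original_forward = attn.forward

    def forward(hidden_states, **kwargs):
        if mode == "write":
            feature_bank.append(hidden_states.detach())   # store reference features
            return original_forward(hidden_states, **kwargs)
        # read pass: assumes modules run in the same order as during the write pass
        ref = feature_bank.pop(0)
        merged = torch.cat([hidden_states, ref], dim=1)   # own tokens + reference tokens
        return original_forward(hidden_states, encoder_hidden_states=merged)

    attn.forward = forward
```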
This is a rework of comfyui_controlnet_preprocessors, based on the ControlNet auxiliary models by 🤗. YOU NEED TO REMOVE comfyui_controlnet_preprocessors BEFORE USING THIS REPO; THESE TWO CONFLICT WITH EACH OTHER. It is recommended to use version v1.1 of the preprocessors if they have a version option, since results from the v1.1 preprocessors are better than the v1 ones and compatible with both ControlNet 1 and ControlNet 1.1. (Apr 1, 2023 · If a preprocessor node doesn't have a version option, it is unchanged in ControlNet 1.1.) All old workflows can still be used.

Apr 15, 2024 · Rather than remembering all the preprocessor names within ComfyUI ControlNet Aux, this single node contains a long list of preprocessors that you can choose from for your ControlNet. Just drop any image into it.

But it gave better results than I thought.

Using Multiple ControlNets to Emphasize Colors: In the WebUI settings, open the ControlNet options and set 'Multi ControlNet: Max models amount' to 2 or more. Start Stable Diffusion and enable the ControlNet extension. (A diffusers sketch of mixing ControlNets appears below, under "Mixing ControlNets".)

c:\Users\your-username-goes here\AppData\Roaming\krita\pykrita\ai_diffusion\.server\ComfyUI\extra_model_paths.yaml

Pidinet ControlNet preprocessor. Pidinet is similar to hed, but it generates outlines that are more solid and less "fuzzy". The current implementation has far less noise than hed, but far fewer fine details; it does lose fine, intricate detail though. As of 2023-02-26, the Pidinet preprocessor does not have an "official" model that goes with it. Example Pidinet detectmap with the default settings.

- selected the "OpenPose" control type, with the "openpose" preprocessor and "t2i-adapter_xl_openpose" model, "ControlNet is more important" - used this image - received a good OpenPose preprocessing, but this blurry mess for a result. And it's hard to find other people asking this question on here.

You don't need to Down Sample the picture; this is only useful if you want to get more detail at the same size. Unfortunately your examples didn't work.

Is this possible? In A1111 I can set the preprocessor to none, but the ComfyUI ControlNet node does not have any preprocessor input, so I assume it is always preprocessing the image (i.e. trying to extract the pose).

You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others. You can also specifically save the workflow from the floating ComfyUI menu.

I am looking for a way to input an image of a character, and then make it have different poses without having to train a LoRA, using ComfyUI.

/// Does anyone have a clue why I still can't see that preprocessor in the dropdown? I updated it (and ControlNet too).

You might have to use different settings for his ControlNet.

After an entire weekend reviewing the material, I think (I hope!) I got the implementation right. As the title says, at the moment the assembly includes the ControlNet XL OpenPose and FaceDefiner models.

I don't think the generation info in ComfyUI gets saved with the video files. But if you saved one of the stills/frames using the Save Image node, OR EVEN if you saved a generated CN image using Save Image, it would transport it over.

MLSD ControlNet preprocessor (e.g. control_mlsd-fp16). It is used with "mlsd" models. MLSD is good for finding straight lines and edges, and is not very useful for organic shapes or soft smooth curves. This makes it particularly useful for architecture like room interiors and isometric buildings. Example MLSD detectmap with the default settings.
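MLSD itself is a learned line-segment model, but as a rough illustration of what a straight-line detectmap looks like, a classical Hough transform in OpenCV produces a similar white-lines-on-black guide. The thresholds and file names here are arbitrary assumptions:

```python
# Sketch: an MLSD-like straight-line guide image via Canny + probabilistic Hough.
import cv2
import numpy as np

img = cv2.imread("room.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

canvas = np.zeros_like(img)           # black background, as in a detectmap
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(canvas, (x1, y1), (x2, y2), 255, 2)
cv2.imwrite("mlsd_like_guide.png", canvas)
```

Curved or organic shapes mostly vanish under this kind of extraction, which is exactly why MLSD suits interiors and isometric buildings rather than soft subjects.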
First time I used it, it was like an img2img process with the lineart ControlNet model, where I used it as an image template; but it's a lot more fun and flexible using it by itself, without other ControlNet models, as well as less time consuming. It's such a great tool.

Hey everyone! Like many, I like to use ControlNet to condition my inpainting, using different preprocessors and mixing them. However, since a recent ControlNet update, 2 inpaint preprocessors have appeared, and I don't really understand how to use them.

Download and install the latest CUDA (12.x, at this time) from the NVIDIA CUDA Toolkit Archive. The reason we're reinstalling the latest version (12.x) again is because, when we installed 11.8, among other things, the installer updated our global CUDA_PATH environment variable to point to 11.8. Run the WebUI.

You can load this image in ComfyUI to get the full workflow.

There are ControlNet preprocessor depth-map nodes (MiDaS, Zoe, etc.). Hook one up to VAE Decode and Preview Image nodes and you can see/save the depth map as a PNG or whatever.

Speaking of ControlNet, how do you guys get your line drawings? Use Photoshop's Find Edges filter and then clean up by hand with a brush? It seems like you could use Comfy to have ControlNet make the line art, then use ControlNet again to generate the final image from it.

Where can they be loaded?

You can also right-click > Open in MaskEditor and apply a mask on the uploaded original image if it contains multiple people, or elements in the background you do not want.

I did try it, and it did work quite well with ComfyUI's Canny node; however, it's nearly maxing out my 10 GB of VRAM, and speed also took a noticeable hit (went from 2.9 it/s to 1.8 it/s).

Appreciate you just looking into it. I am a fairly recent ComfyUI user.

You input that picture, use the "reference_only" preprocessor on ControlNet, choose "Prompt/ControlNet is more important", and then change the prompt text to describe anything else except the clothes, using maybe a 0.5 denoising value. Certainly easier to achieve this than with the prompt alone.

Backup your workflows and pictures.

Not as simple as dropping a preprocessor into a folder.

Load the noise image into ControlNet.

I have used:
- Checkpoint: ReV Animated v1.2
- Lora: Thicker Lines Anime Style Lora Mix
- ControlNet LineArt
- ControlNet OpenPose
- ControlNet TemporalNet (diffusers)
Custom nodes in ComfyUI:
- ComfyUI Manager

For example, we can use a simple sketch to guide the image generation process, producing images that closely align with our sketch.

I made a composition workflow, mostly to avoid prompt bleed: the subject and background are rendered separately, blended, and then upscaled together.

I also automated the split of the diffusion steps between the Base and the Refiner models. Here is an example of the final image using the OpenPose ControlNet model.
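For what "splitting the diffusion steps between Base and Refiner" can look like outside ComfyUI, diffusers documents an SDXL base/refiner handoff via the denoising_end/denoising_start arguments. A minimal sketch, with the 0.8 split point chosen arbitrarily:

```python
# Sketch: SDXL base handles the first 80% of denoising, refiner the last 20%.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,   # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a knight in ornate armor, dramatic lighting"
latent = base(prompt, num_inference_steps=40,
              denoising_end=0.8, output_type="latent").images
image = refiner(prompt, num_inference_steps=40,
                denoising_start=0.8, image=latent).images[0]
image.save("refined.png")
```

In ComfyUI the same split is expressed with two samplers whose start/end step ranges partition the schedule.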
That node can be obtained by installing Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors custom node.

Does ComfyUI support preprocessing an image? In Automatic1111 you could put in an image and it would preprocess it to a depth/canny/etc. image to be used.

There are quite a few different preprocessors in ComfyUI, which can be further used with the same ControlNet.

Testing ControlNet with a simple input sketch and prompt.

When you generate the image you'd like to upscale, first send it to img2img. In ControlNet, select Tile_Resample as the Preprocessor and Control_V11f1e_sd15_tile as the Model. Set the ControlNet parameters: Weight 0.5, Starting 0.1, Ending 0.8.

Hi guys, do you know where I can find the tile_resample preprocessor for ComfyUI? I've been using it without any problem on A1111, but since I just moved the whole workflow over to ComfyUI, I'm having a hard time making ControlNet tile work the same way it does on A1111.

I don't remember if you have to Add or Multiply it with the latent before putting it into the ControlNet node, though; it's been a few since I messed with Comfy. Ty, I will try this.

Hi all! I recently made the shift to ComfyUI and have been testing a few things. This is what I have so far (using the custom nodes to reduce the visual clutter).

I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111.

In my Canny edge preprocessor, I seem to not be able to go into decimals like you or other people I have seen do. In other words, I can do 1 or 0 and nothing in between. Can I know how you guys get around this? Maybe it's your settings.

Thank you so much! Is there a way to create depth maps from an image inside ComfyUI by using ControlNet, like in AUTO1111? I mean, in AUTO I can use the depth preprocessor, but I can't find anything like that in Comfy.

But I don't see it with the current version of ControlNet for SDXL.

The row label shows which of the 3 types of reference ControlNets was used to generate the image shown in the grid.

Mixing ControlNets. At this point, you can use this file as an input to ControlNet using the steps described in How to Use ControlNet with ComfyUI – Part 1.
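A hedged sketch of mixing ControlNets with diffusers: passing a list of ControlNets creates a multi-ControlNet pipeline, and the Weight/Starting/Ending settings quoted above map onto the conditioning-scale and guidance-window arguments. The model IDs and guide files are illustrative assumptions:

```python
# Sketch: two ControlNets (canny + depth) guiding one generation.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

canny_net = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16)
depth_net = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[canny_net, depth_net],
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "an isometric room interior",
    image=[load_image("canny_guide.png"), load_image("depth_guide.png")],
    controlnet_conditioning_scale=[0.5, 0.5],  # "Weight 0.5" per net
    control_guidance_start=0.1,                # "Starting 0.1"
    control_guidance_end=0.8,                  # "Ending 0.8"
).images[0]
image.save("mixed.png")
```

Limiting control to the 0.1-0.8 window lets the first and last denoising steps run unconstrained, which tends to keep results less rigid than full-schedule control.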
The preprocessor for OpenPose makes images like the one you loaded in your example, but from any image, not just OpenPose lines and dots. You just run the preprocessor and then use that image in a "Load Image" node, and use that in your generation process.

IP-Adapter FaceID, step by step: 1. Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. 2. Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model; upload your desired face image in this ControlNet tab. Here are the ControlNet settings, as an example. 3. Modify your prompt or use a whole new one, and the face will be applied to the new prompt.

Enable ControlNet, set the Preprocessor to "None" and the Model to "lineart_anime".

First I thought it would allow me to add some iterative details to my upscale jobs; for example, if I started with a picture of empty ocean and added a 'sailboat' prompt, tile would give me an armada of little sailboats floating out there.

I was frustrated by the lack of some ControlNet preprocessors that I wanted to use, so I decided to write my own Python script that adds support for more preprocessors. But as it turned out, there are quite a lot of them. I tried to collect all the ones I know in one place. And sometimes something new appears.

Here is the ControlNet write-up, and here is the Update discussion.

I was wondering if anyone has a workflow or some guidance on how to get the color model to function? I am guessing I require a preprocessor if I just load an image into the "Apply ControlNet" node. This works fine as I can use the different preprocessors.

Try and experiment by also using the tile model without the upscaler. I have great luck with generating small (512x640, i.e.), then putting it into img2img with the tile model on and its downsampler set high, and then prompting for more detail of the sort you want to add, while setting the image size incrementally higher.

Segmentation ControlNet preprocessor. Segmentation is used to split the image into "chunks" of more or less related elements ("semantic segmentation"). All fine detail and depth from the original image is lost, but the shapes of each chunk will remain more or less consistent for every image generation. Load your segmentation map as an input for ControlNet; since we already created our own segmentation map, leave the Preprocessor set to None.
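As a hedged illustration of producing such a segmentation guide in plain Python: the sketch below uses torchvision's DeepLabV3 (a general semantic-segmentation model, not the ADE20K-trained one the ControlNet "seg" checkpoints expect) and paints each class with a fixed random palette, so related elements become flat colored chunks:

```python
# Sketch: image -> per-pixel class labels -> flat-colored "chunk" guide image.
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
img = Image.open("input.png").convert("RGB")
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"][0]
classes = logits.argmax(0).byte().numpy()          # one class id per pixel

palette = np.random.default_rng(0).integers(       # fixed, seeded palette
    0, 255, (256, 3), dtype=np.uint8)
Image.fromarray(palette[classes]).save("seg_guide.png")
```

Note the fine detail is gone by construction; only the chunk shapes survive, which is exactly the behavior described above.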
I'm struggling to find a workflow that allows image input into ComfyUI that uses SDXL. Is there something like this for ComfyUI including SDXL? I found one that doesn't use SDXL, but can't find any others. ControlNet can be used with other generation models.

I hope the official one from Stability AI will be more optimised, especially on lower-end hardware.

The Workflow: Pose ControlNet.

When loading the graph, the following node types were not found: CR Batch Process Switch.

Once I applied the Face Keypoints preprocessor and ControlNet after the InstantID node, the results were really good.

When the ControlNet was turned ON, the image used for the ControlNet is shown in the top corner; when the ControlNet was turned OFF, the prompt generates the image shown in the bottom corner.

Hi, I hope I am not bugging you too much by asking you this on here.

I have the "Zoe Depth Map" preprocessor, but not the "Zoe Depth Anything" shown in the screenshot.

Also, if you're using Comfy, add an ImageBlur node between your image and the Apply ControlNet node, and set both the blur radius and sigma to 1.
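As a rough stand-in for what that ImageBlur step does before the tile ControlNet, a one-parameter Gaussian blur in Pillow (PIL's GaussianBlur takes a single radius rather than separate radius and sigma, so this is only an approximation of the node):

```python
# Sketch: softly blur the frame before feeding it to the tile ControlNet.
from PIL import Image, ImageFilter

img = Image.open("frame.png")
blurred = img.filter(ImageFilter.GaussianBlur(radius=1))
blurred.save("tile_guide.png")
```

The slight blur discards pixel-level noise so the tile model reinforces structure and color rather than artifacts.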
