ComfyUI nodes: examples and discussion (collected from Reddit)

I know a bit of Python and understand how the provided example works.

This set of nodes is designed to give some Photoshop-like functionality within ComfyUI.

And the parameter "force_inpaint" is, for example, explained incorrectly.

I love downloading new nodes and trying them out. An example is FaceDetailer / FaceDetailerPipe. It doesn't yet have the capability to transfer style from a single source image, though.

Now, in your 'Save Image' nodes, include %folder.text% in the filename.

Honestly, it wouldn't be a bad idea to have an A1111-like node workflow for easier onboarding. KSampler to VAE Decoder to Image Save.

Efficiency Nodes, Ultimate SD Upscale, ComfyUI roop. Checkpoint: epiCRealismSin with the add_detail and epiCRealismHelper LoRAs, but those are just my preference; any SD1.5 model will do.

RgThree's nodes, and probably some other stuff too. CivitAI is a great place to "shop"!

I ended up building a custom node that is very custom for the exact workflow I was trying to make, but it isn't good for general use.

Two nodes are used to manage the strings: in the input fields you can type the portions of the prompt, and with the sliders you can easily set the relative weights. Lots of pieces to combine with other workflows.

Here's a very interesting node 👍 However, I have three small criticisms to make: you need to run the workflow once to get the node number for which you want information, and then a second time to get the information (or two more times if you make a mistake).

(My Python skills are appalling ;-))

For example, KSamplerAdvanced has inputs like 'steps' and 'end_at_step' that are set using another node's output (via the spaghetti), while 'cfg' or 'noise_seed' are set using input fields.

This is great for prompts, so you don't have to manually change the prompt in every field (for upscalers, for example).

The description of a lot of parameters is "unknown".

You can use any input type for these switches; the important thing is that the input on the switch matches the input required by the subsequent node. (A minimal sketch of such a switch node's source follows below.)

I love your new nodes! Regarding denoise levels: I tried lowering the denoise in the KSampler, and it just gives me a blank area where the inpainting was supposed to happen.

But when using it I find some tasks require a lot of repetitive clicking, mostly moving nodes out of the way. (Stuff that really should be in main rather than a plugin, but eh, shrugs.) I think a sharpen node would also be great to add to a post-processing workflow.

Are you saying that in ComfyUI you do NOT need to state "txwx woman" in the prompt?

I haven't seen a tutorial on this yet.

It's a pain in the ass to be forced to download weird anime checkpoints and a dozen obscure custom nodes, struggle to figure out why a thousand-node spaghetti soup doesn't work, and isolate the tiny section that I want to learn.

Save it as safety_checker.py in the custom nodes directory and it will be in your image postprocessing node list.

Trying to make a node that selects terms for a prompt (similar to Preset Text, but with different terms per node).

It is licensed under the Apache 2.0 license.

Virtuoso Nodes for ComfyUI.
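Since several of the comments above touch on switches and on the widget-versus-connection distinction, here is a minimal sketch of what such a switch node's source can look like. This is not any particular pack's code; the class, category, and input names are illustrative, but the INPUT_TYPES / RETURN_TYPES / FUNCTION structure is the standard ComfyUI node anatomy.

```python
class ImageSwitch:
    """Pass through one of two images based on a boolean widget."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "use_first": ("BOOLEAN", {"default": True}),  # shows as a widget
                "image_a": ("IMAGE",),  # connection ("spaghetti") input
                "image_b": ("IMAGE",),  # connection input
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, use_first, image_a, image_b):
        # The output type must match whatever the next node expects.
        return (image_a if use_first else image_b,)
```

The same pattern works for any type: swap "IMAGE" for "LATENT", "CONDITIONING", and so on, as long as the switch's type matches the downstream input.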
masquerade-nodes-comfyui.

Even with 4 regions and a global condition, they just combine them all two at a time until it becomes a single positive condition to plug into the sampler. I wanted to share a summary here in case anyone is interested in learning more about how text conditioning works under the hood.

From the VAE Decode node you take the image to an Image Preview node.

Fast Groups Muter & Fast Groups Bypasser: like their "Fast Muter" and "Fast Bypasser" counterparts, but collecting groups automatically in your workflow.

AnimateDiff in ComfyUI is an amazing way to generate AI videos.

WAS node suite has a great high pass filter I always use in a blend node overlay.

But instead of returning an options object, this one gets it passed in…

When I dragged the photo into ComfyUI, in the bottom left there were two nodes called "PrimitiveNode" (under the "Text Prompts" group). Now, if I go to Add Node -> utils -> Primitive, it adds a completely different node, although the node itself is called "PrimitiveNode". Same thing for the "CLIP Text Encode" node.

If the model is SD1.5, that may give you a lot of your errors.

Every conceivable blend mode is available.

If there was a preset menu in Comfy it would be much better.

If you look at the Refiner's KSampler you'll see the same process.

Maybe the problem is figuring out if a node is useful? It could be more than just the nodes that output an image. Example of a "nice" node: Preview Image.

In this article, we delve into the realm of ComfyUI's best custom nodes, exploring their functionalities and how they enhance the image generation experience.

I didn't think I'd have any chance of writing one without docs, but after viewing a few random GitHub repos of some of those custom nodes, I think I could do all but the more complicated ones just by following those examples.

Unless someone did a node with this option, you can't.

Filter and sort by their properties (right-click on the node and select "Node Help" for more info).

So as long as you use the same prompt and the LLM gets to the same conclusion, that's the whole workflow.

https://github.com/WASasquatch/comfyui-plugins

For anyone still looking for an easier way, I've created a @ComfyFunc annotator that you can add to your regular Python functions to turn them into ComfyUI operations.

Hi, I am new to ComfyUI and this may be a bit of a dumb question.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Just reading the custom node repos' code seems to show the authors have a lot of knowledge of how ComfyUI works and how to interface with it, but I am a bit lost (in the large amount of code in ComfyUI's repo and the large number of custom node repos) as to how to get started. (A minimal package layout is sketched below.)

If a box is in red then it's missing.
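For anyone "lost as to how to get started": the discovery mechanism is small. ComfyUI scans the custom_nodes directory for packages whose __init__.py exports a NODE_CLASS_MAPPINGS dict (and, optionally, NODE_DISPLAY_NAME_MAPPINGS). A minimal layout, with hypothetical file and class names:

```python
# custom_nodes/my_node_pack/__init__.py  (package and node names are made up)
from .nodes import ImageSwitch  # the class sketched earlier

NODE_CLASS_MAPPINGS = {
    "ImageSwitch": ImageSwitch,
}
NODE_DISPLAY_NAME_MAPPINGS = {
    "ImageSwitch": "Image Switch (Example)",
}
```

Drop the folder into custom_nodes, restart ComfyUI, and the node appears under its CATEGORY in the Add Node menu.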
I hated node design in Blender and I hate it here too; please don't make ComfyUI any sort of community standard.

The way any node works is that the node is the workflow.

I messed with the conditioning combine nodes but wasn't having much luck, unfortunately. For example, 9 images.

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

Idea: a custom loop button on the side menu, where you set how many times you want it to loop (like Auto Queue with a cap), plus a controller node by which the loop count can be driven by values coming from inside the workflow.

One tool I would really like is something like the CLIP interrogator, where you would give it a song or a sound sample and it would return a string describing that song in a language and vocabulary that the AI understands.

Are you looking for an alternative to sd-web faceswaplab? If so, ComfyUI has face swapping nodes which you can install from the ComfyUI Manager.

You just have to annotate your function so the decorator can inspect it to auto-create the ComfyUI node definition.

You can take a look at my AP Workflow for ComfyUI, which makes extensive use of Context and Context Big nodes, together with the Any Switch node, the Reroute node, and the new Fast Groups Muters/Bypassers.

That node can be obtained by installing Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors custom node.

For example, switching prompts, switching checkpoints, switching controls, loading images in a foreach, and much more.

Are there any ComfyUI nodes (i.e. extensions) that you know of that have a button on them? I was thinking about making my extension compatible with ComfyUI, but I am at a loss when it comes to placing a button on a node.

Sometimes the node order is 1 (step 1: generate preview, step 2: VAE encode), but sometimes the node order becomes 2 (step 1: VAE encode, step 2: upscale, and so on).

Start with simple workflows.

When you right-click on a node, the menu is similarly generated by node.getExtraMenuOptions.

The subject and background are rendered separately, blended, and then upscaled together.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). Workflow included.

Either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow & noodles (without the underlying functionality that the custom nodes bring) and saves it as a JPEG next to each image you upload.

Using a 'CLIP Text Encode (Prompt)' node you can specify a subfolder name in the text box.

Here are some places where you can find some: ComfyUI Custom Node Manager.

It looks like a cool project. Totally new to node development and I'm hitting a wall.

From a folder: a WAS node for saving output plus a concatenate-text node (like this, I just have one "title" node for the full project, and this creates a new root folder for any new project), and I have a differently named node (and so a different folder) for every output I need to save. To avoid spaghetti, I use SET and GET nodes.

Note that it will return a black image and an NSFW boolean. (A stub of a node with that shape is sketched below.)
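On the "black image and an NSFW boolean" note: a checker node of that shape simply returns two values. Here is a stub with the actual classifier left as a placeholder; everything in it is illustrative, not the code of any published node.

```python
import torch

class SafetyCheckerStub:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE", "BOOLEAN")
    RETURN_NAMES = ("image", "nsfw")
    FUNCTION = "check"
    CATEGORY = "image/postprocessing"

    def check(self, image):
        nsfw = self._classify(image)  # plug a real NSFW classifier in here
        # Black out flagged images; ComfyUI images are [batch, H, W, C] tensors.
        out = torch.zeros_like(image) if nsfw else image
        return (out, nsfw)

    def _classify(self, image):
        return False  # placeholder: always reports "safe"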
We launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.

If you are unfamiliar with BREAK, it is part of Automatic1111. Why? Because fuck you, that's why.

My gripe with nodes is that it inherently adds redundancy to any design workflow.

If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.

Whenever I create connections between nodes, as shown in the image above, the order of the nodes becomes completely randomized.

Ah, I'm sorry, I was pretty new to ComfyUI and didn't know how to share workflows.

Then you connect them to a switch node (on/off or boolean).

Vid2QR2Vid: you can see another powerful and creative use of ControlNet by Fictiverse here.

Edited again to add: if you find yourself wondering how to run this experiment because you can't set your CFG below 0, do a little basic hacking to your nodes.

Mirrored nodes, where if you change anything in the node or its mirror, the other linked node will reflect the changes.

I'm not sure that custom script allows you to select a new checkpoint, but what it is doing can be done manually with more nodes.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

ComfyUI-paint-by-example.

Custom nodes/extensions: ComfyUI is extensible and many people have written some great custom nodes for it.

You just need to use Queue Prompt multiple times (Batch Count in the Extra options) if you want to loop images; I built a Cache Node for this.

I also had issues with this workflow with unusually-sized images.

(You don't actually need to use the Text to Conditioning node here.)

Is it possible to create, with nodes, a sort of "prompt template" for each model and have it selectable via a switch in the workflow? For example: 1 - Enable Model SDXL BASE -> this would auto-populate my starting positive and negative prompts and the sampler settings that work best with that model.

It grabs all the keywords and tags, sample prompts, lists the main triggers by count, and downloads sample images from Civitai. It's usually pretty good at automatically getting the right stuff.

A few new nodes and some functionality for rgthree-comfy went in recently.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

Also, you can listen to the music inside ComfyUI.

Since LoRAs are a patch on the model weights, they can also be merged into the model. You can also subtract model weights and add them, as in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model. (A state-dict sketch of this arithmetic follows below.)

Aside from it being in Japanese, the underlying concepts were not easily understood even after translating. I'm keen to have a go at making custom nodes.

Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai.

First, use the FastMuter to switch all the attached nodes off.

The example given on that page shows how to wire up the nodes.

Edit: I'm hearing a lot of arguments for nodes.

The DWPreprocessor node can be found at: Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor.

If you find it confusing, please post here for help or create an issue on GitHub.
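In node form, the "add difference" merge above is built from the model-merging nodes that ship in comfy_extras (ModelMergeSubtract feeding ModelMergeAdd). As raw tensor arithmetic over checkpoint state dicts, the same operation is just the following sketch, assuming the three checkpoints share keys and shapes:

```python
import torch

def add_difference(inpaint_sd, base_sd, other_sd, strength=1.0):
    """(inpaint_model - base_model) * strength + other_model, applied per tensor."""
    return {
        k: other_sd[k] + (inpaint_sd[k] - base_sd[k]) * strength
        for k in other_sd
    }
```

With strength 1.0 this transplants the inpainting delta onto the other model, which is exactly what the formula in the comment describes.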
High Frequency Strength, High Frequency Size, Low Frequency Strength, Low Frequency Size.

ComfyUI question: does anyone know how to use ControlNet (one or multiple) with the Efficient Loader & ControlNet Stacker node? A picture example of a workflow would help a lot.

Now my WAS Node Suite Load Image Batch and Save Image Extended nodes are working lovely again.

Plus a quick run-through of an example ControlNet workflow.

I think it has something to do with this, from GitHub - Gourieff/comfyui-reactor-node: Fast and Simple Face Swap Extension Node for ComfyUI - scroll down to troubleshooting.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

I know that several samplers allow having, for example, the number of steps as an input instead of a widget, so you can supply it from a primitive node and control the steps on multiple samplers at the same time.

All you have to do is set the minimum cfg for a basic KSampler from 0 (hard-coded default, last I checked).

MoonDream and LLaVA are for prompting back text (summarize this, make me a list of keywords for that) and outputting text; this is more like it becomes the node you ask it to be, coding itself, and you can connect any type of input and any type of output. So you can input an image and output some text about that image, or you can input a number and get some random text out of that number, or…

On the Load Image Batch node, connect the filename_text output to the text input of the Text to Conditioning node, and connect the CONDITIONING output from that same node to the positive input of the sampler.

What I want to do for starters is just make some "convenience" nodes, which are just combinations of default nodes.

Maybe something like a frequency separation node would be hella useful.

Also, if this is new and exciting to you, feel free to post.

I checked the documentation of a few nodes and I found that there is missing as well as wrong information, unfortunately.

Also, how many steps you run at the end without recombining the latents is a balancing act.

I haven't tried it yet, but it seems like it can do pretty much the same. Node-RED (an event-driven, node-based programming tool) has this functionality, so it could definitely work in a node-based environment such as ComfyUI.

Also, the hand and face detection have never worked.

You can also use the UpscaleImageBy node so you don't have to enter sizes, or, to decrease the number of nodes (but not the spaghetti), there's the Ultimate SD Upscale node; but getting more efficient than what you already did is going to be a challenge.

I tried some of the maths nodes but nothing wants to connect to anything: pulling a connecting noodle from the INT output (primitive) highlights the 'a' input of the maths IntBinaryOperation node, but it fails to connect.

Iterate through all useful nodes, walking backwards through the graph, enabling all the parent nodes.

I generated images from ComfyUI. I should be able to skip the image if some tags are, or are not, in a tag list.

If I drag and drop the image, isn't it supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

I'm a basic user for now, but I want the deep dive.

A checkpoint is your main model, and then LoRAs add smaller models on top to vary the output in specific ways.

It uses the amplitude of the frequency band and normalizes it to strengths that you can feed to the Fizz nodes. (A sketch of that normalization follows below.)
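That amplitude-to-strength idea is easy to sketch outside of any particular node pack: take the magnitude of one FFT band per animation frame and normalize it to 0..1, then use those values as per-keyframe strengths. Pure NumPy, illustrative only; the function name and defaults are mine:

```python
import numpy as np

def band_strengths(samples, sr, frame_rate=12, band=(200, 2000)):
    """Per-frame normalized amplitude of one frequency band of a mono signal."""
    hop = int(sr / frame_rate)          # audio samples per animation frame
    strengths = []
    for start in range(0, len(samples) - hop, hop):
        frame = samples[start:start + hop]
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
        mask = (freqs >= band[0]) & (freqs < band[1])
        strengths.append(spectrum[mask].mean())  # band amplitude for this frame
    s = np.asarray(strengths)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)  # normalize to 0..1
```

The resulting array maps directly onto a keyframe schedule: frame index to strength, which is the shape of input the prompt-scheduling nodes consume.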
As you can see in the preview image of the relevant portion of the workflow, just the body information seems to have been generated.

It's basically just a mirror. Thank you for your attention.

These tools do make use of the WAS suite.

Certain nodes will stop execution of the entire graph if they are missing inputs; others play nice and let your workflow continue.

I tried to write a node to do that, but so far I haven't gotten far with it.

You can get it here: a ComfyUI node layout for nesting latents within latents (github.com).

For example, with the "quality of life" nodes there is one that enables you to choose which of the pictures from the batch you want to process further.

…the .py file in your ComfyUI main directory (pretty sure that's the file, IIRC).

Lastly, it generates the first preview (1). Or, at least, kinda.

Seems like a tool that someone could make a really useful node with: a node that could inject the trigger words into a prompt for a LoRA, show a view of sample images, or all kinds of things.

The Assembler node collects all incoming strings and combines them into a single final prompt. (A toy version of that assembly is sketched below.)

That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

I had some success with zooming in on a doorknob on a house.

I see that ComfyUI is a better way to create.

Going to python_embedded and using python -m pip install compel got the nodes working.

I'm trying to get used to ComfyUI.

Then, a maths operation of x2 to plug into a widget-converted-to-input on the upscaler.

I have two string lists in my node.

Luckily, you have the plugin manager installed; open that up and click "Install Missing Nodes" and it will try to grab them for you.

See the high-res fix example, particularly the second-pass version.

The nodes available are: Blend Modes: applies an image to another image using a blend mode operation.

For your all-in-one workflow, use the Generate tab. Thank you.

Really like graph editors in general, and find it works great for SD.

Hello r/comfyui. So is there any suggestion on where to start, any tips or resources for me?

Every time you run the .bat file, those arguments will be loaded.

As usual with custom nodes: download the folder, put it in custom_nodes, and just launch Comfy.

I absolutely 100% do not care how clever the author of the workflow is.

So nodes are not better singularly, but they have their place.

I hope you'll enjoy the custom nodes.

LLaVA -> LLM -> AudioLDM-2. Example workflow in the examples folder inside the GitHub repo.

Now, with each generation, you can automatically or manually get the desired image as input for the next node.

Is tag "looking at viewer" in list --> save.

So I wrote a custom node that shows a LoRA's trigger words, examples, and what base model it uses.

Like they said, though, A1111 will be better if you don't understand how to use the nodes in Comfy.

Example prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2), Anime Style, Manga Style, Hand drawn, cinematic, Sharp focus, humorous illustration, big depth of field, Masterpiece, concept art, trending on artstation, Vivid colors, Simplified style, trending on ArtStation, trending on CGSociety.

I agree that we really ought to see some documentation.
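An "assembler" of weighted string parts boils down to emitting the (text:weight) syntax the prompt parser understands, then joining the pieces. A toy version under that assumption; the function name is mine:

```python
def assemble_prompt(parts):
    """parts: iterable of (text, weight) pairs, e.g. [("sharp focus", 1.2)]."""
    chunks = [
        text if abs(weight - 1.0) < 1e-6 else f"({text}:{weight:.2f})"
        for text, weight in parts
    ]
    return ", ".join(chunks)

# assemble_prompt([("studio ghibli style", 1.2), ("sharp focus", 1.0)])
# -> "(studio ghibli style:1.20), sharp focus"
```

Sliders on the node would simply feed the weight values; parts at weight 1.0 are emitted bare, since that is the parser's default.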
A node hub: a node that accepts any input (including inputs of the same type) from any node in any order, able to transport that set of inputs across the workflow (a bit like u/rgthree's Context node does, but without the explicit definition of each input, and without the restriction to the existing set of inputs).

Then find example workflows.

I have them stored in a text file at ComfyUI\custom_nodes\comfyui-dynamicprompts\nodes\wildcards\cameraView.txt.

Suggestions?

Note that I am not responsible if one of these breaks your workflows, your ComfyUI install, or anything else.

I would like a way to view image examples of the checkpoint I have selected in the checkpoint loader node.

It goes right after the VAE Decode node in your workflow.

It's similar to the concept of inheritance.

Here's a basic example of using a single frequency band range to drive one prompt: Workflow.

Hey everyone, we got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4.

comfy_clip_blip_node.

I'm just curious if anyone has any ideas.

I don't know A1111, but I guess your AND was the equivalent of one of those.

Not much else.

Constant noise for a whole batch doesn't exist in base Comfy yet (there's a PR about it); I made a simple node to generate the noise instead, which can then be used as the latent input to the advanced/custom sampler nodes with "add_noise" off.

That will get you up and running with all the ComfyUI-Annotation example nodes installed, and you can start editing from there.

I am at the point where I need to filter out images based on a tag list. (A sketch of that filter logic follows below.)

But I highly suggest learning the nodes, it's actually a lot of fun!

Lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look.

For example, I've trained a LoRA of "txwx woman". In A1111, I would invoke the LoRA in the prompt and also write "a photo of txwx woman".

The KSampler node is the same node for both txt2img and img2img.

Having a computer science background, I feel that the potential for ComfyUI is huge if some basic branching and looping components are added, to unleash the creativity of developers.

It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want that.

(This post is addressed to ComfyUI users, unless you're interested too, of course ^^) Hey guys! The other day on the ComfyUI subreddit, I published my LoRA Captioning custom nodes, very useful for creating captions directly from ComfyUI.

However, if you are looking for a more extensive lab- or studio-like interface, there is an interesting project called 'facefusion' with the MIT License.

I want to connect a node that outputs a string to this CLIP Text Encode node (instead of manually inputting text for the prompt).

Examples of "mean" nodes: KSampler, VAE Decode, Upscale with Model.

It could be that the Impact basic pipe node allows switching between widget and input as well.

This tutorial does a good job breaking it down.

Txt/Img2Vid + Upscale/Interpolation: this is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc.

The node itself (or better, the LLM inside of it) writes the Python code that runs the process.
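The tag-list filtering mentioned above ("2girl" -> do not save, "looking at viewer" -> save) reduces to a couple of set operations. A sketch of the decision logic; the function and parameter names are mine:

```python
def should_save(image_tags, skip_tags, require_tags=()):
    """Skip if any blocked tag is present; otherwise require a wanted tag, if any are given."""
    tags = set(image_tags)
    if tags & set(skip_tags):
        return False          # e.g. "2girl" in list -> do not save
    return not require_tags or bool(tags & set(require_tags))

# should_save(["1girl", "looking at viewer"],
#             skip_tags=["2girl"],
#             require_tags=["looking at viewer"])   # -> True
```

Wrapped in a node, the boolean output would gate a Save Image branch (or a switch like the one sketched earlier).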
I found it extremely difficult to wrap my head around initially, but after a few days of going through example nodes and the ComfyUI source I started being productive.

But standard A1111 inpaint works mostly the same as this ComfyUI example you provided.

Disable all nodes.

I've searched for such a node or method but I haven't found anything.

Yes. 😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt with an image, generating a color gradient, or batch-loading images.

I was getting frustrated by the amount of overhead involved in wrapping simple Python functions to expose as new ComfyUI nodes, so I decided to make a new decorator type to remove all the hassle from it.

But I never used a node-based system before, and I also want to understand the basics of ComfyUI.

- Background Input Node: in a parallel branch, add a node to input the new background you want to use.

ComfyUI Manager will identify what is missing and download it for you.

New tutorial: how to rent 1-8x 4090 GPUs and install ComfyUI (+ Manager, custom nodes, models, etc.).

I have LoRAs working, but I just don't know how to do ControlNet with this.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file.

I had a similar issue last night with WAS Node and did the following, and it seemed to fix my issue.

Hi Reddit! In October, we launched https://comfyworkflows.com.

PromptToSchedule and the prompt parser node can help carry the LoRAs to the sampler.

Also has colorization options for workflow nodes via regex, groups, and each node.

The Wan2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation.

I have like 500 LoRAs tagged and organized, and if you add a keyword at the end of your prompt, <Dungeons and Dragons>, it can activate a LoRA.

EDIT: I must warn people that some of my settings in several nodes are probably incorrect.

However, the other day I accidentally discovered this: comfyui-job-iterator (ali1234/comfyui-job-iterator: A for loop for ComfyUI (github.com)).

I think A1111 has this feature by default or as an extension.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well.

Any node that is part of a branch that is not useful is disabled.

If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI.

And remember, SDXL does not play well with SD1.5.

Hey everyone. Soon there will also be examples showing what can be achieved with advanced workflows.

I got ChatGPT to help me understand what this node does.

Queue the flow and you should get a yellow image from the Image Blank.

But it gave better results than I thought.

I made a composition workflow, mostly to avoid prompt bleed.

The third example is the anthropomorphic dragon-panda with conditioning average.

But it requires lots of fiddling to get the latents to line up nicely.

Only the LCM Sampler extension is needed, as shown in this video.

Batch on the latent node offers more options when working with custom nodes, because it is still part of the same workflow. (What a latent batch looks like internally is sketched below.)
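On batching at the latent node: a ComfyUI LATENT is just a dict holding a "samples" tensor whose first dimension is the batch, which is why downstream nodes carry the whole batch through one workflow. Roughly what the core EmptyLatentImage node produces, for the SD1.x/SDXL latent layout:

```python
import torch

def empty_latent(width=512, height=512, batch_size=9):
    # 4 latent channels at 1/8 spatial resolution; the batch rides along in dim 0.
    return {"samples": torch.zeros([batch_size, 4, height // 8, width // 8])}
```

Every sampler and latent-manipulation node operates on that batch dimension, so "9 images" stays a single graph execution rather than nine queued prompts.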
It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

Yet they both look the same in the sampler's class definition: they're all defined as INT/FLOAT with default, min, and max values. (See the widget-spec sketch below.)

Also, ComfyUI's internal APIs are horrendous.

Open-sourced the nodes and an example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

Nodes are not always better. For many tasks, yes, but nodes can also make things way more complicated; for example, try creating some shader effects using a node-based shader editor: some things are such that a few lines of code become a huge graph mess.

I'm not sure if that approach is feasible if you are not an experienced programmer.

The rest should be self-explanatory.

One branch for captions and one branch for manual (usually called "text box").

- Composite Node: use a compositing node like "Blend," "Merge," or "Composite" to overlay the refined masked image of the person onto the new background.

What are your favorite custom nodes (or node packs), and what do you use them for?

I just published a video where I explore how the CLIPTextEncode node works behind the scenes in ComfyUI.

ComfyUI Neural Network Latent Upscale: nodes: NNLatentUpscale, a custom ComfyUI node designed for rapid latent upscaling using a compact neural network, eliminating the need for VAE-based decoding and encoding.

It would require many specific image manipulation nodes to cut the image region, pass it through the model, and paste it back.

I'm using ComfyUI portable and had to install it into the embedded Python install.

I had implemented a similar process in the A1111 WebUI back then, and the results were good, but the code wasn't suitable for publication.

One could even say "satan tier".

The "Attention Couple" node lets you apply a different prompt to different parts of the image by computing the cross-attentions for each prompt, with each prompt corresponding to an image segment.

The Wan2.1 model's 1.3B version (1.3 billion parameters) covers various tasks including text-to-video (T2V) and image-to-video (I2V).

Were the 2 KSamplers needed? I feel that I could have used a bunch of ConditioningCombine nodes so everything leads to one node that goes into the KSampler.

From the first KSampler you take the latent to a VAE Decode node (converting it to a normal image).

The first example is the panda with a red scarf, with less prompt bleeding of the red color thanks to conditioning concat.

But I'm just at a loss right now; I'm not sure if I'm missing something else or what.

The way you set it up in your example workflow is pretty straightforward, the basic setup.

The %folder.text% token will be replaced: whatever you entered in the 'folder' prompt text will be pasted in.

Also, some of my KSamplers from other node packs were having issues loading the new samplers, which was interesting but not too big of a deal.
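The widget-versus-connection difference in the class definition is only an options dict: the declaration style is identical, and flags decide whether a slot appears as a typed-in widget or a noodle socket. A hedged sketch (the class is hypothetical; "forceInput" is the widely used option for forcing a connection, though how options render is ultimately up to the frontend):

```python
class ExampleSampler:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # appears as an editable widget on the node body
                "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0, "step": 0.5}),
                # same declaration style, but forced to be a connection input
                "end_at_step": ("INT", {"default": 0, "min": 0, "max": 10000,
                                        "forceInput": True}),
            }
        }
```

Widgets can also be converted to inputs from the node's right-click menu, which is what "widget-converted-to-input" refers to elsewhere on this page.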
Workspace Templates do help a ton to bring in some pre-configured noodles, but knowing how Blender does it I…

Quite uninspired by AI audio at this point. I would like to hear my favorite artists produce music by any means, but I can't perceive a real heart and soul, a message, a feeling, a human emotion to discover in something produced entirely by a device without a soul (even if it's inspired by accident).

Then go into the properties (right-click) and change the 'Node name for S&R' to something simple, like 'folder'.

Been playing around with ComfyUI and got really frustrated with trying to remember what base model a LoRA uses and its trigger words.

For example, this is mine:

Some nodes might be called "Mask Refinement" or "Edge Refinement".

Something laid out like the webui. Open the .bat file with Notepad, make your changes, then save it.

Finding what nodes you need to do X or Y can be a massive headache, and there are many nodes that either lack documentation entirely or have completely worthless documentation.

Two nodes are selectors for style and effect, each with its own weight control slider.

In this guide I will try to help you with starting out using this, and give you some starting workflows to work with.

If so, you can follow the high-res example from the GitHub.

Fernicles SDTools V3 - ComfyUI nodes.

First off, it's a good idea to get the custom nodes off git, specifically WAS Suite, Derfu's Nodes, and Davemane's nodes.

I provide one example JSON to demonstrate how it works. For txt2img you send it an empty latent image, which is what the EmptyLatentImage node generates.

I noticed that ComfyUI is only able to load workflows saved with the "Save" button and not with the "Save API Format" button. (The API-format JSON is meant for queueing over HTTP instead; a sketch follows below.)

The ComfyUI node already exists (I've added it to the upcoming AP Workflow 7.0).

Identify the useful nodes that were executed.

Update the VLM Nodes from GitHub.

So you want to make a custom node? You looked it up online and found very sparse or intimidating resources? I love ComfyUI, but it has to be said: despite being several months old, its documentation surrounding custom nodes is god-awful tier.

Is there any way to achieve this? Or should I look for a different node?

Wan2.1 ComfyUI Workflow.

Am I missing something? ComfyUI seems to have downloaded some models for face/hand detection on using this node for the first time, but I'm not seeing their output.

Anyway, have fun! I ran some tests this morning.

Hey everyone! Looking to see if anyone has any working examples of BREAK being used in ComfyUI (be it node-based or prompt-based).

I've been using A1111 for almost a year.

Copy that (clipspace) and paste it (clipspace) into the load image node directly above (assuming you want two subjects).

We need to generate a blank image to paint masks onto before doing anything else.

Can someone please explain or provide a picture on how to connect 2 positive prompts to a model? 1st prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2)…

ComfyUI node suite for composition: stream webcams or media files in and out, animation, flow control, making masks, shapes and textures like Houdini and Substance Designer, reading MIDI devices.
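The "Save (API Format)" JSON isn't loadable back into the canvas, but it is exactly what the server's /prompt endpoint accepts, which is how you script ComfyUI from outside (this is the pattern in ComfyUI's own API script examples). A minimal sketch against a default local install; the file path and server address are assumptions:

```python
import json
import urllib.request

with open("workflow_api.json") as f:       # the "Save (API Format)" export
    graph = json.load(f)

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",        # default ComfyUI address and port
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # queue it; server replies with a prompt id
```

Edit any node's inputs in the graph dict before posting (seed, prompt text, filenames) to drive batch jobs programmatically.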
Yeah, I was looking for one too, so I ported the Auto1111 plugin into a custom node.

I am looking for a way to run a single node without running "the entire thing", so to speak.

It's installable through the ComfyUI Manager and lets you have a song or other audio files drive the strengths on your prompt scheduling.

The node author said it will be implemented in the next few days.

This is a question for any node developer out there: for example, if someone wanted to make multiple images at once in, say, A1111, they could just move the batch size slider.

Legally, the nodes can be shipped under any license, because they are packaged separately from the main software, and nothing stops someone from writing their own non-GPL ComfyUI from scratch that is license-compatible with those nodes.
