When generative AI meets node programming, a new star is born. Let’s see how to effectively integrate ComfyUI with TouchDesigner in this step-by-step guide.
Let’s start from scratch. ComfyUI is a free and open-source node-based artificial intelligence workflow engine. As stated on the official website, the goal of ComfyUI is to build the operating system for generative AI.
It is an extremely powerful tool with lots of features and an active community of developers. At its core, ComfyUI is a system for generating visual assets through AI. What sets it apart from other similar tools – such as Midjourney or Stable Diffusion – is the ability to create complex networks of nodes. We can then customize our pipeline and create our own AI system.
ComfyUI: how it works
ComfyUI can be freely downloaded from the dedicated website or from GitHub. A machine with an adequate GPU is strongly recommended in order to get the most out of the tool. A standalone, fully packaged ComfyUI version for Windows is also available; we will use it for our creative exploration. The installation process can take some time, so relax, and in the meantime you can read the latest articles on The Interactive & Immersive HQ blog 😊
ComfyUI comes with several interesting features for image and video generation. I strongly encourage you to browse through the different examples available inside the engine.
Here we will look at the very basic image generation example. ComfyUI is based on nodes: visual objects that come with parameters we can modify to get our desired result. Sounds like it has a lot in common with TouchDesigner, doesn’t it?
So, going left to right we can find the foundation nodes:

- Load Checkpoint: the very first step is to load the diffusion model that will be used for the denoising operations
- CLIP Text Encode: this node is in charge of encoding the prompts. We include two of these nodes, one for positive and one for negative prompting
- Empty Latent Image: in this node, a batch of empty latent images is created, ready to be noised
- KSampler: this is the beating heart of ComfyUI. This node uses the provided model and the prompts to denoise the latent image. Here we can modify several parameters that deeply affect the final result: the seed, the control-after-generate behavior, the number of steps, the sampler name, the scheduler, and the denoising amount
- VAE Decode: the result of the KSampler is sent to this node, which decodes the latent images into pixel-space images
- Save Image: as the name suggests, the final step is saving the final output and displaying it inside the window
As you can see, the basic process is quite simple and straightforward. We can add tons of complexity, as well as experiment with different models, custom LoRAs, or ControlNet.
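To make the data flow concrete, the same six-node pipeline can be sketched in Python as the JSON-style graph that ComfyUI’s API format uses, where each connection is a [node_id, output_index] pair. The node IDs, the checkpoint filename, and the sample parameter values below are illustrative assumptions; the class_type and input names follow a typical API export:

```python
# Minimal sketch of the basic pipeline in ComfyUI's API (JSON) format.
# Connections like ["1", 0] mean "output 0 of node 1".

def basic_pipeline(prompt: str, negative: str,
                   ckpt: str = "sd_xl_base_1.0.safetensors") -> dict:
    """Return a workflow graph dict mirroring the six foundation nodes."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode",               # positive prompt
              "inputs": {"text": prompt, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",               # negative prompt
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",                     # the beating heart
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "output"}},
    }
```

Reading it top to bottom matches walking the ComfyUI graph left to right: the checkpoint feeds the sampler and both text encoders, and the sampler’s latent output flows through the VAE into the saved image.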
ComfyUI and TouchDesigner
ComfyUI and TouchDesigner have a lot in common, starting with the node-based environment. So what about integrating ComfyUI into TouchDesigner?
It is worth noting that ComfyUI is a complex ecosystem, so it can be quite intimidating at first glance. Luckily for us, someone has already done the dirty work.
ComfyTD is an operator that allows TouchDesigner to interface with ComfyUI, thus integrating gen AI workflows into TouchDesigner. The operator was developed by DotSimulate – I encourage you to buy him a beer on Patreon and have a look at his projects.
So, after downloading the ComfyTD operator, we can start to have fun.
Let’s put the operator in a new patch. To set everything up correctly, first of all we have to set the correct path of the Base folder in the Comfy parameters tab.

Next, we click on the Launch ComfyUI pulse button. This will open a terminal window. After the launch has completed, we will be able to access ComfyUI in the browser.
So let’s generate an image. Since I have to admit that I am quite tired after a long year of hard work, what I’m dreaming about is having a nap under a tree, surrounded by green fields with mountains in the background. So I write my prompt in the CLIP Text Encode node, press the Run button and voilà, here is my image!
Now, in order to work with ComfyUI data in TouchDesigner, we need a couple of extra steps. First of all, go into the Settings and enable Dev Mode. This will allow us to export the workflow in API format. Then click on Workflow > Export (API), cut the file and paste it into the ComfyUI > API folder.
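For the curious, that exported API file is just a JSON graph, and queueing it is a single HTTP POST to the running ComfyUI server – which is essentially what ComfyTD automates for us. Here is a minimal sketch under a few assumptions: ComfyUI’s default local address and port (127.0.0.1:8188), the `/prompt` endpoint of its HTTP API, and a placeholder client_id:

```python
import json
import urllib.request

def build_payload(workflow: dict, client_id: str = "touchdesigner-demo") -> bytes:
    """Wrap an API-format workflow in the JSON body expected by POST /prompt."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """Send the workflow to a running ComfyUI instance and return its response."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (requires a running ComfyUI server and your exported file):
# workflow = json.load(open("workflow_api.json"))
# queue_prompt(workflow)
```

This is only a sketch of the mechanism, not ComfyTD’s actual code, but it shows why the API export step matters: the API format is what the server accepts over HTTP.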
Then go back to TouchDesigner, open the ComfyTD parameters, go to the API Config tab and click on the Toggle All Config pulse button. Now go to the API Parameters tab, click on the Generate pulse button, and we will be able to visualize the generated image in TouchDesigner.
Best of all, we can now control ComfyUI parameters right inside TouchDesigner. Are you ready to take off, while I finally go have my nap?
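Under the hood, driving a parameter from TouchDesigner boils down to rewriting the corresponding field of the exported API graph before each generation. A hedged sketch of that idea – the KSampler class_type and its seed/steps inputs are standard ComfyUI, while the node ID and helper function are purely illustrative:

```python
def set_node_input(workflow: dict, class_type: str, name: str, value) -> int:
    """Set an input on every node of the given class_type; return match count."""
    hits = 0
    for node in workflow.values():
        if node.get("class_type") == class_type:
            node["inputs"][name] = value
            hits += 1
    return hits

# Example: tweak the sampler on a (tiny, illustrative) exported graph
# before queueing it again.
workflow = {
    "5": {"class_type": "KSampler",
          "inputs": {"seed": 42, "steps": 20}},
}
set_node_input(workflow, "KSampler", "seed", 123456)
set_node_input(workflow, "KSampler", "steps", 12)
```

Mapping a TouchDesigner slider to a function like this is all it takes to, say, sweep the seed or step count live while ComfyUI renders.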
Wrap Up
Integrating ComfyUI into TouchDesigner paves the way for a revolutionary approach to visual asset generation. By mixing gen AI node-based workflows with TouchDesigner operators, a new frontier opens up in front of us. As usual, the sky is the limit!