Artificial intelligence is paving the way for innovative approaches to visual asset generation. Let’s see how to integrate StreamDiffusion with TouchDesigner. Fasten your seatbelts.
First, what is StreamDiffusion?
As stated on the official GitHub repository, StreamDiffusion is “a pipeline-level solution for real-time interactive generation”. In brief, it is a diffusion pipeline designed for the real-time generation of visual assets. It is an open-source project that stands out for its performance compared to other diffusion-based systems.
From a technical perspective, StreamDiffusion offers several key features:
- Stream batch: streamlines the denoising steps through efficient batch operations
- Residual classifier-free guidance (RCFG): reduces the computational cost of guidance
- Stochastic similarity filter: skips redundant processing when consecutive video frames barely change, improving GPU utilization
- Input-output queue: parallelizes the streaming process with an I/O queuing system
- Model acceleration tools: applies acceleration tools such as TensorRT to optimize models and boost performance
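To make the similarity filter idea concrete, here is a minimal, deterministic sketch in plain Python. It is not StreamDiffusion’s actual implementation (the real filter is stochastic, skipping frames with a probability that scales with similarity); this version simply thresholds the cosine similarity between consecutive frames.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened frames (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def should_skip(prev_frame, frame, threshold=0.98):
    """Skip diffusion for a frame nearly identical to the previous one.

    A hedged stand-in for StreamDiffusion's stochastic similarity filter:
    when the scene barely changes, we reuse the last output instead of
    burning GPU cycles on a near-duplicate frame.
    """
    return cosine_similarity(prev_frame, frame) >= threshold
```

In a real pipeline you would compare downsampled frames and, as the name suggests, randomize the skip decision so a static scene still refreshes from time to time.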
Beyond technicalities, StreamDiffusion is a powerful tool that allows us to integrate a fast and reliable AI system into our creative workflow. Most of all, we can use StreamDiffusion with TouchDesigner for our own interactive and immersive projects.
StreamDiffusion with TouchDesigner: StreamDiffusionTD
StreamDiffusion is a sophisticated AI system with a complex technical framework. It is mainly written in Python and, unless you are a professional Python developer or an AI scientist (definitely not my thing), integrating StreamDiffusion into TouchDesigner can be tricky.
Luckily for us, there are some ready-to-use solutions out there. One is the StreamDiffusionTD tox developed by DotSimulate: a powerful component that encapsulates all the StreamDiffusion features in an all-in-one package. The tox is available for download on DotSimulate’s Patreon, so I encourage you to buy him the equivalent of a beer per month; it is worth it.
Installing StreamDiffusion
So, after downloading the tox, simply drag it into TouchDesigner. Let’s have a look at the parameters.
First, let’s go to the Install tab. Here we can install everything we need, and there is an installation guide as well. The installation process is as follows:
- Download the StreamDiffusion repository
- Install the repository and all the dependencies needed
- (Optional, Windows users) Install the TensorRT SDK for deep-learning inference on NVIDIA GPUs
- Create a Hugging Face account to download the models’ safetensors files
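The tox drives all of this from its Install tab, but for reference, a manual install roughly follows the repository’s README. Treat the commands and version pins below as a sketch that was current at the time of writing; always check the README for up-to-date versions.

```shell
# Manual install sketch (versions may have changed; check the official README)
git clone https://github.com/cumulo-autumn/StreamDiffusion.git
cd StreamDiffusion

# PyTorch wheels built against CUDA 11.8
pip install torch==2.1.0 torchvision==0.16.0 xformers --index-url https://download.pytorch.org/whl/cu118

# StreamDiffusion itself, with the optional TensorRT extras (NVIDIA only)
pip install "streamdiffusion[tensorrt]"
python -m streamdiffusion.tools.install-tensorrt
```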
Now let’s move to the Settings tab and click the Start Stream pulse button. The first time it is pressed, it downloads all the required models.
The whole process might require time and patience, but such is life.
Requirements
To run the StreamDiffusionTD tox on our computers, the following requirements must be met:
- Windows 10 or 11
- NVIDIA graphics card with CUDA support
- Python 3.10.9
- CUDA Toolkit 11.8 or 12.1
- Git
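Before installing, it can save you some pain to sanity-check a couple of these requirements. Here is a small stdlib-only sketch that verifies the Python version and the presence of Git; checking the GPU and CUDA Toolkit is best left to `nvidia-smi` and `nvcc --version` on the command line.

```python
import shutil
import sys

REQUIRED_PYTHON = (3, 10)  # the tox targets Python 3.10.x

def python_ok(version_info=None):
    """True when the interpreter matches the required major.minor version."""
    if version_info is None:
        version_info = sys.version_info
    return (version_info[0], version_info[1]) == REQUIRED_PYTHON

def git_ok():
    """True when a git executable is available on PATH."""
    return shutil.which("git") is not None

if __name__ == "__main__":
    print(f"Python 3.10.x: {python_ok()}")
    print(f"Git available: {git_ok()}")
```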
I strongly advise you to follow the instructions, or you will end up trapped in compatibility and dependency issues. Just like me.
StreamDiffusionTD in action
Once the overall installation process is concluded, we can start having fun.
StreamDiffusionTD comes with three main features:
- Image-to-image generation
- Text-to-image generation
- Video-to-video generation
We can choose between image-to-image and text-to-image in the Settings 2 tab. All the main parameters are available in the Settings 1 tab, while the video-to-video parameters live in the V2V tab.

Let’s set up a simple patch. First, we will use the text-to-image generation system. The flow is quite easy: we define our prompt in the parameter or via a Text DAT (in my case, a roaring lion), connect the component’s output to a Null TOP, and that’s it! To spice things up, we can play with the Guidance Scale and Delta parameters.
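If you prefer driving the tox from a script (say, an Execute DAT) rather than typing into the parameter page, a sketch could look like the one below. The parameter names here (`Prompt`, `Guidancescale`, `Delta`) are assumptions based on the tox’s UI labels, not confirmed identifiers; check the component’s parameter page in your version.

```python
# Hypothetical sketch for driving StreamDiffusionTD from TouchDesigner Python.
# Parameter names are assumptions; verify them on the tox's parameter page.
def apply_prompt(comp, prompt, guidance_scale=1.2, delta=0.7):
    """Push a prompt and tuning values onto the component's custom parameters."""
    comp.par.Prompt = prompt                  # e.g. "a roaring lion"
    comp.par.Guidancescale = guidance_scale   # how strongly the prompt steers output
    comp.par.Delta = delta                    # variation between successive frames
    return comp

# Inside TouchDesigner this would be called as:
# apply_prompt(op('StreamDiffusionTD'), 'a roaring lion')
```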

Next, we can play with the image-to-image generation system: connect a Movie File In TOP to the StreamDiffusionTD tox and select an image (an image of myself, in my case). Et voilà, this is the result!

Finally, we can make use of the video-to-video generation system. We load the movie – in this case, a lovely dog – in the Movie File In TOP and the algorithm gets to work. Of course, we can play with the tox parameters to add variations and nuances.
Needless to say, we can build complex workflows by taking advantage of everything we have at our disposal inside TouchDesigner. But now it’s your turn.
Wrap Up
StreamDiffusion is a powerful AI generation system that opens the door to endless possibilities. Using it inside TouchDesigner gives our creative flow a real boost: we can make the most of cutting-edge AI technology to create new and fascinating experiences. As usual, the sky is the limit.