Using Stable Diffusion with TouchDesigner opens up innovative and creative opportunities with just a few lines of code. Let’s take a look at some examples.
In this article we already talked about what Stable Diffusion is and how we can use it inside TouchDesigner.
Just to sum up, Stable Diffusion is a deep learning model for image and video generation. It is based on powerful AI algorithms that allow users to create a wide range of stunning media assets.
In this tutorial we will take a deeper look at three Stable Diffusion models we can use in TouchDesigner: Structure, Style and Stable Fast 3D. Let's go!

The Stable Diffusion APIs are available on the Stability AI Developer Platform website. In order to run the tutorial, remember to insert your API key in the scripts.
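Rather than hardcoding the key in every script, you can store it once in a Text DAT and read it from there. Here is a minimal sketch, assuming a Text DAT named api_key (a hypothetical name) sitting next to the executing DAT:

api_key = op('api_key').text.strip()  # Text DAT containing only the key
headers = {
    "authorization": f"Bearer {api_key}",
    "accept": "image/*"
}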
Structure
The Structure model generates images while maintaining the structure of the input image. It can be useful for creating artificial scenarios from real pictures, which can then be further manipulated for creative purposes.
In our TouchDesigner patch we create a CHOP Execute DAT that is triggered by a simple button.
Here is the DAT code:
import requests

def onOffToOn(channel, sampleIndex, val, prev):
    return

def whileOn(channel, sampleIndex, val, prev):
    return

def onOnToOff(channel, sampleIndex, val, prev):
    return

def whileOff(channel, sampleIndex, val, prev):
    return

def onValueChange(channel, sampleIndex, val, prev):
    # only fire on the button press, not on the release
    if not val:
        return
    response = requests.post(
        "https://api.stability.ai/v2beta/stable-image/control/structure",
        headers={
            # insert your API key here, e.g. "Bearer sk-..."
            "authorization": "Bearer YOUR_API_KEY",
            "accept": "image/*"
        },
        files={
            # input image whose structure will be preserved
            "image": open("./Kitten.jpg", "rb")
        },
        data={
            # prompt text from the Text COMP
            "prompt": op('prompt_structure').par.text,
            # influence of the input image, from the Constant CHOP
            "control_strength": op('control_strength').par.const0value,
            "output_format": "jpeg"
        },
    )
    if response.status_code == 200:
        with open("./Super_Kitten.jpg", 'wb') as file:
            file.write(response.content)
    else:
        raise Exception(str(response.json()))
    return
The code sends the image generation request and saves the result; the output_format field accepts jpeg, png or webp. We define our prompt via a Text COMP and the control strength via a Constant CHOP. The control_strength parameter controls how much influence the input image has on the generation.
In order to create the image, we first define the input image inside the CHOP Execute DAT – in this case, a lovely kitten – write the prompt, adjust the desired control strength, and push the button.
For visualization purposes, we create a Movie File In TOP, connect it to a Null TOP, and select the output file in the Movie File In. And here it is: our super kitten in the sky.
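If you prefer not to click through the Movie File In parameters every time, the script can also point the TOP at the new file and pulse its reload. A minimal sketch, assuming the TOP is named moviefilein1 (a hypothetical name), to be appended at the end of onValueChange right after the file is written:

movie = op('moviefilein1')  # Movie File In TOP used for display
movie.par.file = './Super_Kitten.jpg'
movie.par.reloadpulse.pulse()  # force the TOP to re-read the file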

Style
The Style model extracts elements from the input image and creates a styled new image based on the content of the prompt. Sounds cool, doesn’t it?
As with the Structure model, we create a CHOP Execute DAT that is triggered by a simple button.
Here is the DAT code:
import requests

def onOffToOn(channel, sampleIndex, val, prev):
    return

def whileOn(channel, sampleIndex, val, prev):
    return

def onOnToOff(channel, sampleIndex, val, prev):
    return

def whileOff(channel, sampleIndex, val, prev):
    return

def onValueChange(channel, sampleIndex, val, prev):
    # only fire on the button press, not on the release
    if not val:
        return
    response = requests.post(
        "https://api.stability.ai/v2beta/stable-image/control/style",
        headers={
            # insert your API key here, e.g. "Bearer sk-..."
            "authorization": "Bearer YOUR_API_KEY",
            "accept": "image/*"
        },
        files={
            # input image to extract the style from
            "image": open("./Kitten.jpg", "rb")
        },
        data={
            # prompt text from the Text COMP
            "prompt": op('prompt_style').par.text,
            # style fidelity, from the Constant CHOP
            "fidelity": op('fidelity').par.const0value,
            "output_format": "jpeg"
        },
    )
    if response.status_code == 200:
        with open("./Super_Kitten_02.jpg", 'wb') as file:
            file.write(response.content)
    else:
        raise Exception(str(response.json()))
    return
The flow is almost the same: make a request, set the output format and define the prompt via a Text COMP. In the Style model we can adjust the Fidelity parameter – via a Constant CHOP – that defines how closely the output image's style resembles the input image's style.
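If you want to get a feel for the parameter, a quick way is to sweep a few fidelity values in one go and compare the results. A minimal sketch, independent of the patch (the prompt and file names are hypothetical):

import requests

for fidelity in (0.2, 0.5, 0.8):
    response = requests.post(
        "https://api.stability.ai/v2beta/stable-image/control/style",
        headers={
            "authorization": "Bearer YOUR_API_KEY",
            "accept": "image/*"
        },
        files={"image": open("./Kitten.jpg", "rb")},
        data={
            "prompt": "watercolor painting",
            "fidelity": fidelity,
            "output_format": "jpeg"
        },
    )
    if response.status_code == 200:
        # one output file per fidelity value
        with open(f"./Styled_Kitten_{fidelity}.jpg", 'wb') as file:
            file.write(response.content)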
We can define the input image inside the CHOP Execute DAT – yes, a lovely kitten – write the prompt, adjust the fidelity parameter and push the button.
We can visualize the result via a Movie File In TOP and voilà: here is a stylish version of our kitten.

Stable Fast 3D
The Stable Fast 3D model generates 3D assets from 2D input images. This feature paves the way for extensive creative experimentation in TouchDesigner, since the generated assets can be further manipulated and animated.
Let’s inspect the DAT code:
import requests
# trimesh is only needed for the optional glb-to-obj conversion below;
# it must be installed in TouchDesigner's Python environment
import trimesh

def onOffToOn(channel, sampleIndex, val, prev):
    return

def whileOn(channel, sampleIndex, val, prev):
    return

def onOnToOff(channel, sampleIndex, val, prev):
    return

def whileOff(channel, sampleIndex, val, prev):
    return

def onValueChange(channel, sampleIndex, val, prev):
    # only fire on the button press, not on the release
    if not val:
        return
    response = requests.post(
        "https://api.stability.ai/v2beta/3d/stable-fast-3d",
        headers={
            # insert your API key here, e.g. "Bearer sk-..."
            "authorization": "Bearer YOUR_API_KEY",
        },
        files={
            # 2D input image to lift into 3D
            "image": open("./Kitten.jpg", "rb")
        },
        data={
            "texture_resolution": 2048,
            "foreground_ratio": 1,
            "remesh": "triangle",
            "vertex_count": -1
        },
    )
    if response.status_code == 200:
        with open("./3D_Kitten.glb", 'wb') as file:
            file.write(response.content)
    else:
        raise Exception(str(response.json()))
    # optional: convert the glb to obj with trimesh
    #input_glb = "C:/Users/gianm/Documents/TouchDesigner/Stable_Diffusion_02/3D_Kitten.glb"
    #output_obj = "C:/Users/gianm/Documents/TouchDesigner/Stable_Diffusion_02/3D_Kitten.obj"
    #mesh = trimesh.load(input_glb)
    #mesh.export(output_obj)
    return
The flow is the same as the one we have seen above. We define our input image – yes, a kitten again – adjust the parameters, and make the request. The model comes with four main parameters (a sketch with alternative values follows the list):
- Texture resolution: the resolution, in pixels, of the generated texture
- Foreground ratio: controls the padding around the object for processing
- Remesh: the remeshing algorithm used to generate the 3D model
- Vertex count: the target number of vertices in the output mesh
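To give an idea of how these can be tuned, here is an alternative data dictionary for the request. The value ranges in the comments reflect my reading of the Stability AI documentation, so double-check them against the current API reference:

data = {
    "texture_resolution": 1024,  # texture size in pixels, e.g. 512, 1024 or 2048
    "foreground_ratio": 0.85,    # padding around the subject, between 0 and 1
    "remesh": "quad",            # remeshing algorithm: "none", "triangle" or "quad"
    "vertex_count": 20000        # target vertex count; -1 keeps the model default
}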
It is important to underline that the model outputs assets in glTF/glb format. I suggest you read the documentation about the format here.
So, in order to make use of the asset in TouchDesigner, we have to convert it into another format such as FBX. There are two ways to do so:
- Use an online converting tool (I used this one)
- Open the glb asset in Blender and convert it (see the sketch below)
I think there are other ways to do so but I am not a 3D expert, so feel free to find your way.
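If you want to script the Blender route, a headless conversion is possible. A minimal sketch, to be run with blender -b -P convert.py from the folder containing the asset (file names are hypothetical; it relies on Blender's built-in glTF importer and FBX exporter):

import bpy

# start from an empty scene so only the imported asset is exported
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.import_scene.gltf(filepath="3D_Kitten.glb")
bpy.ops.export_scene.fbx(filepath="3D_Kitten.fbx")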
Once we have our asset in FBX, we can load it into the FBX COMP and connect it to the render setup. I added a Wireframe MAT for visualization purposes. That's all!

Wrap Up
By using Stable Diffusion with TouchDesigner we can fully take advantage of AI algorithms for creative experimentation. Starting from simple images, we are able to find innovative ways to enrich our projects and create unconventional visuals with a human touch. As always, the sky is the limit.