The Interactive & Immersive HQ

Best AI Tools to Pair with TouchDesigner in 2026

In the ever-evolving landscape of artificial intelligence, new systems and concepts are constantly emerging, ready to be integrated into our creative processes. Let’s explore some of the most effective AI tools to pair with TouchDesigner.

From generative models to autonomous agents, AI is becoming increasingly embedded in both our professional workflows and our daily lives. Within TouchDesigner there are countless ways to bridge the gap between AI and the creative realm, primarily thanks to the versatility of Python.

Let’s dive in and look at the AI tools we can leverage within the TouchDesigner environment.

StreamDiffusion

Real-time AI video generation and manipulation are paving the way for new artistic frontiers. From this perspective, StreamDiffusion stands out as one of the most advanced systems for real-time diffusion. It is an open-source project featuring high-speed motion video transfer, a complex prompt architecture, advanced denoising tools, and much more.

We can integrate StreamDiffusion into TouchDesigner using the StreamDiffusionTD tox developed by DotSimulate. This all-in-one solution can run either locally or in the cloud via Daydream and is available for download through the DotSimulate Patreon channel.

ComfyUI

Visual programming with AI? The answer is ComfyUI.

ComfyUI is a powerful, open-source modular system designed for generative AI workflows. Its node-based approach allows users to build highly customized pipelines, leveraging a constantly expanding library of functional blocks.

Capable of running both locally and in the cloud, ComfyUI supports the most advanced AI modules currently available. Once again, we owe thanks to DotSimulate for developing the tox that integrates ComfyUI seamlessly into TouchDesigner, which can be found on their Patreon channel.
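If you prefer to drive ComfyUI from your own scripts instead of (or alongside) the tox, a locally running ComfyUI instance also exposes a small HTTP API (port 8188 by default), with a /prompt endpoint that accepts a workflow exported in API format. A minimal sketch of queueing a workflow from Python; the function names here are illustrative, not part of ComfyUI itself:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address

def build_payload(workflow: dict, client_id: str = "touchdesigner") -> bytes:
    """Encode a workflow (exported from ComfyUI in API format) for the /prompt endpoint."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST the workflow to a running ComfyUI instance and return the queue response."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Called from a Text DAT or an Execute DAT, this lets a TouchDesigner patch trigger ComfyUI generations programmatically, then pick up the rendered images from ComfyUI's output folder.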


Ask Gemini

How about using Google Gemini AI for text generation within TouchDesigner? Let’s dive in with a simple example. First, create a CHOP Execute DAT and add the following script:

from google import genai

def onOffToOn(channel, sampleIndex, val, prev):
    # Create the client (insert your own API key here)
    client = genai.Client(api_key='INSERT YOUR API KEY')

    # Read the prompt from a Text COMP in the network
    prompt = str(op('text4').par.text)

    response = client.models.generate_content(
        model="gemini-3.1-flash-lite-preview",
        contents=prompt
    )

    # Print the reply and display it via a Text TOP
    print(response.text)
    op('text1').par.text = response.text
    return

We begin by importing the Google package and creating a client with our API key (don't forget to insert your own). We then write our prompt in a Text COMP, retrieve it within the code, select our preferred model, and print the response. In this setup, the output is displayed via a Text TOP.

So, if you ask Gemini a question about TouchDesigner, the system will render the answer beautifully. Simple, right?
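One practical caveat: a blocking API call inside a CHOP Execute callback will stall the TouchDesigner timeline until the response arrives. A minimal sketch of pushing the request onto a worker thread; `fetch` is a hypothetical stand-in for the Gemini call above, and `on_done` is whatever callback should receive the text:

```python
import threading

def ask_async(prompt, fetch, on_done):
    """Run a blocking request off the main thread, then pass the result to a callback."""
    def worker():
        # e.g. fetch could wrap client.models.generate_content(...).text
        on_done(fetch(prompt))
    threading.Thread(target=worker, daemon=True).start()
```

Inside TouchDesigner, the callback should hand the result back to the main thread (for example via a queue polled by an Execute DAT) rather than touching operators directly from the worker thread.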

Meet the AI Artist

When discussing AI, concerns often arise regarding the roles of the artist and artificial intelligence in the creative process. This is not the place for a philosophical debate, of course. However, from my humble perspective, AI can be a powerful tool when integrated into a creative workflow in an unconventional way.

In this example, I wanted to create an AI artist that collaborates with me directly inside a TouchDesigner patch, generating subtle visual variations while explaining its own creative point of view.

Here is the code:

import os
import sys
import anthropic
from pythonosc import udp_client

sys.stdout.reconfigure(encoding='utf-8')

os.environ["ANTHROPIC_API_KEY"] = "INSERT YOUR API KEY"

IP_ADDRESS = "127.0.0.1"
PORT = 10000

def send_osc_to_touchdesigner(brightness: float, contrast: float, color_shift: float):
    try:
        client = udp_client.SimpleUDPClient(IP_ADDRESS, PORT)
        client.send_message("/data/brightness", brightness)
        client.send_message("/data/contrast", contrast)
        client.send_message("/data/color_shift", color_shift)
        return "Artistic parameters sent to /data/"
    except Exception as e:
        return f"Error: {e}"

def send_text_via_osc(text_to_send: str):
    try:
        client = udp_client.SimpleUDPClient(IP_ADDRESS, PORT)
        paragraphs = text_to_send.split('\n')
        for paragraph in paragraphs:
            clean_text = paragraph.strip()
            if clean_text:
                client.send_message("/desc/analysis", clean_text)
        return True
    except Exception as e:
        return False

osc_tool = {
    "name": "send_osc_to_touchdesigner",
    "description": "Update the visual parameters of the Digital Twin in TouchDesigner based on your aesthetic taste.",
    "input_schema": {
        "type": "object",
        "properties": {
            "brightness": {"type": "number", "description": "Brightness (0.0 - 1.0)"},
            "contrast": {"type": "number", "description": "Contrast (0.0 - 1.0)"},
            "color_shift": {"type": "number", "description": "Colour mood (0.0 - 1.0)"}
        },
        "required": ["brightness", "contrast", "color_shift"]
    }
}

system_prompt = """
You are a digital artist working with a human artist to create stunning visuals in TouchDesigner
"""

operative_prompt = """
Invent a new evolutionary phase for the digital artwork.
Describe in one sentence the artistic inspiration behind your choice.
Use the OSC tool to send the parameters (brightness, contrast, color_shift) that reflect this vision.
Be bold and vary your style often.
"""

client = anthropic.Anthropic()

print("\nGeneration\n")

try:
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=500,
        system=system_prompt,
        tools=[osc_tool],
        messages=[{"role": "user", "content": operative_prompt}]
    )

    print("="*50)
    print("Artistic concept: \n")
    
    for block in response.content:
        if block.type == 'text':
            print(block.text)
            send_text_via_osc(block.text)
            
        elif block.type == 'tool_use':
            print(f"\n Send commands to TouchDesigner: {block.input}")
            result = send_osc_to_touchdesigner(**block.input)
            print(f" {result}")

except Exception as e:
    print(f"\n Error: {e}")

This AI artist is built using the Claude/Anthropic API. We start by adding the API key and defining a system prompt that establishes the artist’s personality. We also provide specific instructions for the AI to follow, select the model (check the Anthropic website for current pricing), and send the results back to TouchDesigner via OSC.

In the example patch, I have set up a basic visual using a Ramp TOP and a Level TOP. The AI artist adjusts the Invert, Brightness, and Contrast parameters in real time. Simultaneously, the agent’s description of its own work is displayed through two Text TOPs. All data is received via OSC and parsed using a simple DAT network.
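Inside TouchDesigner the parsing happens in that DAT network, but the dispatch logic itself is simple. A plain-Python sketch of the same routing, where the `params` dictionary stands in for the Level TOP parameters and the Text TOPs (the function name is illustrative):

```python
def route_osc(address, args, params):
    """Route an incoming OSC message to the right slot, mirroring the DAT network."""
    if address.startswith("/data/"):
        # Numeric parameters such as /data/brightness or /data/contrast
        params[address.rsplit("/", 1)[-1]] = float(args[0])
    elif address == "/desc/analysis":
        # Text lines carrying the agent's description of its own work
        params.setdefault("analysis", []).append(str(args[0]))
    return params
```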

And there it is: an AI artist collaborating with me live inside TouchDesigner.

The AI Observing Reality

Building on the previous example, we can now use AI as an observer of the physical world, leveraging its vision capabilities to support the generation of creative visuals in TouchDesigner. Let’s explore how this works.

Here is the code:

import os
import sys
import base64
import io
import shutil
import anthropic
from pythonosc import udp_client
from PIL import Image

sys.stdout.reconfigure(encoding='utf-8')

os.environ["ANTHROPIC_API_KEY"] = "INSERT YOUR API KEY"

IP_ADDRESS = "127.0.0.1"
PORT = 10000

def send_osc_to_touchdesigner(brightness: float, contrast: float, color_shift: float):
    try:
        client = udp_client.SimpleUDPClient(IP_ADDRESS, PORT)
        client.send_message("/data/brightness", brightness)
        client.send_message("/data/contrast", contrast)
        client.send_message("/data/color_shift", color_shift)
        return "Parameters sent on /data/"
    except Exception as e:
        return f"Error: {e}"

def send_text_via_osc(text_to_send: str):
    try:
        client = udp_client.SimpleUDPClient(IP_ADDRESS, PORT)
        paragraphs = text_to_send.split('\n')
        for paragraph in paragraphs:
            clean_text = paragraph.strip()
            if clean_text:
                client.send_message("/desc/analysis", clean_text)
        return True
    except Exception as e:
        return False

osc_tool = {
    "name": "send_osc_to_touchdesigner",
    "description": "Update the visual parameters of the work in TouchDesigner.",
    "input_schema": {
        "type": "object",
        "properties": {
            "brightness": {"type": "number", "description": "Energy/Brightness"},
            "contrast": {"type": "number", "description": "Definition / Contrast"},
            "color_shift": {"type": "number", "description": "Color shift"}
        },
        "required": ["brightness", "contrast", "color_shift"]
    }
}

def prepare_immagine_base64(image_path):
    """Resize, convert, and base64-encode an image for the API call."""
    img = Image.open(image_path)
    img.thumbnail((1200, 1200))  # keep the payload small
    if img.mode != 'RGB':
        img = img.convert('RGB')
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=85)
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

current_image_path = "first_image.png"
previous_image_path = "last_image.png"

if not os.path.exists(current_image_path):
    print(f"Cannot find '{current_image_path}'.")
    sys.exit(1)

message_content = []

if os.path.exists(previous_image_path):
    print("Memory found: loading the previous observation")
    img_prec_base64 = prepare_immagine_base64(previous_image_path)
    message_content.extend([
        {"type": "text", "text": "PREVIOUS STATE (last observation):"},
        {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": img_prec_base64}}
    ])

print("Loading the current observation...")
img_att_base64 = prepare_immagine_base64(current_image_path)
message_content.extend([
    {"type": "text", "text": "CURRENT STATE (Now):"},
    {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": img_att_base64}}
])

operative_prompt = """
Analyze the provided images. If only the current state is provided, describe the characteristics and establish the baseline values (1.0).
If both the previous and current states are present, compare them with extreme precision. 
Look for variations: what happened? What do you see? Are there some changes?
Explain your reasoning in five sentences maximum and then you MUST use the OSC tool to update the parameters (brightness, contrast, color_shift) so that the patch in TouchDesigner evolves accordingly.
"""
message_content.append({"type": "text", "text": operative_prompt})

client = anthropic.Anthropic()

print("\n Analyzing the evolution of what I am seeing...\n")

system_prompt = "You are an AI observer. Your task is to synchronize the state of what you see in the real world with the visuals in TouchDesigner."

try:
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=1024,
        system=system_prompt,
        tools=[osc_tool],
        messages=[{"role": "user", "content": message_content}]
    )

    print("="*50)
    print("Analysis:\n")
    
    for block in response.content:
        if block.type == 'text':
            print(block.text)
            send_text_via_osc(block.text)
            
        elif block.type == 'tool_use':
            print(f"\n Image update triggered! Parameters: {block.input}")
            result = send_osc_to_touchdesigner(**block.input)
            print(f" {result}")
            
    shutil.copyfile(current_image_path, previous_image_path)
    print("\nCurrent image saved in memory for the next comparison.")

except Exception as e:
    print(f"\n Error during the API call: {e}")

In this setup, the AI agent compares two frames from a video stream. We provide it with a distinct personality and task it with describing the changes between the images. The resulting data and descriptions are then sent to TouchDesigner via OSC for real time visualization and parameter modulation.

For this demonstration, I used a pair of frames from a simple video file. However, this could be further enhanced by using a live camera feed that captures a new frame every 10 seconds, overwriting the previous one to create an evolving AI-driven datascape.
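The frame-rotation part of that idea is easy to sketch. Assuming some capture routine writes each new frame to disk, promoting it while archiving the old one for comparison looks like this (the file names match the script above; `rotate_frames` is an illustrative helper, not part of it):

```python
import os
import shutil

def rotate_frames(new_frame_path, current="first_image.png", previous="last_image.png"):
    """Archive the current frame as the previous one, then promote the new capture."""
    if os.path.exists(current):
        shutil.copyfile(current, previous)  # keep the old frame for the comparison step
    shutil.copyfile(new_frame_path, current)
```

Called every 10 seconds, e.g. from a Timer CHOP callback, this keeps exactly the two files the analysis script expects.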
