From buttons and knobs to machine learning algorithms, there are several ways we can create interactive artworks in TouchDesigner. Let’s see how.
Tangible interaction
As the name suggests, tangible interaction involves using devices with physical properties that are immediately recognizable and usable by the user. But how can we integrate tangible devices into TouchDesigner?
Buttons and Keyboards
Don’t look at me like that. Even in the age of generative AI, traditional interaction devices can still be used in creative and meaningful ways.
Take keyboards, for example. We can integrate a keyboard system into our interactive installation and ask users to write a prompt, which will then drive generative visuals using StreamDiffusion or other GenAI tools. Sounds interesting, doesn’t it?
User interfaces
A great user experience is one of the core goals when creating functional interactive environments. UI is therefore pivotal in achieving this purpose. There are plenty of tools available for designing UIs that can communicate with TouchDesigner to trigger user events. Alternatively, we can develop custom UIs directly within TouchDesigner.
A user interface doesn’t necessarily have to be physical—it can be projected onto a surface as well.
Smartphone-based interfaces
TouchDesigner offers extensive support for networking and communication protocols: OSC, TCP/UDP, WebSocket, and more. It also allows us to run Python scripts and manipulate data in real time.
With that in mind, users’ smartphones can become a powerful element in creating collective interactive experiences. How? We can build a custom UI using HTML/JavaScript that users access through their phones. The data collected from the phones can then be sent to TouchDesigner via OSC, parsed, and used to control parameters such as shape position or color. Best of all, an unlimited number of users can connect simultaneously.
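To make the phone-to-TouchDesigner leg concrete, here is a minimal sketch of how a collected touch position could be packed into an OSC message and sent over UDP, using only the Python standard library. The address `/touch/xy`, the port 7000, and the two normalized values are assumptions for illustration; in practice a library like python-osc (or a WebSocket bridge) would handle the encoding, and an OSC In CHOP or DAT receives it on the TouchDesigner side.

```python
import socket
import struct

def osc_message(address, *args):
    """Build a minimal OSC message carrying 32-bit float arguments."""
    def pad(raw):
        # OSC strings are null-terminated and padded to 4-byte boundaries
        return raw + b"\x00" * (4 - len(raw) % 4)

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for value in args:
        msg += struct.pack(">f", value)  # floats are big-endian in OSC
    return msg

# Hypothetical example: a phone reports a normalized touch position,
# which we forward to TouchDesigner listening on UDP port 7000.
packet = osc_message("/touch/xy", 0.5, 0.25)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7000))
sock.close()
```

In TouchDesigner, an OSC In CHOP pointed at the same port exposes `/touch/xy` as channels that can drive shape position, color, or any other parameter.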
Sound interesting? The following articles cover the topic in more detail:
- Why WebSockets Are Useful for Immersive Experiences
- How to Manage Multiple WebSocket Connections in TouchDesigner
- WebSocket Architecture: How to Send Data From TouchDesigner to cables.gl

Intangible interaction
Intangible interaction occurs when users engage with systems that have no visible or physical interfaces – no buttons or tangible UIs, at least not in the physical sense.
There are many devices we can use in TouchDesigner to create such experiences. Let’s take a look.
Computer vision devices
Since the launch of the first Microsoft Kinect in 2010, computer vision has become a cornerstone of interactive experiences. Thanks to these technologies, we can achieve a wide range of applications: people tracking, hand tracking, head and eye tracking, and more.
TouchDesigner integrates seamlessly with almost any computer vision device, such as Kinect, RealSense, Orbbec, ZED, and many others. The data collected can then be used for virtually any purpose.
Machine learning and AI algorithms
Machine learning has opened the door to incredible new forms of interaction. Fortunately, we no longer need to be advanced coders or ML engineers to get started. Today, many applications are achievable with just a few lines of Python.
One example? Using Google’s powerful MediaPipe framework, we can perform real-time body tracking and object detection using just a standard camera.
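MediaPipe reports landmark positions as normalized coordinates (roughly 0 to 1) every frame, and raw per-frame values jitter. Before mapping them to visuals, a small smoothing step helps. Below is a sketch of an exponential moving average filter; the MediaPipe capture loop itself is omitted, and the `alpha` value and the commented TouchDesigner mapping are assumptions for illustration.

```python
class LandmarkSmoother:
    """Exponential moving average to steady jittery tracking data."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha   # 0..1; higher = more responsive, less smooth
        self.value = None

    def update(self, sample):
        # First sample initializes the filter; later samples are blended in
        if self.value is None:
            self.value = sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value

# Inside a per-frame callback you might smooth a wrist landmark's
# normalized x coordinate before mapping it to a parameter, e.g.:
smooth_x = LandmarkSmoother(alpha=0.3)
# op('geo1').par.tx = smooth_x.update(wrist.x) * 10 - 5  # TouchDesigner side
```

The same filter works for any noisy stream, whether it comes from MediaPipe, a Kinect, or a LiDAR sensor.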
LiDAR
LiDAR, which stands for Light Detection and Ranging, is a remote sensing technology commonly found in self-driving cars, cleaning robots, and drones.
But LiDAR can also be used in interactive installations. For example, we can map a floor surface and define points or areas where users trigger events inside TouchDesigner. Or, we can track user movement and generate floor projections that react to their presence.
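The trigger-area idea above can be sketched in a few lines: a 2D LiDAR returns samples as angle/distance pairs, which we convert to floor-plane coordinates and test against a rectangle. The zone dimensions and units (millimeters) are assumptions for illustration; a real setup would read samples from the sensor driver and debounce triggers.

```python
import math

def polar_to_xy(angle_deg, distance_mm):
    """Convert one LiDAR sample (angle, distance) to floor-plane x/y."""
    rad = math.radians(angle_deg)
    return distance_mm * math.cos(rad), distance_mm * math.sin(rad)

def in_zone(x, y, zone):
    """Axis-aligned trigger rectangle: (x_min, y_min, x_max, y_max) in mm."""
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical trigger area about one meter in front of the sensor
ZONE = (900, -250, 1100, 250)

# A sample straight ahead at 1 m lands inside the zone
x, y = polar_to_xy(0, 1000)
triggered = in_zone(x, y, ZONE)
```

Each scan typically yields hundreds of such samples; if any of them falls inside a zone, the corresponding event fires in TouchDesigner.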
Several LiDAR devices are available on the market. Personally, I’ve worked mainly with Slamtec A1 and A2 devices, and they perform well. You can get the most out of them using the C++ CHOP. Alternatively, you can use Slamtec’s proprietary Framegrabber application to define the trigger surface and send data to TouchDesigner via the TUIO In DAT.
Gesture recognition
With gesture recognition algorithms, we can build interactive experiences that respond to hand gestures. For instance, users can trigger videos just by moving their hands in front of a camera.
There are several ways to achieve gesture recognition in TouchDesigner. Recently, I had the opportunity to experiment with HaGRID (Hand Gesture Recognition Image Dataset), a massive image dataset for hand gesture classification and detection. It’s freely available on GitHub and includes pre-trained models.
Some gesture models are already available, but you can also train your own by downloading the corresponding dataset.
Since the full dataset is 1.5TB, it’s best to select only the gestures you actually need.
To show how it works, I’ve included a Python script* that detects four gestures and assigns each a confidence score. When the score exceeds a chosen threshold, the result is sent to TouchDesigner via OSC, where it can trigger video selection or other actions.
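The script itself isn’t reproduced here, but the core gating logic is simple: map each recognized gesture to an action, and only forward it when the model’s confidence clears a threshold. The gesture names, OSC addresses, and the 0.8 threshold below are assumptions for illustration (the labels loosely follow HaGRID-style classes).

```python
# Hypothetical gesture-to-action table
ACTIONS = {
    "palm": "/video/1",
    "fist": "/video/2",
    "peace": "/video/3",
    "like": "/video/4",
}

def select_action(gesture, confidence, threshold=0.8):
    """Return the OSC address for a recognized gesture,
    or None if confidence is below the threshold or the
    gesture is not one we handle."""
    if confidence < threshold:
        return None
    return ACTIONS.get(gesture)
```

The returned address can then be sent as an OSC message, and a simple DAT callback in TouchDesigner switches the active video accordingly.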
Wrap up
From buttons and knobs to advanced machine learning systems, creating interactive experiences in TouchDesigner is easier and more exciting than ever. There are countless ways to explore interaction design within the platform. My humble advice? Find your own path and never stop experimenting. The sky is the limit!
* Script partially developed using AI tools