Machine Learning (ML) completely transforms the capabilities of TouchDesigner.
In fact, it might be the single most important development for interactive tech and immersive media in years.
Why?
Well, most TouchDesigner developers are used to standard computer vision, which requires you to program very specific rules for every bit of data your installation ingests. If the data varies even slightly, the rules must be reprogrammed, or the installation can break.
With ML, a new relationship between data inputs and their outcomes is created. Instead of programming a set of specific rules, you feed your computer large datasets that “train” it to understand its environment.
For example, without ML it’s almost impossible to get your TouchDesigner installation to recognize faces of people with hats, glasses, or beards, because there are just too many variations to create rules for. But with ML, you can train your installation to recognize what a face is by having it learn from millions of images of people.
And this is just the tip of the iceberg. From realistic landscapes generated using millions of data points, to style transfers that look just like a Monet, to near-limitless skeleton tracking, ML blows open what is possible with TouchDesigner.
But like most things interactive and immersive, it’s not that easy…
- Machine learning is complex. Without a basic understanding of how and why ML works, you’ll always be confused as to why it’s doing what it’s doing. This is doubly true for artists without a technical background, as the subject of ML can easily be the focus of an entire university degree.
- Even once you understand it, creating a functional ML setup can be a pain. Until recently, you needed a specific OS, libraries, programming languages, drivers and frameworks, and graphics cards to make ML work with TouchDesigner.
To solve these problems, I created the training “Machine Learning For TouchDesigner.” In it, I demonstrate how to leverage ML to the fullest using TouchDesigner and a program called Runway ML.
For those unfamiliar, Runway ML is an application that allows you to run ML models both locally and on their own remote GPU cloud, eliminating the need for custom software or hardware to leverage machine learning.
I love Runway ML so much that I’ve been collaborating with the co-founders on educational materials, and they were generous enough to offer anyone who purchases this course a $20 coupon code for remote GPU processing, which equates to over 6.5 hours of processing time.
Here’s exactly what you get in “Machine Learning For TouchDesigner”:
- Learn the fundamentals of Machine Learning, and why it’s different from the computer vision that you’re used to: Understanding how ML works in comparison to old school computer vision is the critical first step in transforming how you view what’s possible for your installations. That’s why I teach this first.
- How to set up Runway ML and their machine learning models: We dive straight into applying ML. You’ll learn how to set up camera inputs, style transfers, object detection, and realistic landscape generation within Runway ML. Runway ML is probably the most flexible machine learning-enabled tool that integrates with TouchDesigner – as well as the easiest to use once it’s set up. But getting them working together can be challenging. Rather than spend hours – or days – getting TouchDesigner and Runway to talk to each other, I teach you how to do it with a few clicks. We’ll also cover Socket.io, the web communication protocol that lets TouchDesigner and Runway talk to each other. I lay out the visual workflow of ML with TouchDesigner. Finally, I demonstrate how to use the MobileNet algorithm to perform object detection in an image.
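To give you a taste of what the communication layer looks like in practice, here’s a minimal Python sketch of the kind of message handler you’d attach on the TouchDesigner side. The event payload shape and field names here are hypothetical, for illustration only – the exact JSON a model emits depends on the model you’re running in Runway ML:

```python
import json

def parse_detections(message):
    """Parse a JSON message from an object-detection model.

    Assumed payload shape (hypothetical, for illustration):
    {"results": [{"class": "person", "score": 0.91,
                  "bbox": [x, y, w, h]}, ...]}
    """
    data = json.loads(message)
    detections = []
    for item in data.get("results", []):
        detections.append({
            "label": item["class"],
            "score": item["score"],
            "bbox": item["bbox"],
        })
    return detections

# Inside TouchDesigner, a function like this could be called from a
# WebSocket DAT's receive callback, for example:
#
# def onReceiveText(dat, rowIndex, message):
#     for d in parse_detections(message):
#         print(d["label"], d["score"])
```

The idea is the same regardless of the model: messages arrive as JSON text, you parse them, and you route the values into your network.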
- Perform near-infinite skeleton tracking using just a webcam: Exactly what it sounds like: you’ll learn how to use Runway ML’s PoseNet algorithm to track multiple skeletons in a single frame, using nothing but a single webcam.
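As a sketch of what you’d do with PoseNet’s output, here’s one way to flatten its keypoint data into channel-style name/value pairs you could push into a CHOP. This assumes the commonly documented PoseNet shape (poses, each with keypoints that have a part name and a position) – verify the exact structure against what your model actually emits:

```python
def pose_to_channels(data):
    """Flatten PoseNet-style output into (channel_name, value) pairs.

    Assumed shape (check your model's actual output):
    {"poses": [{"keypoints": [{"part": "nose",
                               "position": {"x": ..., "y": ...},
                               "score": ...}, ...]}, ...]}
    """
    channels = []
    for i, pose in enumerate(data.get("poses", [])):
        for kp in pose.get("keypoints", []):
            # Prefix with the pose index so multiple skeletons
            # don't collide, e.g. p0_nose_x, p1_nose_x, ...
            name = f"p{i}_{kp['part']}"
            channels.append((name + "_x", kp["position"]["x"]))
            channels.append((name + "_y", kp["position"]["y"]))
    return channels
```

Prefixing each channel with the pose index is what makes multi-skeleton tracking practical – every person in frame gets their own set of channels.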
- Create style transfers: Have you seen those images of modern pictures that have been rendered in the style of Van Gogh or Monet? You’re going to learn to create just that, and more. You’ll be able to teach Runway ML to take images from your camera feed and render them in the style of great artists.
- How to export texture data from TouchDesigner to Runway ML: Most apps can’t manage hardware devices as well as TouchDesigner, and in a real-world install you’ll be juggling lots of media and hardware that you’ll want processed with machine learning models. That’s why in this section I teach you how to send any of that texture data to Runway ML from TouchDesigner, so TouchDesigner can act as the mother brain of your installations, just as you’re used to.
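Under the hood, sending a texture usually means serializing pixels into a text-safe form before they go over the wire. Here’s a minimal standard-library sketch of the base64 data-URI step; the data-URI format and the TouchDesigner-side save step are assumptions to verify against your own setup:

```python
import base64

def to_data_uri(image_bytes, mime="image/png"):
    """Wrap already-encoded image bytes in a base64 data URI."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# In TouchDesigner you might first write a TOP to disk, then read it back:
#
# op('null1').save('frame.png')   # write the current texture to a file
# with open('frame.png', 'rb') as f:
#     payload = to_data_uri(f.read())
# ...then send `payload` over the connection to Runway ML.
```

Base64 inflates the data by roughly a third, which is part of why frame rate matters when you’re streaming textures to a model.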
- Create insanity by feeding Runway ML models into each other: This part is a lot of fun. I show you how to feed Runway ML algorithmic models into each other to create some truly beautiful (and bizarre) effects. This is where the limitless power of ML starts to become truly apparent.
By the end of “Machine Learning For TouchDesigner”, you’ll have everything you need to leverage machine learning in your TouchDesigner installations immediately.
Prerequisites: To get the most out of this course, you should already have a good fundamental understanding of TouchDesigner, as well as Python and how it is used inside of TouchDesigner. We recommend our TouchDesigner 101 and Python 101 for TouchDesigner courses.