The final thing we’re going to make in our TouchDesigner Beginner Course is going to use a lot of SOPs, which are the 3D surface operators.
We’re going to do something cool where you’ll have your microphone being analyzed and driving the amplitude of some noise, so the louder you talk, the more noise there is, and the quieter you talk, the less noise there is.
Then, we’re going to add a few effects to that, and we’ll talk about how to create a rendering setup, as well as how to material a 3D object.
It’s important to remember that SOPs are procedural, so everything happens in small steps.
The first thing we’re going to do is make a Sphere SOP.
Then we’re going to do a classic TouchDesigner trick which is to put a Noise SOP after the Sphere, which gives you a crazy looking blob.
If you middle click on the Sphere, you’ll see it has 722 points. The Noise SOP calculates a noise value for every point and applies it to that point every frame. You’ll notice that the parameters of the Noise SOP are very similar to the parameters of the Noise CHOP.
We can compound this with another similar trick too. Create a second Noise SOP, and change the attribute parameter to “Point Diffuse Colour”, which essentially assigns a random colour to every point.
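To get a feel for what the Noise SOP is doing each frame, here’s a rough sketch of the per-point idea in plain Python. This is a hypothetical stand-in using a toy sine-based pseudo-noise, not TouchDesigner’s actual noise algorithm:

```python
import math

def pseudo_noise(x, y, z, t):
    # Toy stand-in for real 3D noise: deterministic but wiggly.
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719 + t)

def displace_points(points, amplitude, t=0.0):
    """Offset every point by a noise value scaled by amplitude,
    the way a Noise SOP perturbs SOP geometry every frame."""
    out = []
    for (x, y, z) in points:
        n = pseudo_noise(x, y, z, t)
        out.append((x + amplitude * n,
                    y + amplitude * n,
                    z + amplitude * n))
    return out

# A few points standing in for the sphere's 722 points:
sphere_points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
blobby = displace_points(sphere_points, amplitude=0.5)
```

With amplitude at zero nothing moves; as amplitude grows, every point drifts further from the original sphere, which is exactly why the audio level makes a good driver for that parameter later on.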
Now, add a trusty Null at the end of your chain, then go to your OP Create Dialogue and add an Audio Device In CHOP. This’ll give you the incoming audio from your microphone.
What we’re going to do with this CHOP is analyze the audio and then do what we’ve always been doing which is referencing it onto a parameter.
In my experience, the best way to approach things like this especially when using signals from the outside world is to use something called a Trail CHOP. Trail is very helpful because it allows you to plug in other operators and it shows you their change in signal over time. Add in a Trail CHOP to your network.
Next, add in an Analyze CHOP, which takes the audio signal from the Audio Device In that has a fast sample rate and tons of little samples inside of it and gives us one channel at the sample rate.
Connect your Audio In to your Analyze and your Analyze to your Trail.
You should be able to see that if you keep talking, the value stays high, but if you go quiet for a minute, it relaxes.
In your Analyze CHOP, change the function parameter to “Maximum”, so it gets the peaks of you talking instead, which I think is a better driver for the effect that we’re going to use.
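Conceptually, the Analyze CHOP collapses a whole buffer of audio samples into a single value per frame. Here’s an illustrative Python sketch of that idea; the function name and the two modes shown are assumptions for demonstration, not the CHOP’s real implementation:

```python
def analyze(samples, function="Maximum"):
    """Collapse a buffer of audio samples into one value,
    roughly what the Analyze CHOP does each frame.
    (Illustrative only; the real CHOP supports more functions.)"""
    if function == "Maximum":
        return max(samples)
    if function == "Average":
        return sum(samples) / len(samples)
    raise ValueError(f"unsupported function: {function}")

# A quiet buffer with one loud peak:
buffer = [0.01, 0.02, 0.18, 0.03]
level = analyze(buffer, "Maximum")   # 0.18
```

Notice how “Maximum” holds onto the peak (0.18) even though most of the buffer is quiet, which is why it makes a punchier driver than averaging.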
Make another Null CHOP and connect it to your Analyze CHOP.
Then go back to your first Noise SOP, click it so you can see its parameters, then take the Null you just made and drag and drop it onto the amplitude parameter to reference it.
Now as you speak, the sphere gets a little bit of noise. However, in my opinion, it doesn’t feel like it’s doing enough.
So, we’ll do what we always do, which is range it. Add in a Math CHOP between the Analyze and the Null.
Change the “from range” to zero to 0.2, and the “to range” to zero to one.
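Under the hood, this kind of ranging is just a linear remap from one range to another. Here’s a minimal Python sketch of the math (illustrative only, not the Math CHOP’s actual code — note the real CHOP doesn’t clamp by default, and neither does this):

```python
def remap(value, from_lo, from_hi, to_lo, to_hi):
    """Linearly map value from [from_lo, from_hi] to [to_lo, to_hi],
    like the Math CHOP's Range page."""
    normalized = (value - from_lo) / (from_hi - from_lo)
    return to_lo + normalized * (to_hi - to_lo)

# A quiet mic peak of 0.1 becomes 0.5 after ranging:
boosted = remap(0.1, 0.0, 0.2, 0.0, 1.0)   # 0.5
```

So anything that used to top out around 0.2 now fills the full zero-to-one range, which is why the noise amplitude reacts so much more strongly.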
Plug the Math CHOP into the Trail CHOP and you’ll see you’re getting much higher values from your signal. And if you go back down to the Noise SOP again, you should see that every time you speak, you’re getting a lot more amplitude out of the Noise SOP.
Right now, the Noise looks like it’s really energetic and frenetic, so maybe you want to smooth it out more.
After your Math CHOP, add a Lag CHOP. Change the upwards lag parameter to zero and the downwards lag parameter to 0.3.
Plug the Lag into your Trail CHOP and you’ll still have the sharp attacks but the ramp downs are a lot smoother. As you’re speaking, there’s still a few peaks, but in general it spikes up and then has a nice ramp down.
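If you’re curious what that asymmetric smoothing looks like as code, here’s a rough Python sketch using a simple exponential approach toward the target. It’s similar in spirit to the Lag CHOP’s behavior, but it is not the CHOP’s exact math:

```python
def lag_filter(samples, lag_up, lag_down, dt=1/60):
    """Asymmetric smoothing sketch: with lag_up at 0 the value
    jumps up instantly, but it decays over roughly lag_down
    seconds on the way back down."""
    out = []
    current = 0.0
    for s in samples:
        lag = lag_up if s > current else lag_down
        if lag <= 0:
            current = s  # instant attack
        else:
            # exponential approach toward the incoming sample
            alpha = min(dt / lag, 1.0)
            current += (s - current) * alpha
        out.append(current)
    return out

# One loud spike followed by silence: sharp attack, smooth ramp down.
smoothed = lag_filter([1.0, 0.0, 0.0, 0.0], lag_up=0.0, lag_down=0.3)
```

The spike lands at full value immediately, then each subsequent frame eases back toward zero instead of snapping there, which is exactly the feel you see in the Trail.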
Your Noise SOP as well should seem a bit more natural and matches what’s going on with your voice better, it’s not as hectic and chaotic now.
Currently, the content in our SOPs isn’t something we can drag and drop onto a screen; we actually have to render it down to 2D.
We have to render it into a TOP format because screens display pixels, and those pixels need 2D texture data to display.
In other words, we have to take this 3D thing that we have going on and turn it into a 2D texture that we can then plug into a screen.
First thing we need to do is grab a Render TOP here. For this setup, we need a camera and lights (lights aren’t mandatory but for this case, we’ll use them because it looks nicer).
Go to the 3D objects section of the COMPs page of the OP Create Dialogue and add a camera. When you drop it in, you’ll see it automatically gets referenced by the render because the render by default is looking for a camera named cam1 (and coincidentally, by default if we make a new camera, it’s going to be called cam1).
Now we need to tell the Render TOP what geometry we want to render. For this, we need to use another one of our COMP helpers, called a Geometry COMP.
The Geometry COMP is basically the transition from SOPs being on the CPU, to moving that data to the GPU.
The way you should think about it is that SOPs are where you actually work on your 3D geometry, and then when it’s time to come to render them, you want to put the final SOP that you have inside of a Geometry COMP and mark it to be rendered.
So, we have our Null that we want to be rendered, and we’ve made our Geometry COMP. What you have to do next is go inside of the Geometry COMP (remember the “i” keyboard shortcut), and delete the default torus shape that’s in there.
Then (still inside the Geometry COMP), create an In SOP.
Go up a level, and plug your Null into the Geometry, then go inside of it again, and you’ll see the geometry from that Null is in there.
Turn on the display flag (on the bottom right of the In SOP), then go back up a level and you’ll see that now it’s being displayed in the viewers of the camera and the geometry (and once we make a light, it’ll be displayed there too).
That display flag tells the other COMPs that they can now visibly display this geometry, and that’s usually a go-to, you should always turn it on.
On the In SOP, you also have a flag for rendering (the button beside the display flag). This flag tells the Geometry COMP that this one SOP is going to be what is rendered.
You can have multiple things inside of a Geometry with their render flags on, but usually it’s best to just keep it one per Geometry, so that your network’s cleaner and more organized.
So what’s happening is our SOP work passes into the Geometry COMP, gets marked for display and render, which feeds it behind the scenes from the SOP level into the Geometry, which then takes it into the GPU side. From there, the Render is able to look at it, rasterize it, and turn it into a 2D texture.
The only thing we’re missing for this is a light. So go to the COMPs page of the OP Create Dialogue again and create a light.
All of the default settings for camera, geometry, light and render will usually put your object in the middle of the screen, but you can get creative and play around with the parameters to change it up how you like.
We now have a basic rendering setup where you work on your SOPs, you send it into a Geometry, then inside of the Geometry, you have to mark the SOP you want to be rendered and displayed by activating the two flags. You make your rendering setup of a camera, geometry, light, and render, and then you can output this (and we’ll talk about outputting content in the next lesson).
The final thing we want to look at is an important part of 3D workflows and working with SOPs, and that’s texturing. This is where the operator family of MATs comes in. SOPs do our 3D work, and MATs are in charge of adding materials to our 3D geometry.
The ones that you’re probably going to be using most often will be Constant MAT (which is an easy way to give things constant colour), or a Phong MAT (like a constant but it has light and shadows and those kinds of things).
For this example, we’re going to add a Phong MAT. The material gets applied to the Geometry COMP and it’s at that point where it gets textured.
Phong has so many parameters, but you don’t really need to worry about a lot of them for simple use cases.
To apply this Phong to the Geometry, go to the render page of the Geometry, and you’ll see a parameter called material. Take the Phong and drag and drop it right onto that material parameter.
Let’s say we wanted to take a movie file and apply it to the Geometry. Make a Movie File In TOP. Change the content to something that’s a little easier to see (I chose the cloudyocean.tif from the TouchDesigner sample folder). As always, put a Null afterwards.
Then go back to the Phong and find the colour map parameter. This is basically how you take any 2D texture whether it’s a movie, an image, or something you’re generating (any TOP texture) and apply it to the Geometry as a colour.
Take the Null you just created and drag and drop it onto the colour map parameter. Now your sphere shape has that cloudy texture mapped onto it.
There’s one final trick that I’m going to teach you here: there is a slightly easier way to do one of the steps involved with the Geometry COMP.
When we did it in our example, we created one, we went inside, we made our In SOP, and turned on the flags.
But there’s actually a really handy way you can do this in just a couple of clicks. If you have your final Null SOP of your operations, you can hover your mouse over the output of that Null, right click on that output, go to the COMP page, click Geometry and drop it in.
This will automate a bunch of the steps that we previously did. It’ll create a Geometry COMP, put an In SOP in there (it also makes an Out SOP, which isn’t important here, but it makes one), and set the correct display and render flags for you so that it will appear inside of a Render.
Then you can go ahead and apply your material and you’d end up in the same place.
We’ve now covered all of the operator families and worked with them on small projects.
In our last tutorial, we’re going to look at how to output content so that whatever you’re making, you can display it on screens or projectors.
I have deep knowledge across many professional fields, from creative arts and technology development to business strategy, having created solutions for Google, IBM, Armani, and more. I'm the co-founder of The Interactive & Immersive HQ. Yo!
Building off of previous Python workshops, this class aims to demystify a few of the elements often used when doing advanced Python development work in TouchDesigner. From using storage to writing your own extensions we’ll work through the several concepts that will help you better leverage Python in TouchDesigner for installations and events. From the conceptual to the concrete, by the end of the workshop you will have both worked with abstract concepts in the textport and created a functioning tool for saving presets.
Matthew Ragan
We all know user interfaces in TouchDesigner are hard. If you’ve taken our Perfect User Interfaces training you’ll know all the ins-and-outs of creating your own user interface elements from scratch. But what if you need a UI made quickly? What if you want to skip building your own UI pieces? Widgets to the rescue! Widgets are the new and powerful way to make user interfaces quickly and easily in TouchDesigner. What they lack currently in their customization, they make up for in speed of deployment and out-of-the-box features that are easy to access through their custom parameters. Combined with new features to TouchDesigner such as bindings, creating quick, scaling, and aesthetically-pleasing user interfaces is a breeze.
Everyone has seen pictures of TouchDesigner projects with hundreds of operators and wires all over the place. Impressive, right?
No! In fact, the opposite is true. If your projects look like this, you’re seriously hampering your TouchDesigner installations – and your potential to consistently get high-profile gigs:
If you want to create large-scale installations or consistently work on projects in a professional capacity, you need a project architecture that is clean, organized, and easy to use.
The best project architectures – those used by the pros – are so streamlined that they make programming TouchDesigner look boring.
I share how to do this in my training, “TouchDesigner Project Architectures for Professionals.”
In “TouchDesigner Project Architectures for Professionals”, I give you my exact project architecture system – the same system that’s made it possible for me to create installations for Nike, Google, Kanye West, Armani, TIFF, VISA, AMEX, IBM, and more.
With my project architecture system at your disposal, you will:
We accomplish this through my 3 core project architecture concepts:
I’ve spent over 8 years refining my project architecture into an easy-to-implement, repeatable system that any designer can use. Once you learn my system, you’ll be able to take on projects you didn’t think you were capable of. You will also have the confidence you need to land better gigs and meet challenging client demands with flexibility and ease.
Want to level-up your TouchDesigner skills and create projects that can intelligently make content and generative decisions using weather and climate data?
How about installations that span forty-story high-rises that use Twitter posts to prompt generative designs?
Big clients – with big budgets – demand a level of immersion deeper than the use of Microsoft Kinect and Leap Motion interaction. They want to integrate social media, custom web apps and their own CMS to create interactive installations that bring people together in a way they haven’t experienced before.
In short, they want to use technology to become part of the broader conversation.
Fortunately for us, we’re able to deliver this level of immersion by integrating external data sources into our TouchDesigner projects.
The catch? Bringing external APIs into TouchDesigner can be challenging:
That’s why I created my latest training, “Join the Broader Conversation: How to Use External Data and APIs in Your TouchDesigner Installations”. Made for the complete Python beginner, the training provides you with everything you need to begin integrating external data sources with your TouchDesigner projects.
When you’re done you’ll be able to charge more and secure bigger projects than you would previously.
In this 1.5 hour video training (which includes example project files), we will:
Without any guidance, I’ve found that learning to integrate external data natively into TouchDesigner takes new designers between 20-40 hours – and that’s not including the trial and error phase that comes with implementing these concepts for the first time. Many people quit out of frustration.
Want to avoid spending $50,000+ on the wrong computer hardware?
Or having to look your client in the eye and say “I don’t know” when they ask why their shiny new immersive media installation looks like a stuttering, jaggy hot mess?
Then you need this training.
When I first started working with TouchDesigner in 2011, I thought the most valuable skill I had to offer was my ability to code beautiful interactive and immersive media projects for my clients.
While this IS important, I quickly realized that what my clients valued most was my ability to create an installation that performed perfectly – no tearing, stuttering, judder, or any other issues. If you think this sounds easy, you haven’t been working with Touch long enough.
This is one of the reasons my clients pay me $1,500 per day.
When I first started, I encountered all the issues mentioned above. I overcame them with a combination of all-nighters, hiring the right (and expensive) experts, and in some cases, luck. I also wasted a lot of time and money.
With experience, I was able to preemptively solve for all these performance issues.
That’s why I created the “Creating Flawless Installations with TouchDesigner” training. Now you can benefit from my 7+ years of experience without having to make the costly mistakes I did.
After this training, you will have the confidence you need to deploy immersive design and interactive technology installations for big brands who pay top dollar for your skills. And you’ll be one of the select few individuals in this industry that know how to do what I do with TouchDesigner.
In this 1.5 hour video training (which includes example project files), we will cover:
Want to create large-scale video arrays and real-time LED facades that span high rises?
How about installations that use GPU particle systems, volumetric lighting, and multi-composite operators?
As lots of you know, this is all possible with TouchDesigner – sort of.
Out-of-the-box TouchDesigner is great when you’re just starting out. But as your interactive installations grow larger and your clients begin to want more generative and technical content, there are several challenges that arise and the cracks begin to show.
Problems typically fall into two broad categories:
When problems due to scale such as these inevitably occur, the standard TouchDesigner functionality and nodes only get you so far. And it doesn’t take very long before you have to explain to your client that you’re unable to deliver what they’re asking for.
Lucky for us, we can leverage the code that powers a lot of TouchDesigner to create installations of virtually unlimited scale and technical possibility.
We do this by learning how to program GLSL Shaders. GLSL is the programming language in which many of TouchDesigner’s features are written, even now.
When you understand how to apply GLSL to TouchDesigner, you’re effectively turning on “God Mode.”
That’s why I created my training, “Turn on God Mode in TouchDesigner with GLSL Shaders.” In it, I cover the following concepts:
TouchDesigner is the leading platform for interactive media and immersive design, and is used to create the world’s largest installations. Elburz Sorkhabi explores and explains concepts in TouchDesigner revolving around network optimization and performance bottlenecks.
The user interface (UI) is an integral part of any TouchDesigner installation.
Most clients want dynamic installations that they can control as needed, without consulting a designer or programmer for every change. This is usually done through a control panel and UI they can access.
Even more important are user-facing UIs – think interactive panels, turntable additions for live shows, and customizable remote controls. This is what many clients have in mind when they decide to contract someone to design an interactive installation.
But if UIs are so central to TouchDesigner installations, why is it so hard to make them not suck? Most UIs slow down installations and break when you try and resize a component or add multiple pages. They’re also ugly.
So as always, I’m fixing the problem by providing a training.
In my latest 2-hour training, you will learn how to:
A great TouchDesigner installation needs a great user interface. Get the training you need to provide professional UI for top clients today.
Elburz deep dives on all the inner workings of Python in TouchDesigner. This introductory course takes you from the very beginning of your Python journey and explains concepts that will create a powerful foundation for all your Python scripting in TouchDesigner.
Ever wonder how TouchDesigner pros work so fast? Ever see a friend or colleague do something and think “How did you do that??” Elburz puts together the top tips and tricks that everyone needs to know when working with TouchDesigner. Speed up your workflows and explore undocumented features across both the application and each operator family.
Want to level up your TouchDesigner skills and create dynamic 3D installations with interactive elements that can scale from single to multi-touch and virtual reality – all without changing anything about your setup?
Are you still trying to use 2D interactive hotspots and invisible UIs in your 3D TouchDesigner installations?
If this sounds like you, I’ve got good news and bad news.
The good news is that you’re not alone – this is how most designers start out (even some experts get away with it). It actually works okay if your 3D installations are static and the interactions are simple.
The bad news is you’re going to miss out on rich, dynamic and complex 3D projects. Anyone who has tried to create dynamic interactive 3D elements using invisible 2D UI hotspots to trigger interactivity has seen this firsthand.
Fortunately, TouchDesigner lets us use render picking to integrate 3D interactivity directly into our projects:
But render picking isn’t easy. It requires unintuitive Python scripting techniques. And to implement effectively, render picking assumes a deep understanding of TouchDesigner and the connection between instancing and multichannel manipulation of data.
It’s with this in mind that I created the “How to Create Multi-Touch 3D Installations Using Render Picking” training. In this training, I teach you how to use Python to build native 3D interactivity directly into your 3D TouchDesigner installations.
In this training, you will learn:
The best is that I’m offering “How to Create Multi-Touch 3D Installations Using Render Picking” for $125.
Note: this training is the same content as the previous “3D Interactivity with Render Picking” training, but it has been upgraded and re-recorded. If you already bought that one, you already have access to this new one!
Learning TouchDesigner can be difficult for anyone, no matter what background you have. With all the new terminology, hundreds of operators, and unique paradigm, new users can become overwhelmed and paralyzed. In this training, I take you on a 3 hour deep dive of TouchDesigner’s basic features, fundamentals, and walk you through small example projects to introduce you to the operator families. This course sets you up to take on any of the intermediate trainings available.
Everyone always complains about the wiki. It’s hard to use, that’s a fact. What about all those hidden tutorials? And how about gigs? Where are those? Blogs and videos, where can I find those? For the first time ever, this training compiles all the TouchDesigner resources available and guides you not just in finding them, but also how to find future resources.
Want to create TouchDesigner installations with objects that interact with each other, human participants, and the environment? How about 3D scenes with objects that respond to natural forces?
Whether you’re interested in the above or are just tired of your TouchDesigner projects looking like a video game from the early 90s, the answer is Physics.
Physics is the key to unlocking a new level of realism and natural interactions in your TouchDesigner installations. Put plainly, it brings a new level of immersive fidelity and consistency to interactive installations.
But getting physics right in TouchDesigner is an uphill battle:
You can spend days, weeks, or even months trying to learn this stuff. Or, you can gain an understanding of the fundamentals in just over 2 hours with my latest training, “Physics Fundamentals: Use Physics Like A Pro in TouchDesigner.”
In “Physics Fundamentals” I give you everything you need to start leveraging physics to create interactive and immersive TouchDesigner projects of the highest caliber.
In “Physics Fundamentals,” you get:
When you’ve finished “Physics Fundamentals,” you’ll be able to add physics – one of the most in-demand TouchDesigner skills – to your interactive and immersive media repertoire.
How many times have you been on a gig and been screwed over because you didn’t have a contract in place? How often have you wished you could properly negotiate or knew the finer points of what you were actually signing? In this workshop, you’ll learn about the most common types of contracts, what all the sections mean, and how you can change them based on your requirements. The included templates give you a great reference whether you’re just getting your career started or if you’re a seasoned pro and want to review your own contracts.
Everyone has had a client ask them to make something cool with a Kinect 2. Where do you begin? What can you make? Will it be hard or easy? How do we combine the Kinect info with the regular TouchDesigner work that we have to do? In this workshop, I introduce you to the fundamentals of using the Kinect 2. This includes initial setup, using the invaluable Kinect Studio 2.0, demonstrating the common uses of Kinect 2 in TouchDesigner, and then talking through many of the common hardware pitfalls when using a Kinect 2 for a project.
Have you ever used high-density geometry and models in TouchDesigner to create the visually jaw-dropping interactive installations of your dreams, but come up short? You’re not alone. Creating fully functional 3D installations that look amazing and let users interact with them in real time is a major sticking point along most folks’ TouchDesigner journey.
In fact, I’d say that it’s nearly impossible to get right without knowledge of one tool: GLSL.
When you understand how to apply GLSL, you can create 3D installations on a massive scale, work with high density point clouds using sensors, and manipulate complex geometry in real time for truly interactive, large-scale immersive projects.
Over the course of 1.5 hours, this is exactly what you’ll learn in “God Mode in 3D: GLSL For 3D TouchDesigner Installations”. Through real-world examples and instruction, I give you the tools you need to begin working with GLSL in 3D today.
Note: We touched on the basics of GLSL with a focus on 2D work in a previous training, “Turn On God Mode in TouchDesigner With GLSL Shaders”. If you’re new to GLSL, I highly recommend you click here to get that course, and view it before viewing “God Mode in 3D”.
Here’s exactly what you’ll learn in “God Mode in 3D”:
For many, leveraging GLSL in TouchDesigner is the most critical step towards becoming a professional TouchDesigner developer. It’s one of those skills that separates the amateurs from the pros.
Fortunately, it’s a skill that can be acquired relatively easily with practice and the guidance provided in “God Mode in 3D”.
SOPs are tough, there’s no getting around that, but they aren’t impossible. What most people lack is a fundamental understanding of how SOPs work and the data structure that drives them. With this knowledge in hand, it’s possible to do great things with SOPs. In this training, Elburz takes you from SOP ground-zero through to making some SOP data visualizations and particle systems with attractors and metaball forces.
Getting gigs can be hard. Even something as simple as figuring out the budget can be a challenge. Whether you’re new to the industry or a seasoned pro, there are many factors to consider in pricing. To add even more to your plate, once you have a price, you still have to put together a nice presentation to pitch the project. In this workshop, I take you through the common process of pricing and pitching a gig to win it as quickly as possible.
Machine Learning (ML) completely transforms the capabilities of TouchDesigner.
In fact, it might be the single most important development for interactive tech and immersive media in years.
Why?
Well, most TouchDesigner developers are used to standard computer vision, which requires you to program very specific rules for every bit of data your installation ingests. If the data varies even slightly, the rules must be reprogrammed, or the installation can break.
With ML, a new relationship between data inputs and their outcomes is created. Instead of programming a set of specific rules, large datasets are fed to your computer that “train” it to understand its environment.
For example, without ML it’s almost impossible to get your TouchDesigner installation to recognize faces of people with hats, glasses, or beards, because there are just too many variations to create rules for. But with ML, you can train your installation to recognize what a face is by having it learn from millions of images of people.
And this is just the tip of the iceberg. From realistic landscapes generated using millions of data points, to style transfers that look just like a Monet, to near-limitless skeleton tracking, ML blows open what is possible with TouchDesigner.
But like most things interactive and immersive, it’s not that easy…
To solve these problems, I created the training “Machine Learning For TouchDesigner.” In it, I demonstrate how to leverage ML to the fullest using TouchDesigner and a program called Runway ML.
For those unfamiliar, Runway ML is an application that allows you to run ML models both locally and on their own remote GPU cloud resources, which eliminates the need for you to have custom software or hardware to leverage machine learning.
I love Runway ML so much that I’ve been collaborating with the co-founders on educational materials, and they were generous enough to offer anyone who purchases this course a $20 coupon code for remote GPU processing, which equates to over 6.5 hours of processing time.
Here’s exactly what you get in “Machine Learning For TouchDesigner”:
By the end of “Machine Learning For TouchDesigner”, you’ll have everything you need to leverage machine learning in your TouchDesigner installations immediately.
Prerequisites: To get the most out of this course, you should already have a good fundamental understanding of TouchDesigner, as well as Python and how it is used inside of TouchDesigner. We recommend our TouchDesigner 101 and Python 101 for TouchDesigner courses.