The Interactive & Immersive HQ

How to Set Up Face Tracking in TouchDesigner

Face tracking is one of the most exciting features to arrive in TouchDesigner recently, thanks to the integration of NVIDIA's RTX face tracking from the Maxine SDK. Let's dive into setting it up!

Setting up face tracking in TouchDesigner

Setting up face tracking in TouchDesigner is actually quite easy! We're going to take advantage of the new Face Track CHOP, which connects to the NVIDIA backend that comes with any of the RTX cards from the last few years. We can start by creating one in an empty project. Then we need to connect a video source to it. This will usually be a webcam/camera feed, but you could also feed it movie files or anything else with a person's face moving in it. To connect the video feed, drag and drop the TOP onto the TOP parameter of the Face Track CHOP:

[Screenshot: video feed TOP connected to the Face Track CHOP's TOP parameter]

With just those two steps, we've already started to see some tracking happening! By default, the face's bounding box comes in as 4 channels: the UV position (the normalized XY position on screen) of the bounding box, plus its normalized width and height.
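To make those four channels concrete, here's a small sketch in plain Python (not TouchDesigner-specific; the function name and the example resolution are illustrative) showing how the normalized bounding-box values relate to pixel coordinates:

```python
def bbox_uv_to_pixels(u, v, w, h, res_w, res_h):
    """Convert the Face Track CHOP's normalized bounding-box values
    (UV position plus normalized width/height) into pixel units.

    u, v: normalized position of the bounding box (0-1)
    w, h: normalized width and height of the box (0-1)
    res_w, res_h: resolution of the video feed in pixels
    """
    return (u * res_w, v * res_h, w * res_w, h * res_h)

# A face centered in a 1280x720 feed, covering a quarter of the width:
print(bbox_uv_to_pixels(0.5, 0.5, 0.25, 0.5, 1280, 720))
# → (640.0, 360.0, 320.0, 360.0)
```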

Enabling further tracking

There's more information we can get out of the face tracking, too. First, we can enable the additional facial rotation channels on the Face Track CHOP. This is great because it gives us the 3D orientation of our face:

[Screenshot: Face Track CHOP with the facial rotation channels enabled]

We can get even more detail by enabling the individual landmarks of the face, such as the eye positions, nose position, etc. To do this, we choose between 68 or 126 landmarks. In most cases, I recommend 68 landmarks, as it's more than enough information without being overwhelming. Here's what it looks like when activated:

[Screenshot: Face Track CHOP with individual landmark channels enabled]

One problem you'll notice immediately is that all of the landmark channels are only numbered! This makes it very difficult to figure out which points correspond to which parts of your face. Luckily, we can reference the NVIDIA Maxine documentation to pinpoint everything. You can use the diagram and the link below to identify the 68 landmarks:
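Since the channels are only numbered, it can help to keep the landmark groupings in a lookup. This sketch uses the standard iBUG 68-point grouping in 0-based indices (the diagram linked below numbers points from 1, so subtract 1 when reading it; the region names here are my own labels, not channel names from the CHOP):

```python
# Index ranges for the 68-point facial landmark scheme (0-based).
LANDMARK_REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_of(index):
    """Return which part of the face a numbered landmark belongs to."""
    for name, indices in LANDMARK_REGIONS.items():
        if index in indices:
            return name
    raise ValueError(f"landmark index out of range: {index}")

print(region_of(33))  # → nose
print(region_of(67))  # → mouth
```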

[Diagram: the 68 facial landmark positions]
NVIDIA Facial Tracking Landmarks

https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/

Using Face Tracking Data in TouchDesigner

Ok, so you've got the data into TouchDesigner and you're capturing it, but what can you do with it? Without going too deep down a rabbit hole, let's build a quick example that draws a bounding box around the user's face.

The first thing we'll do is disable the facial landmarks, since we don't need them here and they make the Face Track CHOP's channels harder to drag and drop:

[Screenshot: Face Track CHOP with landmark channels disabled]

Next, we’re going to composite a Rectangle TOP over our Video Device In TOP by plugging the Video Device In TOP into the input of a Rectangle TOP. This automatically sets the resolution of our Rectangle TOP and composites the rectangle over top.

[Screenshot: Video Device In TOP plugged into a Rectangle TOP]

Now we can start working with our data channels. When the Rectangle TOP has its Center parameter at 0,0, the rectangle sits in the center of the screen, which means -0.5 and 0.5 are the left/right and top/bottom edges. If we look at our face tracking data, we can see that NVIDIA's setup uses 0.5,0.5 as the center position instead. This is an easy fix for us! First, we'll use a Select CHOP to isolate the UV channels and make them easier to work with:

[Screenshot: Select CHOP isolating the UV channels]

Then we'll plug that into a Math CHOP so we can re-range our values. This is easy to do using the Range page of parameters on the Math CHOP. We know our incoming range from NVIDIA is 0 to 1 (because 0.5 is the center), and our output range in TouchDesigner should be -0.5 to 0.5 (because 0 is the center). We can enter these values right into the Math CHOP:
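The Range page is doing a simple linear remap. As a sanity check, here's the same math in plain Python; for this particular 0-1 to -0.5-0.5 case, it reduces to just subtracting 0.5:

```python
def remap(value, from_min, from_max, to_min, to_max):
    """Linear re-range, the same operation as the Math CHOP's Range page."""
    t = (value - from_min) / (from_max - from_min)
    return to_min + t * (to_max - to_min)

print(remap(0.5, 0.0, 1.0, -0.5, 0.5))  # NVIDIA's center → 0.0 (TD's center)
print(remap(1.0, 0.0, 1.0, -0.5, 0.5))  # right edge → 0.5
```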

[Screenshot: Math CHOP Range page re-ranging 0-1 to -0.5-0.5]

Great! Now we can put a Null CHOP after this and create drag-and-drop references from our UV positions to our Rectangle TOP's Center parameters:

[Screenshot: Null CHOP channels referenced on the Rectangle TOP's Center parameters]

Amazing! The box is already tracking our face. The only step left is to have it resize dynamically to match our face. The process is very similar to what we just did for the UV positions. We'll start by creating a new Select CHOP to isolate the width and height channels:

[Screenshot: Select CHOP isolating the width and height channels]

In this case, since our width and height are already normalized, we can reference them directly on our Rectangle TOP's Size parameters without further processing. The only thing we'll change before creating the reference is how the Size parameter is currently set up: it normalizes by aspect ratio. In most cases, other data sources won't do this, so we'll switch the A to an F on the right side of the parameter:

[Screenshot: Size parameter's unit mode switched from A to F]

Now we have truly normalized coordinates to work with. We can create a Null CHOP and repeat the same drag-and-drop referencing process:
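Putting the whole chain together, the four bounding-box channels map onto the Rectangle TOP's Center and Size like this (a plain-Python sketch of the data flow, not TouchDesigner code; the dictionary keys are just labels):

```python
def rect_params(u, v, w, h):
    """Map Face Track CHOP bounding-box channels to Rectangle TOP values.

    The center shifts NVIDIA's 0-1 UV space into TouchDesigner's
    -0.5 to 0.5 fraction space (the Math CHOP step); width and height
    pass straight through once the Size parameter uses plain
    fractions (F) instead of aspect-normalized units (A).
    """
    return {
        "center": (u - 0.5, v - 0.5),  # same result as the Math CHOP re-range
        "size": (w, h),
    }

print(rect_params(0.5, 0.5, 0.2, 0.3))
# → {'center': (0.0, 0.0), 'size': (0.2, 0.3)}
```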

[Screenshot: Null CHOP channels referenced on the Rectangle TOP's Size parameters]

The final thing left is some styling! You can decide how you want to style your bounding box. In my case, I'll make it a green rectangular outline by changing the Rectangle TOP parameters as follows:

[Screenshot: Rectangle TOP styling parameters for a green outline]

This will give you the following result:

[Screenshot: green bounding box tracking the face in the video feed]

Further TouchDesigner face tracking examples

AR filters are also a great use of facial tracking data. In the video below, we cover how to create AR tracking sunglasses using the face tracking setup we created here:

Wrap up

Whether you're new to TouchDesigner or a seasoned pro, being able to integrate face tracking in TouchDesigner is exciting. It's easy to set up and use, which makes it accessible to everyone. And because it's fully GPU-accelerated, it's now definitely the go-to way to perform face tracking over OpenCV. Enjoy!