When you start to build installations or art pieces that require more than one computer or more than one process running, sync quickly becomes a headache. There are so many different ways to create “sync” and even the word “sync” can mean so many different things to different people. In this post, I’m going to talk about the different kinds of sync that applies to TouchDesigner and most other interactive systems.
Kinds of sync
The TouchDesigner wiki does a great job summarizing some of the common types of sync you’ll encounter in the wild. These include:
- No sync
- Weak software sync
- Medium software sync
- Tight software sync
- Hardware sync
No sync is obvious: that’s when you just have two projects running and they don’t really communicate or stay in sync. I’m sure you’ve all experienced “hitting play on both computers at the same time” styles of sync, and those fall under no sync. Before we break down the rest of them, let’s discuss the difference between hardware sync and software sync.
Hardware vs software
The difference between hardware sync and software sync is small but important in the big picture. It’s important to remember that we work in pipelines, and these pipelines are often complex and have many pieces. One thing that is tricky with the idea of sync (which ties into stutter-free and tear-free playback) is that every stage in the pipeline needs to be equipped for sync. It does no good if I have tight software sync and hardware sync on my machine, then send out 4x SDI outputs that travel through different signal paths and converters that don’t have a sync signal generated for them. At that point, I might have full software and hardware sync as frames leave my computer, but there’s still a chance of seeing non-perfect playback on my displays.
Now zooming back in on the content itself: software sync refers to synchronizing the generation and frame rendering of content across multiple computers or multiple processes on the same computer. It relates entirely to the applications themselves moving forward through time together at different levels of sync, and has nothing to do with outputs or displays. You could hypothetically have a project that needs tight software sync to keep TouchDesigner and Notch and Unreal locked in time, but absolutely no hardware sync since there’s only one display. This is why we separate how we think about these things.
The opposite is also true! There are projects out there that require hardware sync but no software sync. This is because hardware sync is only concerned with frames as they are transmitted from your software and leave the outputs or get sent to displays. So if you had a single TouchDesigner application open on a system, but it was outputting 16x 4K signals out of 4x GPUs, then you’d need hardware sync but no software sync.
Weak, medium, or tight software sync
Omg why so many versions?
Now that you know the difference between software and hardware sync, we should explore the different levels of software sync. But why are there three different versions of sync? Can’t we just flip the “sync switch” and either have everything fully in sync or not? I wish! The reality of the situation is that perfect synchronization requires a lot of extra development and introduces its own kind of overhead in your projects. There are also many situations where perfect content sync isn’t even required to create a consistent experience. If your screens aren’t exactly edge-to-edge, or if your content is more ethereal and slow-moving, you may not even be able to notice if the content went out of sync by a few frames. So it’s always a good idea to find the least amount of sync required, so that you get the benefits of sync without all of the extra hassle.
Weak sync is really only barely a step up from no sync. Weak sync is mostly about triggers being sent across machines using protocols that ensure messages are delivered. In weak sync, the systems may or may not receive the data at the same time; nothing is really guaranteed other than the fact that each system will receive the message at some point. Weak sync would be things like:
- Having a 0 to 1 signal that leaves one system and arrives at the other systems to trigger movie files to play at the same time
- Sending performance data and control signals between different rendering machines live
There’s no real guarantee of sync here; you’re just hoping signals reach their destinations around the same time. Like we mentioned earlier, if your displays aren’t particularly close together or if your media isn’t particularly long, your systems may not really see that much time drift between them, and it could work just fine.
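To make the idea concrete, here’s a minimal sketch of weak sync assuming a reliable transport (plain TCP here): a control machine sends a “PLAY” trigger, and the render machine starts its movie whenever the message happens to arrive. The port number and message text are invented for this example; real setups would use whatever protocol your tools speak.

```python
# Weak sync sketch: reliable delivery of a trigger, no timing guarantees.
import socket
import threading

TRIGGER_PORT = 7005  # arbitrary port for this sketch
received = []
ready = threading.Event()

def render_machine():
    # Render machine: block until the trigger arrives, then "start playback".
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", TRIGGER_PORT))
        srv.listen(1)
        ready.set()  # let the sender know we're listening
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64).decode() == "PLAY":
                received.append("PLAY")  # stand-in for starting a movie file

t = threading.Thread(target=render_machine)
t.start()
ready.wait()

# Control machine: TCP guarantees the message is delivered eventually,
# but says nothing about *when* each receiver acts on it -- which is
# exactly why this only counts as weak sync.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as ctl:
    ctl.connect(("127.0.0.1", TRIGGER_PORT))
    ctl.sendall(b"PLAY")

t.join()
print(received)  # ['PLAY']
```

Once that trigger fires, each machine free-runs on its own clock, so any drift after the start is never corrected.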
Medium sync is what most people think of when it comes to sync. This revolves around using the fastest protocol you can (usually OSC or other UDP-based protocols) to send frame numbers, timecode, timeline information, or any other similar “driving” control signal. Whereas in weak sync you would send a pulse to start the playing of two movie files on different systems, in medium sync the master system sends a frame counter that tells every render system exactly what frame it needs to be on at this exact moment in the video. For many types of installations this is more than enough sync, and when you’re collaborating with other apps (TouchDesigner to Notch to Unreal, etc.) this is usually the type of sync you’ll be using. Medium levels of sync will often be within 1-5 frames of synchronization between the systems. These differences usually occur if one system drops a frame and the others don’t, or if the control signals encounter some network hiccup along the way and are delayed. It’s also important to remember (as we’ll see in a moment) that this doesn’t actually synchronize the underlying engine of your app. Your control system could be at the start of its frame rendering, while one render system is in the middle of its engine’s rendering and another is at the end of its frame. These kinds of discrepancies between the underlying engines contribute to that 1-5 frame offset.
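The master-streams-a-frame-counter idea can be sketched with raw UDP (a stand-in for OSC or any other UDP-based protocol); the port and packet format here are made up for illustration. The master sends the current frame number every tick, and the render node hard-sets its playback position to whatever arrives.

```python
# Medium sync sketch: a master streams frame numbers over UDP and a
# render node jumps to whatever frame it last received.
import socket
import struct
import threading

SYNC_PORT = 7006  # arbitrary port for this sketch
frames_seen = []
ready = threading.Event()

def render_node(expected):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", SYNC_PORT))
        sock.settimeout(2.0)
        ready.set()
        for _ in range(expected):
            try:
                data, _ = sock.recvfrom(8)
            except socket.timeout:
                break  # a lost packet: free-run until the next counter arrives
            (frame,) = struct.unpack("!I", data)
            frames_seen.append(frame)  # stand-in for jumping playback to `frame`

t = threading.Thread(target=render_node, args=(5,))
t.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as master:
    for frame in range(5):
        # UDP trades reliability for speed: if a counter packet is lost,
        # the renderer just snaps back into place on the next one.
        master.sendto(struct.pack("!I", frame), ("127.0.0.1", SYNC_PORT))

t.join()
print(frames_seen)  # [0, 1, 2, 3, 4] on loopback
```

Note that nothing here coordinates *when* each node renders the frame it was told about, which is exactly the 1-5 frame slack described above.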
Tight sync is the pinnacle of software sync. This will be difficult to do between different applications because it really is sync at the engine level. In TouchDesigner, we can essentially synchronize the individual steps of the underlying TouchDesigner engine. If one app drops a frame, all of the other apps in tight sync wait for it, so that they are always marching perfectly in step through their rendered frames. It also ensures that data is quickly and reliably received by all systems, so they’re acting in lock-step to controls. Most high-end applications will have some form of syncing multiple instances or computers together, but it’s very rare that you’ll be able to create engine-level sync between TouchDesigner and Notch and Unreal. You’ll find high-end TouchDesigner pros use this on large projection walls and video screen arrays with multiple systems driving them.
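The lock-step idea can be illustrated with a frame barrier: no node may begin frame N+1 until every node has finished frame N. This is purely a conceptual sketch of the mechanism (using threads standing in for machines), not TouchDesigner’s actual implementation.

```python
# Tight sync sketch: a barrier forces all nodes to finish frame N
# before any node starts frame N+1, so a slow node stalls everyone.
import threading
import time

N_NODES = 3
N_FRAMES = 4
barrier = threading.Barrier(N_NODES)
lock = threading.Lock()
log = []

def node(name):
    for frame in range(N_FRAMES):
        if name == "B" and frame == 1:
            time.sleep(0.05)  # node B "drops" a frame by rendering slowly
        with lock:
            log.append((frame, name))  # stand-in for rendering this frame
        barrier.wait()  # fast nodes wait here until B catches up

threads = [threading.Thread(target=node, args=(n,)) for n in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All nodes finished frame 0 before any node started frame 1, and so on.
frame_order = [f for f, _ in log]
print(frame_order == sorted(frame_order))  # True
```

The cost is visible in the sketch too: when node B stalls, every other node’s frame is delayed with it, which is the overhead tight sync trades for perfect lock-step.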
There are many kinds of sync, and we haven’t even dived into how to implement any of them. In part two of this series we’ll talk about hardware sync and how to actually implement these different kinds of software sync, but without this fundamental knowledge, none of that would make sense. Visualizing synchronization as a spectrum, rather than an on-off switch, is the most important thing you can take away from all this, and it’s what will set you up to make better decisions in our next blog post. Sync is a tricky topic! There’s no way around it, but hopefully this helps demystify the concepts for you!