Since the release of the large language model ChatGPT and image generation tools like Stable Diffusion, Midjourney, and DALL-E over the past few years, interest in (and hype around) tools built on technology generally termed “AI” or “machine learning” has grown rapidly. More and more companies are introducing tools that use these technologies for an ever-expanding range of applications, including writing, photo editing, text-to-speech, and more. In this article, we’ll look at several recently released and upcoming machine learning tools that might be of interest to interactive developers, including several focused on coding and visual effects.
Machine Learning Tools for Coding
GitHub Copilot
Cost: $10/month or $100/year for individuals; $19 per user per month for businesses
Although it’s been around for a little while (initially released in October 2021), GitHub Copilot is one of the more prominent coding assistance tools. Copilot is powered by OpenAI’s Codex, a modified form of the GPT-3 large language model trained to generate code from natural language input. Copilot is available as a plugin for a number of popular text editors and IDEs, including Visual Studio, VS Code, Neovim, and the JetBrains IDEs. It offers several assistive features for programming, including suggesting code based on comments and translating code between programming languages. Like many of the programming tools we’ll look at, the results aren’t always perfect, but for simple use cases it can be helpful.
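To make that comment-driven workflow concrete, here’s the kind of suggestion Copilot can produce from a plain-language comment. Note that the completion below is illustrative and hand-written, not actual Copilot output — what you get in your editor will vary:

```python
# Comment-as-prompt: in an editor with Copilot enabled, typing a comment
# like the one below will often trigger a suggested implementation that
# you can accept, reject, or cycle through alternatives for.

# Convert a hex color string like "#ff8800" to an (r, g, b) tuple.
def hex_to_rgb(hex_color: str) -> tuple:
    hex_color = hex_color.lstrip("#")
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
```

As with any suggested completion, it’s worth reading the generated code before accepting it — edge cases (here, shorthand colors like “#f80”) are exactly where these tools tend to slip.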
StarCoder
Cost: Free, OpenRAIL-M license
StarCoder is a new large language model for code generation released by BigCode (a collaboration between Hugging Face and ServiceNow) that provides a free alternative to GitHub Copilot and other similar code-focused platforms. It is released under a royalty-free license that allows users to freely modify and redistribute the model, and it was trained on code from over 80 programming languages. Like Copilot, it can be used within IDEs like VS Code via a plugin, but it can also be run on its own to generate code from text prompts.
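As a sketch of what running StarCoder on its own could look like, here’s a minimal example using Hugging Face’s transformers library. Assumptions: the transformers and torch packages are installed, you’ve accepted the model license for the bigcode/starcoder checkpoint on the Hugging Face Hub, and your machine has enough memory for a model of this size — this is a sketch of the workflow, not a turnkey script:

```python
def generate_code(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a code completion for `prompt` with StarCoder.

    The heavy imports live inside the function so the sketch can be read
    (and the helper defined) without the model or libraries installed.
    Assumes transformers/torch are installed and the model license has
    been accepted on the Hugging Face Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
    model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Give the model a function signature and let it write the body.
    print(generate_code("def fibonacci(n: int) -> int:\n"))
```

In practice you’d likely run this on a GPU and tune the generation parameters, but the shape of the workflow — tokenize a prompt, generate, decode — is the same.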
Phind: AI Search Engine
Unlike the other two developer-focused tools we’ve looked at, Phind is not a plugin for IDEs and text editors, but a web-based search engine focused on developers and technical questions. It answers questions with detailed explanations and code examples, and because it pulls its information from live sources on the web, its answers can stay more current than those of tools limited to a fixed training set. In that way, it’s much more like a “smart” search engine for finding information. And since it’s free and requires no setup, download, or even an account, it looks to be a great resource for developers, regardless of platform.
Upcoming Machine Learning Tools for Visual Content
Adobe Firefly
Cost: Included in Adobe subscription, ~$54.99 per month
Adobe Firefly is the recently released beta version of what will eventually be a family of creative generative AI tools, part of Adobe’s larger Sensei project (which includes other AI tools focused on marketing and analytics). At this point, Firefly includes an image generation tool similar to what we’ve seen from competitor platforms like Stable Diffusion, generating high-quality images from text prompts, as well as tools for vector recoloring and generating text effects. But as shown on the Firefly webpage, Adobe has some other very interesting tools in development for the family, including context-aware image editing and text-based video editing, both of which seem like they could be useful when making project proposals and mock-ups.
NVIDIA Graphics Research at SIGGRAPH 2023
NVIDIA is gearing up to show some very impressive generative graphics tools at the 2023 SIGGRAPH conference, being held in Los Angeles from August 6–10. NVIDIA will present around 20 papers showcasing this research, which includes physics models that simulate realistic 3D elements, tools that generate 3D objects from still images, neural rendering models, and more. One highlighted example demonstrates a technique for simulating tens of thousands of hairs in real time using a neural physics model.
Continuing in the world of 3D, another paper describes new AI data compression techniques that are said to “[decrease] by 100x the memory needed to represent volumetric data — like smoke, fire, clouds and water.”¹ It’ll be exciting to see whether these sorts of improvements make their way into the NVIDIA Flow toolset, and how they will impact the performance of the simulations.
These are only a handful of the AI/machine learning tools out there, but as you can see even from this small sample, there are many different ways the technology is being put to use. Beyond the coding tools already available, it’s especially exciting to see the impact that machine learning models will have on real-time graphics generation. Hopefully this post has given you some ideas for new machine learning tools to check out, as well as brought some upcoming tools onto your radar!