There’s a deeply creative process involved in taking any immersive art piece from concept to reality, from the storytelling to the engineering. With the rise of immersive AI and related technologies, artists and creative technologists are transforming that journey by using artificial intelligence as a creative tool.
In this post, we’ll explore how immersive AI and immersive technologies can support every phase of an interactive installation: conception, design, production, and installation.
Artificial Intelligence for Immersive Artists
This 2-hour workshop taught by immersive artist Josue Ibanez demystifies generative AI and shows you how to harness its potential in your art practice. You’ll learn how to incorporate AI into your art-making process, from generating unique visuals and refining concepts to automating creative tasks.
AI and the Artist: Key Considerations
Here are a few principles I keep in mind when incorporating AI into my process:
AI is a collaborator, not a replacement
Use it to accelerate your process, but know it often gets you only 50–80% of the way there; the final polish still needs your human hand.
Communicate AI boundaries
Discuss with your team where and how AI will be used, especially if collaborators have ethical or creative concerns about its involvement and impact.
Know your dependencies
AI renders can give you a vision, but they aren’t blueprints; whatever they show still needs to be built. Make sure you understand the gap between concept and feasibility.
Own what you generate
Understand licensing, authorship, and dataset ethics. AI-generated work isn’t always yours by default, so read the fine print.
Check for bias
Artificial intelligence outputs reflect the bias in their training data. Try to understand what your AI model was trained on and check for blind spots, especially when dealing with aesthetics, identities, or representation.
Respect human collaborators
Don’t use AI to replace artists you could otherwise hire. Use it to prototype and save time, then bring others in to refine and elevate the work.
Stay sharp
Don’t let immersive AI erode your creativity or technical skillset. Continue to write, sketch, or code without it to maintain natural fluency in your craft.
Know the cost
Generative models consume real energy and resources. We live on a planet with limited resources, so be intentional and keep in mind how to use them wisely and effectively.
With these things in mind, here are the phases of a project where I’ve seen or used AI to help in the process.
Conception: More Ideas, Faster
The conception phase is often filled with ambiguity: What’s the story? What will clients/customers feel? What will it look like? AI tools like ChatGPT and Midjourney can dramatically accelerate this early stage.
- ChatGPT is a powerful brainstorming platform. Artists can describe themes and receive back narrative frameworks, interaction ideas, and even lists of verbs to drive engagement (e.g., touch, whisper, rearrange). It’s also useful for envisioning how concepts might come to life on devices like the Apple Vision Pro.
- Midjourney, DALL-E, and other image-generation applications allow you to visualize complex or abstract ideas instantly. Want to see what a warehouse full of bamboo plants growing from the light of machine arms looks like? Generate 20 visual references in a few minutes.
When used in these ways, these tools can amplify creativity and speed up development. Instead of getting stuck on one idea, you can explore dozens quickly and take control from there.

ABOVE: Midjourney images for a hanging installation proposal.
Design Part 1: From Sketch to System
Once the concept is in place, immersive AI supports design across visuals, logic, and interaction systems.
- Runway ML offers accessible machine learning models for artists (like background removal, motion tracking, or style transfer) without writing code. This is perfect for pre-visualizing how an installation might look and feel at full scale. Free and paid subscription plans are available.
- In TouchDesigner, creative coders can integrate interactive AI models for real-time use. For example, you can use OpenCV or MediaPipe to track people’s movement or Stable Diffusion to generate imagery based on live inputs.
- Sketch-to-3D tools can convert 2D sketches into 3D assets, giving artists a head start on spatial layout and scenic design.
- AI auto-rigging is another amazing tool for building the pre-visualization of your experience, and it can carry through to the final product. Mixamo has been doing it for a while, and several commercial alternatives are available as well.
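When feeding live tracking data (say, MediaPipe landmarks) into TouchDesigner visuals, raw coordinates are usually too jittery to drive graphics directly. Here is a minimal sketch of the common fix, exponential smoothing; the class name and alpha value are illustrative, not from any specific project:

```python
# Hypothetical sketch: exponential smoothing for jittery camera-tracking
# coordinates (e.g. normalized MediaPipe landmarks) before they drive visuals.

class SmoothedPoint:
    """Smooths a stream of normalized (0-1) x/y tracking coordinates."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # lower alpha = smoother but laggier motion
        self.x = None
        self.y = None

    def update(self, raw_x: float, raw_y: float) -> tuple[float, float]:
        if self.x is None:               # first sample: no history yet
            self.x, self.y = raw_x, raw_y
        else:                            # blend new sample into the running value
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y


# Noisy samples jittering around (0.5, 0.5) settle toward the true position.
tracker = SmoothedPoint(alpha=0.5)
for raw in [(0.4, 0.6), (0.6, 0.4), (0.5, 0.5)]:
    x, y = tracker.update(*raw)
```

In TouchDesigner you would call `update()` once per frame with the tracker’s latest output and use the smoothed values to position your geometry.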
Let’s say you want your installation to reflect your painting style. Train a model on your work, and you now have a real-time engine that outputs new content in your aesthetic (like your digital twin). Artists are already doing this for generative projection art, evolving paintings, and engaging visuals that shift with each visitor.

ABOVE: Using Mixamo for Auto-Rigging
Design Part 2: From System to Reality
For creating your budget, spec’ing your gear, and planning your install, AI like ChatGPT can help speed up your process.
- Helping check voltage needs.
- Helping to build a quick prototype.
- Helping to source the best gear locally, find local partners and services, or find solutions with examples for specific installation requirements.
- Translating terminology between architecture, engineering, design, and other specialized wheelhouses.
- Creating roadmaps and phases for your projects.
- Drafting up your SOWs.
AI is amazing at helping you find the exact projector lens you need and where to get it, calculate the power draw of your LEDs, or provide an initial consultation on your project’s needs. Make sure you take the time to review and check the work, and have a lawyer sign off on anything with significant legal or contractual dependencies.
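The power-draw check above is simple arithmetic you can verify yourself before trusting any AI answer. A sketch, with illustrative numbers (a common worst-case figure for addressable LEDs is around 0.3 W per LED at full white; always confirm against your actual hardware specs and local electrical code):

```python
# Hypothetical sketch: sanity-checking LED power draw against a circuit
# budget. Per-LED wattage and breaker rating are illustrative values.

def led_strip_watts(num_leds: int, watts_per_led: float = 0.3) -> float:
    """Worst-case draw for a strip at full white brightness."""
    return num_leds * watts_per_led


def fits_circuit(total_watts: float, volts: float = 120.0,
                 breaker_amps: float = 15.0, safety: float = 0.8) -> bool:
    """True if the load stays under 80% of the breaker's rated capacity."""
    return total_watts <= volts * breaker_amps * safety


strips = 4
leds_per_strip = 300
total = strips * led_strip_watts(leds_per_strip)  # 4 strips * 90 W = 360 W
ok = fits_circuit(total)                          # 360 W vs. a 1440 W budget
```

Running the numbers yourself like this makes it easy to spot when a chatbot’s electrical math is off, before an electrician or venue manager does.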
Production: Building with AI
Immersive AI can also be part of the production pipeline, both as a development assistant and as a media generator.
- ChatGPT and GitHub Copilot help with writing Python scripts in TouchDesigner, generative shaders, and even logic for microcontroller interactivity.
- AI data-analysis tools like Google’s BigQuery ML can help you manage large datasets so your piece can react dynamically or exclude outliers. They can also help you source large datasets and trends to visualize.
- AI Upscalers and Compressors like those from Topaz Labs can help optimize heavy media assets, especially when targeting embedded systems or playback devices with limited horsepower.
- AI Video Rendering can also add another level, allowing for the content to adapt in real-time to video or prompts from you or your audience.
I’ve used all of the above to some capacity, and they come in handy in a pinch to strengthen your pipeline; just make sure you thoroughly test the last three and have a strategy before making them part of your workflow.
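Before reaching for an upscaler or compressor, it helps to know which assets actually exceed what your playback device can handle. A rough sketch of that triage step, using a back-of-the-envelope bitrate estimate (the bits-per-pixel factor and device limit are illustrative; check your player’s spec sheet):

```python
# Hypothetical sketch: flagging media assets too heavy for a playback
# device with limited horsepower. All numbers are illustrative.

def estimated_bitrate_mbps(width: int, height: int, fps: float,
                           bits_per_pixel: float = 0.1) -> float:
    """Rough compressed-video bitrate estimate from resolution and frame rate."""
    return width * height * fps * bits_per_pixel / 1_000_000


def needs_optimization(assets: list[tuple[int, int, float]],
                       device_limit_mbps: float = 25.0) -> list[bool]:
    """Flag each (width, height, fps) asset that exceeds the device limit."""
    return [estimated_bitrate_mbps(w, h, f) > device_limit_mbps
            for (w, h, f) in assets]


flags = needs_optimization([(1920, 1080, 30.0),   # ~6.2 Mbps: fine as-is
                            (3840, 2160, 60.0)])  # ~49.8 Mbps: too heavy
```

Only the flagged assets then need a trip through your compression or upscaling tools, which keeps render time down on a tight install schedule.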
Installation: Real-Time Responsiveness
Artificial intelligence brings installations to life by enabling systems that adapt to users in real time.
- Computer vision like OpenCV and MediaPipe can track bodies, gestures, and movement using only a camera, replacing expensive or clunky sensor setups with smart, scalable software.
- You can use YOLO (You Only Look Once) or MediaPipe inside TouchDesigner to create systems that react to gaze, proximity, or motion.
- Conversational AI can serve as a character, narrator, or guide. In some immersive experiences, visitors can actually talk to an AI embedded in the environment that reacts to speech.
- Support documentation can be lengthy to put together, and framing it as a handoff so the onsite crew knows how to run the space is immensely helpful. AI can help condense the technical jargon down to each person’s level of comfort.
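Once a detector like YOLO or MediaPipe gives you visitor positions, the reactive logic is often just distance math. A minimal sketch of a proximity trigger of the kind described above; the zone layout, coordinates, and 1.5 m radius are hypothetical:

```python
# Hypothetical sketch: turning tracked visitor positions (e.g. from YOLO
# or MediaPipe running in TouchDesigner) into a proximity trigger.

import math


def nearest_visitor_distance(zone_center: tuple[float, float],
                             visitors: list[tuple[float, float]]) -> float:
    """Distance in meters from a zone to the closest tracked visitor."""
    return min(math.dist(zone_center, v) for v in visitors)


def zone_active(zone_center: tuple[float, float],
                visitors: list[tuple[float, float]],
                radius: float = 1.5) -> bool:
    """True when any visitor is within the trigger radius."""
    if not visitors:
        return False
    return nearest_visitor_distance(zone_center, visitors) <= radius


# Two visitors on a floor plane (meters); the projection zone wakes up
# when someone steps within 1.5 m of its center.
visitors = [(0.0, 4.0), (2.0, 1.0)]
active = zone_active((2.0, 2.0), visitors)
```

The same pattern scales to multiple zones by looping over zone centers each frame, which is usually cheap enough to run alongside the detector itself.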
It’s very exciting that the creative technologist’s toolkit now includes text-to-image models, real-time object detection, and conversational agents! When used well, these are amazing ways to expand the experience you’re creating.


ABOVE: Using YOLO to recognize masks as objects in a puzzle for Otherworld
Final Thoughts
AI won’t make great art on its own, but it can help you get there faster, experiment more freely, and create immersive experiences and events that evolve with each interaction. So whether you’re building a projection-mapped environment that breathes or a generative forest that listens, immersive AI might just be your most flexible collaborator yet, and there are millions of possibilities out there now and in the future.