AI and the AV Experience
My passion is experiential technology. I love designing spaces that create the emotional impact necessary to cement a call to action. Experience design is equal parts technology and behavioral psychology: great experiences take people from “Wow! That’s cool.” to “Whoa! I need to do something about this.”
Because people generally respond to their environment and absorb information in predictable ways, some concepts in experience design typically work. We talk about immersion, stacking senses, and agency as three of the core ways we create actionable insight and buy-in. Although these broad strokes can get us most of the way there, the truth is that each individual will interpret the experience differently based on their cultural background, age, personal experiences, and neurodiversity.
The challenge has always been applying the general principles of experience design to create an experience broad enough to reach all of these individuals while remaining impactful and coherent to the audience as a whole. Today, however, artificial intelligence (AI) may be the unlock for creating variation within the larger environment that allows for personalized experiences.
I have to quickly point out that I have somewhat of a bias against AI. I believe the term AI has been drastically overused and exaggerated to describe a plethora of technologies that are not really “intelligent.” Many systems that use the AI moniker are just utilizing a preset algorithm and logic to execute a set of predefined tasks. A camera system that senses the location of sound and then frames a face based on facial geometry is an awesome addition to a collaboration space, but at its core, it’s not really intelligent. A control system that reads an NFC tag in your phone to activate a profile is personal, but not intelligent. A system that learns your behaviors over time to predict your needs is machine learning, but it’s not intelligent. I wanted to point all this out because I have a fairly high bar for what AI entails, and I believe we’re on the path, with glimpses of it along the way, but we’re not there yet.
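To make that distinction concrete, the NFC-profile example above boils down to a simple lookup. This is a minimal sketch, not any vendor’s actual implementation; the tag IDs and profile settings are hypothetical:

```python
# Hypothetical sketch: "personalized" room control via NFC tag lookup.
# The system feels smart to the user, but it is a predefined mapping --
# the same tag always produces the same profile, with no learning or judgment.

PROFILES = {
    "tag-004A": {"lighting": "presentation", "display": "laptop", "volume": 35},
    "tag-00B7": {"lighting": "video-call", "display": "codec", "volume": 50},
}

DEFAULT = {"lighting": "neutral", "display": "off", "volume": 30}

def activate_profile(tag_id: str) -> dict:
    """Return the room settings for a scanned tag -- pure lookup, no inference."""
    return PROFILES.get(tag_id, DEFAULT)
```

Scanning a known tag activates its preset; an unknown tag falls back to the default. Everything the system will ever do was decided before the user walked in.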
If you don’t believe me, look at the recent stories about OpenAI’s ChatGPT producing completely fabricated material for legal cases, or take a look at the AI-generated imagery from the latest Canadian political ads below.
A person viewing this quickly realizes there’s a third arm wearing a different shirt; however, the AI generator obviously doesn’t reflect upon its art, it just creates it.
“Knowledge is knowing that a tomato is a fruit; wisdom is not putting it in a fruit salad.” -Miles Kington
Intelligence is a mixture of knowledge and wisdom. Thinking requires logic, but expression requires something more. It requires knowledge of how the audience will respond to what is being expressed. To go a step further, it has to anticipate how an individual will react in order to decide what they need to see, hear, and feel.
Go back to the camera system in a meeting that shows the audience the active speaker. It’s making a decision for the audience about what’s important for them to see, but in real life, how often do you just stare at a speaker for several minutes? People typically look around the room to judge others’ attention, or watch the reaction of the CEO as someone else delivers a piece of information to determine how it was received. The AI we have today would miss these nuances, just showing you who is talking based on a predetermined algorithm.
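That predetermined algorithm is, at its core, something like the simplified sketch below. The microphone levels and framing choices are hypothetical stand-ins, not a real tracking system’s logic:

```python
# Hypothetical sketch of active-speaker framing: the camera simply follows
# the loudest microphone. It has no way to notice that the audience is
# watching the CEO's reaction rather than the person talking.

def pick_camera_target(mic_levels: dict, threshold: float = 0.2) -> str:
    """Return the seat with the loudest mic, or a wide shot if nobody is speaking."""
    seat, level = max(mic_levels.items(), key=lambda kv: kv[1])
    return seat if level >= threshold else "wide-shot"
```

However sophisticated the face-finding on top of it, the framing decision itself is a fixed rule: loudest source wins.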
So hard logic falls short. In fact, as one of my favorite authors, Dan Ariely, has written, people are Predictably Irrational. In experience design, this is always in the back of my mind.
AI will be an increasingly important part of creating personalized AV experiences, but it has a long way to go to get there. AI-generated art is beautiful and never the same twice. AI-generated soundscapes escape the boundaries of pattern recognition and listener fatigue to create calm and a sense of place. Real-time location services and personal profiles can curate language and content at an individual level, experience by experience.
Technology is creating its own neural network of sensors, algorithms, microphones, speakers, screens, and cameras that keeps getting better at making logical decisions. The next step, though, is the reflective phase. Can AI see its product objectively? Can it recognize in real time that the decision it made logically is not actually having the intended outcome, and then alter course to recapture the audience?
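One way to picture that reflective phase is as a closed feedback loop: measure the audience’s response, compare it to the intended outcome, and change course when they diverge. This is purely illustrative; the engagement score and content choices are invented for the sketch:

```python
# Purely illustrative: a feedback loop that changes tack when measured
# engagement falls short of what the system intended. A real system would
# need far richer signals than a single score.

def adjust_experience(engagement: float, current_content: str,
                      target: float = 0.6) -> str:
    """Keep the current content if it is landing; otherwise switch approach."""
    if engagement >= target:
        return current_content
    # The logical choice is not working -- alter course to recapture the audience.
    alternatives = {"data-visual": "personal-story",
                    "personal-story": "interactive-demo"}
    return alternatives.get(current_content, "interactive-demo")
```

The hard part isn’t the loop; it’s the measurement. Reading whether a room of soft, squishy humans is actually moved is exactly where today’s AI falls short.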
All experiences are interpreted through the soft, squishy matter of a human being. The promise of AI is to reach each one of them individually, and that will take more than logic.
Editor’s Note: This blog is part of a series for Artificial Intelligence (AI) Appreciation Day, which is held annually on July 16. Click here to read more AI Day stories from LAVNCH [CODE] and click here to read more AI Day stories from rAVe [PUBS].