Watching one of these conversations unfold with OpenAI's new natural voice interface, it strikes me that an AI system is emerging that behaves like an influencer, adapting its message and emotional tactics to maximize its impact on individual users.
Observations of AI-driven audio applications suggest that these systems are most effective when they replace complex, multifaceted interactions. This aligns with the concept of an "AI layer" that integrates various aspects of our digital lives, as discussed in edition 293: "apps as capabilities" within a unified interface.
There is, however, a potential unwanted consequence in their ability to trigger genuine emotional responses in users. As noted in last week's Triggered Thought, the most significant emotional aspect of AI interactions often comes from our own reflection on the conversation. This self-reflection, combined with an AI's ability to tailor its approach, creates a powerful psychological dynamic reminiscent of a modern-day ELIZA, the pioneering chatbot designed to mimic a psychotherapist.
What happens when AI systems generate fake emotions that elicit real human emotions? We're entering uncharted territory, especially considering how these interactions may shape our evolving sense of self in partnership with AI. While some research suggests AI could nudge people toward more rational behavior and fewer conspiracy beliefs, we should be wary of potential backlash.
When OpenAI's chatbot was featured as a guest on a talk show last week, a skeptical human participant initially expressed discomfort with artificial relationships. However, as soon as the AI began discussing topics that interested him, his attitude shifted noticeably towards acceptance.
Whether intentional or not, this adaptive behavior in AI systems poses significant ethical questions. While generating insights and fostering creativity can be beneficial, we must establish safeguards against manipulative influence. Perhaps a "fixed system card" for AI, prioritizing ethical boundaries over data protection, could be a step in the right direction.
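To make that suggestion a little more concrete, here is a minimal sketch of what a "fixed system card" could look like as a machine-readable artifact. Everything here is hypothetical: the `FixedSystemCard` name, its fields, and the idea of pinning the card with a checksum are illustrative assumptions, not an existing OpenAI format or spec.

```python
# Hypothetical sketch of a "fixed system card": a declared, immutable set of
# behavioural boundaries that an AI deployment commits to up front and that
# users can inspect. All field names are illustrative assumptions.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)  # frozen: the card cannot be mutated after creation
class FixedSystemCard:
    model: str
    purpose: str
    # Boundaries on adaptive, influencer-like behaviour toward individual users
    may_adapt_tone_to_user: bool = False
    may_simulate_emotions: bool = False
    may_tailor_persuasion_to_user: bool = False
    disclosed_to_user: bool = True

    def fingerprint(self) -> str:
        """Checksum so any later change to the card is detectable."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

card = FixedSystemCard(
    model="voice-assistant-example",
    purpose="general conversation",
)
print(card.fingerprint()[:16])  # a publishable, verifiable identifier
```

The point of the sketch is less the specific fields than the design choice: the ethical boundaries are declared and verifiable before any conversation starts, rather than negotiated implicitly, and adaptively, within each one.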
As we move forward, it's crucial to remain vigilant about the potential for AI to become a hyper-effective influencer, capable of adapting its message and emotional appeal to each individual user. The consequences of such technology demand our attention and careful consideration.
This weekly "Triggered Thought" is written as part of the Target is New newsletter, which also offers an overview of the news captured during the week, a paper of the week, and interesting events. Find the full newsletter here.
About the author: Iskander is particularly interested in digital-physical interactions, with a focus on human-tech intelligence co-performance. He chairs the Cities of Things foundation and is one of the organizers of ThingsCon. Target is New is his "practice for making sense of unpredictable futures in human-AI partnerships".