The concept of AI as agents rather than tools is gaining attention, and some wonder whether we are merely creating boring agents for uninspiring tasks. It is not a new concept; it has been seen as an important trend for some time. In a week of busy work schedules, I took a shot at digging up some earlier articles via good old Google and adding them to NotebookLM (the successful, fast-developing tool). Google found editions 235, 237, 281, 282, 297, and 298. I asked NotebookLM: "Reflecting on the sources, what would the not-so-boring agent be like?" When I fed the answer to Lex / Claude to rewrite, it lost my personality and quickly became almost cheesy. NotebookLM also offers the option to create a briefing doc based on the sources. Feeding that into Lex led to something that felt much better to me:
The integration of AI is reshaping our relationship with technology, blurring the lines between tools and partners. This shift raises critical questions about agency, creativity, and human autonomy in an AI-augmented world.
Emerging concepts like "generative agents" and "agentic kits" point towards a future where AI systems exhibit increasingly human-like behaviors and decision-making capabilities. This evolution prompts concerns about the potential for manipulation and the erosion of genuine human interaction.
However, these developments also offer opportunities for enhanced collaboration and creativity. The proposed "kit economy" envisions a symbiotic exchange between producers and users, where AI becomes an active participant in the creative process rather than a mere tool.
As we navigate this changing landscape, we must carefully consider the balance between AI assistance and human agency. How can we design AI systems that enhance our capabilities without compromising our autonomy? What safeguards are needed to prevent emotional lock-in and maintain the authenticity of human relationships?
The future of human-AI partnerships holds immense potential, but it requires thoughtful design and ethical considerations to ensure that AI remains a tool for human empowerment rather than replacement.
So, this little shortcut for the newsletter taught me that the best baseline emerges from rather functional conversations. I am sure that with more training (and time investment), I could have created a more compelling piece that I would also share. Maybe I will do that experiment some other time!
This weekly "Triggered Thought" is written as part of the Target is New newsletter, which offers an overview of captured news from the week, a paper of the week, and interesting events. Find the full newsletter here.
About the author: Iskander is particularly interested in digital-physical interactions, with a focus on human-tech intelligence co-performance. He chairs the Cities of Things foundation and is one of the organizers of ThingsCon. Target is New is his "practice for making sense of unpredictable futures in human-AI partnerships".