In last Friday's episode of The Verge podcast, the hosts discuss the state of AI photography and the need for watermarking. They ask what is real in photography at all, since a photo is always a representation, often of a better self. Samsung has long been a master at bending reality, and now Apple is entering the field too, as expected. This opens up room for new, differentiating propositions. Should you fill in the background based on suggestions, or keep the blurry, unclear item that was there?
I wondered whether there is a difference in AI-enhancing and synthesizing reality between people and places, humans and objects. We are by now used to our phone “cameras” creating a synthesized version of ourselves and others in our pictures. With places and things, we might expect, and prefer, more reality. However, we are increasingly able to distort and clean up the context too. In that sense, is what we capture not the reality we want to save for our later memories, but a staging of the scripted play we are part of? The play of our perceived life at that moment.
In the end, cameras are not used primarily to capture reality. Cameras are more like sensors that capture enough data to produce a believable and idealized representation of reality.
This is all amplified by peer pressure and technical FOMO (fear of missing out: the fear of not using the technical capabilities on offer).
On the other hand, there is a counter-movement. Early generations of digital cameras seem to be becoming popular with Gen Z and younger. Force yourself to capture reality not as realistically as possible but as honestly as possible. Know the limits of the technology. See the flash show up as a flashed-out look in your picture. And by using a non-connected device, you distance yourself from oversharing. It builds in a barrier: a more conscious selection you have to make when you look at the pictures later, importing them onto your computer.
Using analog-feeling digital devices is an interesting development. I'm unsure whether it is a passing hype or a fork in the use of digital technology. To prepare, I dug up my old, tiny Canon Ixus 40.
I make one final connecting leap here. There was an interview with Professor Gusz Eiben in de Volkskrant on the missing link in ChatGPT: a body. He thinks ChatGPT is too focused on the “brain” side of intelligence. Intelligence, however, is also very much embodied; we learn through physical encounters. This is part of the theme and questions at this year’s TH/NGS 2024 on Generative Things. I could not help but connect it to the notions above. Is that “old” digital technology now part of better understanding our relation with what we feel, with tangible reality?
This weekly “Triggered Thought” is written as part of the Target is New newsletter, which offers an overview of captured news from the week, a paper of the week, and interesting events. Find the full newsletter here.
If you are curious about who is writing, I am Iskander Smit. I am educated as an industrial designer and have worked in digital technology all my life. I am particularly interested in digital-physical interactions, with a focus on human-tech intelligence co-performance. I chair the Cities of Things foundation and am the organizer of ThingsCon. Target is New is my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.