As an emerging technology, Generative AI (GenAI) is negotiable and unfinished, requiring further work. Adjusting it to everyday uses requires adopting new ways of working and ongoing experimentation. GenAI will do well because we humans make it do well. Not only do we accept AI as a teammate, but we are willing to work for it to become successful in this role.
In our Reimagine ADM project, we shift the spotlight from AI itself to the human and material relations around it. We are interested in socio-technical processes that involve value negotiations, meaning-making, and learning. When exploring the future landscapes of AI, it is essential to move beyond a narrow focus on ‘the AI thing’, which leads us to repeatedly ask whether AI reigns or is a threat. AI is not outside of our cognition or world-making. We are not only data, as John Cheney-Lippold (2017) suggested; our actions are also present in algorithms. So why consider GenAI as detached from us humans?
Strengthening co-evolution
While AI employs machinic logic, human cognition is flexible and adaptive, effortlessly adjusting to subtle changes in the broader environment. Consider, for instance, how easily humans switch from a complex numerical task to a casual conversation, or how we integrate embodied knowledge and visual information to steer ourselves through a crowded room. With the most recent GenAI capabilities, however, we see processes in which AI becomes enmeshed in complex everyday tasks. GenAI helps in generating ideas and in learning new skills. The technology does not think, but it puts sentences together in a way that appears to be thinking. In doing so, GenAI mimics thinking that humans have done over centuries, so it is no wonder that it often appears human-like, or to be reasoning. GenAI connects us to a very human body of knowledge while deepening the collaboration between humans and machines.
The coming together of humans and machines takes place in many ways, and not without tensions and value struggles. Algorithms shape our behavior as we shape them, through mutual co-evolving (Kristensen and Ruckenstein, 2018). Co-evolving refers to the dynamic and reciprocal relationship in which human capabilities and AI technologies develop together. Over the past decade, social media has served as a laboratory for observing human-machine co-evolution. When people began posting on social media, they did not know what type of content resonated with their audiences, or how the algorithms were set up. Gradually, they learned which posts received more likes and shares. The platforms’ algorithms responded to this engagement and shaped it in a more affective and polarizing direction that serves the interests of the platform. This kind of interdependence, which is very difficult to predict, underlines the importance of attending to co-evolutionary processes. We need to be more proactive in questioning how technologies become participants in the formation of social ties.
A social fabric that supports AI
“Tell me what you’ve learned about me, ChatGPT” stories circulate on social media, detailing the personality assessments that ChatGPT has made based on what people have revealed about themselves through their questions and responses. Journalists tell us that if addressed politely, ChatGPT delivers better answers. Tech experts talk about AI as a teammate and give it visualization tasks that they could not master themselves. Humans are embedding GenAI in social and cultural contexts. As they interact with new services, people feed them information about sensory perceptions and share cultural cues and experiences. AI benefits from the cultural attention it is given. This new social fabric enriches both AI and human culture, enabling AI to offer responses that adhere to cultural norms, values, and emotional subtleties.
AI collaborations are never neutral. There is always the possibility that the co-evolving process undermines our autonomy. Talk about the addictive qualities of algorithmically boosted services is a way of saying that these services distract us from what we think is important in life. While a felt loss of autonomy creates distance, anticipatory and pleasurable engagements with AI continue to strengthen the co-evolving of humans and technological companions. The more excited and engaged we are, and the more we share our lives with algorithmic systems, the more the algorithms come to resemble us, as they generate and package the information that is fed to them. Laura Savolainen (2023) asked in her thesis: Who is the algorithm? Now we need to pose the question: Who is GenAI?
Invisible work for AI
In our Reimagine ADM project, we are interested in this kind of mundane co-evolution of humans and AI. The questions we ask as part of our work at the University of Helsinki concern, for instance, the invisible and emotional work done for AI to flourish. This invisible work is strategically pushed aside when AI is introduced as an autonomous and technically evolving force. Yet we think it merits careful attention.
While many everyday tasks are geared toward supporting machines, it is no coincidence that a deep sense of loss accompanies these developments. The ‘feel’ of technology relations offers an opportunity to think about the politics and practices involved (Ruckenstein, 2023). There is a great deal of sadness in the fact that carefully authored works of art and scholarship are downgraded to training data. Professional work is being devalued. Radiologists fear that no doctors will want to specialize in radiology because AI is said to take care of image detection. Ironically, we might end up with too few radiologists, because AI alone will not do the job. AI dominates future trajectories and realms of the everyday. People need to be interested in emerging technologies, even if they would rather be doing something else.
Humans must keep reminding themselves that it is on their backs that these developments are built. The future of AI is not just about the technology but about the relationship we cultivate with our new AI companions. We need to question these companions if they promise too much or threaten what we think is valuable in life, whether it is our time, work, professional beliefs, or desired societal trajectories.
Written by Minna Ruckenstein
References
Cheney-Lippold, J. (2017) We Are Data. New York: NYU Press.
Ruckenstein, M. (2023) The Feel of Algorithms. Berkeley: University of California Press.