Conference: Revaluing Expertise – Troubling Automated Practices and Human Competences, May 15-16, 2024

Time: May 15-16, 2024

Place: Main Building of the University of Helsinki (Fabianinkatu 33)

Keynote speakers: Alison Powell (LSE) & Maja Hojer Bruun (Aarhus University)

Registration: Sign up here by May 6

Large language models and the introduction of ChatGPT have sparked intense debates about expertise and knowledge work, raising questions such as “Will machines perform the tasks of experts in the future?” or “What are the most critical skills for humans to possess in the midst of AI developments?” These and related questions have an established history that this two-day conference will highlight by exploring forms of expertise and knowledge formation in the context of automation processes and AI. As various scholars have demonstrated, digitally mediated expertise has disrupted and reorganised work practices in fields ranging from health and law to media and education. Since similar processes are at play across different contexts, we can detect common patterns of adjustment and resistance in how people form and protect their work processes and work identities in the face of pressure to adopt algorithmic systems.

A widely recognised way to study changes in expertise is to explore them as processes of deskilling and reskilling, identifying the qualitative and material reshaping of competence and its valuation. We will depart from this conventional framing and trace emerging skills and capabilities, attuned senses, automated suggestions, and convictions to know and do across different sectors of society. This allows us to pose questions about the broader landscape of automation and how to combine and rework its practices with human sensory and interpretive competences. We are interested in what is shared, but also what is different, across forms of digital expertise and knowledge formation, particularly in response to clashes with existing practices, values, and routines in the workplace and beyond.

One approach to this investigation is to examine how people make sense of developments in AI and algorithmic systems that trouble their professional autonomy and expertise. Alternatively, we can view contemporary algorithmic advancements through the lens of distributed agency between people and machines, inquiring into how different types of expertise contribute to the development and deployment of algorithmic systems. By analysing various forms and qualities of agency, we can identify the social and political arrangements that support and hinder alignments between machinic ways of knowing and human sensory and interpretive competences. This allows us to explore how social and political arrangements obstruct the combining of human and machinic forms of expertise and disregard existing forms of knowledge formation. As we hope to demonstrate in this conference, the study of expertise calls for thinking about the most fruitful methods and conceptual framings for uncovering ongoing developments, acknowledging that knowledge work is thoroughly shaped by the political-economic landscape and emerging technologies.

The conference is sponsored by Re-humanising automated decision-making (Research Council of Finland), Reimagining Public Values in Algorithmic Futures (CHANSE) and Everyday AI (Foundation for Economic Education) – projects led by Professor Minna Ruckenstein.

Programme

The detailed programme of the sessions and more info on the DEDA workshops can be found at the end of the page.

Day 1, May 15

Room: Tekla Hultin (F3003, Fabianinkatu 33, 3rd floor)

08:30 Coffee and light breakfast

09:00 Welcome: Minna Ruckenstein (professor, University of Helsinki)

09:15 Alison Powell (associate professor, LSE): “Deceptive Stories About Technology: The promise (and peril) of ‘scaling up’ through digital health startups”

This presentation explores how certain ‘deceptive stories’ about technology foreground hegemonic forms of knowledge and expertise. Drawing on the example of digital health startups that promise to ‘scale up’ digital interventions into the UK’s National Health Service, the lecture describes the mismatches between the promises of digital scalability and the multi-scalar nature of the NHS. It also reflects on what kinds of epistemic alternatives to deceptive stories of scale might be possible within public sector contexts.

10:05 Session 1: Reevaluating human expertise, creativity and judgement

This session looks at how human experts fit into the gaps and cracks in autonomous systems in often invisible or under-appreciated ways – for example arbitrating or curating machine results or smoothing over conversational awkwardness. These presentations ask how human experts understand their roles in relation to machines, and how these relationships might be renegotiated and reevaluated with the newest crop of generative AI. 

11:35 Break

11:45 Session 2: The quest for human traits and qualities

Emerging forms of automation are rapidly changing and reorganising work practices across the public and private sectors. As automated technologies take on new roles in diverse work settings, they prompt ongoing reevaluations of human and machinic skills and capabilities. These presentations discuss some of the qualities that humans, in diverse positions, both desire and need to cultivate within the automation landscape.

13:15 Lunch

14:15 Session 3: Uncertainty as a lens to automation

Automation employs statistical techniques to deal with the uncertainty involved in work practices. At the same time, automation generates new modes of uncertainty, as it pushes against other ways of knowing and doing. In order to understand the implications of this paradox, these presentations analyse the role of uncertainty in the different arrangements of humans and machines which accompany the introduction of automated technologies.

15:45 Coffee

16:00-18:00 Parallel workshops. *Read more about the workshops at the end of the programme.

  1. DEDA Workshop in English (Main Building, U3041, Unioninkatu 34, 3rd floor)
  2. DEDA Workshop in Finnish (Main Building, U3043, Fabianinkatu 33, 3rd floor)

Day 2, May 16

Room: Tekla Hultin (F3003, Fabianinkatu 33, 3rd floor)

08:30 Coffee and light breakfast

09:00 Introduction to the second day

09:15 Maja Hojer Bruun (associate professor, Aarhus University): “Automated Expertise? AI and algorithmic systems as inter-professional work practices”

The talk presents ongoing ethnographic research that compares the development of AI systems among professional practitioners in different domains of work and education. What new forms of expertise emerge in collaborations with and around AI?

10:05 Session 4: Epic disappointments, new tools and innovators

Implementing datafication, or moving from an abstract ideal towards specific arrangements of technical systems in professional fields, is a messy, non-linear and often conflictual process. These presentations focus on the negotiations, reconfigurations and paradoxes which happen in such sites of implementation, especially contrasting the broad-level expectations and imperatives of datafication in specific fields to the diverse, situated knowledges and practices which underlie and sustain them.

11:35 Break

11:45 Session 5: Experimenting and interpreting workflows and interaction

In the social and healthcare sector, characterised by diminishing resources, ageing populations, and shortages of workers, AI technologies are viewed as potential solutions that could increase efficiency, improve decision-making, facilitate customer interactions and streamline processes. Yet despite the prevailing optimistic portrayals of AI in media and political discourses, the reality is far more complex. These presentations discuss how AI applications are, or could be, used to rearrange and experiment with workflows, customer interactions and professional practices.

13:15 Lunch

14:15 Session 6: Knowledge formation, dissent and algorithmic systems

Since LLMs will likely accelerate the production of low-quality information, it is increasingly important that researchers collaborate and develop new skills to expose untruths and document accurate information about powerful state and corporate actors. Presentations in this panel explore methods researchers and experts have developed in order to tackle the challenges of this emerging knowledge sphere and conceptualise resistance to so-called “fake news”, PR and propaganda.

16:15 Coffee

16:30 Panel discussion: Minna Mustakallio (Head of Responsible Artificial Intelligence, YLE), Oskar Korkman (co-founder, Alice Labs), Heli Rantavuo (Head of Customer Insight, OP Financial Group), and María Teresa Ballestar (Associate Professor, Rey Juan Carlos University).

18:00 Drinks

Programme of the sessions


SESSION 1: Reevaluating human expertise, creativity and judgement

May 15, starting at 10:05

Chair: Santeri Räisänen

Discussant: Virpi Kalakoski

This session looks at how human experts fit into the gaps and cracks in autonomous systems in often invisible or under-appreciated ways – for example arbitrating or curating machine results or smoothing over conversational awkwardness. These presentations ask how human experts understand their roles in relation to machines, and how these relationships might be renegotiated and reevaluated with the newest crop of generative AI. 

David Moats revisits historical debates about technologies of decision-making in sports, from photo finishes to goal-line technology, to analyse what happens when decisions fall “on the line” or between machine categories and human experts are brought in to settle the matter (as arbiter, caretaker, assistant, negotiator, audience). These examples raise the question: does anything about these borderline decisions change with less rule-based systems like LLMs, or indeed in higher-stakes scenarios like cancer diagnoses and credit scoring?

Salla-Maaria Laaksonen, Jukka Huhtamäki, Mia Leppälä, Kaisa Lindholm, and Erjon Skenderi use interviews and workshops to test prototypes of AIs designed to facilitate communicative organising. The purpose is to examine what emerges when experts encounter communicative AI in organisational settings. They show that working with a communicative AI reshapes the flow of conversation, requiring humans to make adjustments to facilitate interactions. In this case, AI may actually help reinforce professional identities and make professionals more aware of their own expertise and its value.

Laura Savolainen discusses preliminary findings from a study on platformed crowdwork within the sciences. Platformed crowdwork is routinely framed as a transitional phase in AI development due to the increasing availability of synthetic data. However, in the longer term, each wave of technological innovation has created new needs for human labour. Against this backdrop, the case of crowdwork in scientific research can offer important insights, given that science operates at the very frontiers of technological possibilities. The data consists of interviews with researchers across fields such as machine listening, natural language processing (NLP) and medical science, as well as various examples of their crowdwork tasks.

SESSION 2: The quest for human traits and qualities

May 15, starting at 11:45

Chair: Tuukka Lehtiniemi

Discussant: Ilpo Helén

Emerging forms of automation are rapidly changing and reorganising work practices across the public and private sectors. As automated technologies take on new roles in diverse work settings, they prompt ongoing reevaluations of human and machinic skills and capabilities. These presentations discuss some of the qualities that humans, in diverse positions, both desire and need to cultivate within the automation landscape.

Kaarina Nikunen, Karoliina Talvitie-Lamberg, and Sanna Valtonen focus on the ways people discuss knowledge and skills connected to the digitalisation and datafication of public services. In terms of skills, research participants made stark contrasts between machines and humans. Whereas machines were considered to perform effectively and never sleep, humans were seen as vulnerable but also as possessing qualities of kinship, time-sensitivity and understanding. Overall, they found that the longing for these human qualities was connected to the capacity to make comprehensive assessments in the face of specific problems and life situations.

Hertta Vuorenmaa focuses on the reevaluation of expertise and cultivation of skills in response to the pressures of automation in the workplace, finding that top management sees adaptability, communicativeness, and learning orientation as crucial traits for success. Workers need to have and cultivate these traits to thrive and “survive” amidst automation-driven changes and evolving demands. Adaptability is often thought of as a feature of automation but, based on the study, it is expected to be a human trait too.

Matti Laukkarinen discusses tensions between autonomy and algorithmic platform mediation in recruiters’ use of social media platforms to locate and match candidates with employers, analysing the strategies that recruiters employ to maintain their autonomy on professional social media platforms like LinkedIn. The presentation details how recruiters’ algorithmic competences support their decision-making and how this proficiency correlates with recruiters’ ability to implement and maintain responsible hiring practices.


SESSION 3: Uncertainty as a lens to automation

May 15, starting at 14:15

Chair: Sanna Vellava

Discussant: Sarah Green

Automation employs statistical techniques to deal with the uncertainty involved in work practices. At the same time, automation generates new modes of uncertainty, as it pushes against other ways of knowing and doing. In order to understand the implications of this paradox, these presentations analyse the role of uncertainty in the different arrangements of humans and machines which accompany the introduction of automated technologies.

Mona Mannevuo discusses post-war dreams of adaptive machines acting as relentless workers who are never fatigued. These imaginaries are played against the current hype around AI, built on the idea that the relationship between humans and machines has finally transformed from humans adapting to machines to machines adapting to humans. Paying attention to machines with lifelike qualities, this presentation underlines new modes of uncertainty around automation. Will they take our jobs? How can we work with them? Who knows what they can do?

Marta Choroszewicz discusses the affective ramifications associated with using data and AI technologies. Ethnographic research on public-sector projects integrating data and AI technologies highlights an amplified sense of uncertainty and self-doubt in one’s own expertise. The presentation focuses on the emergence of these affective states, their implications, and how people might mitigate or work around them. Highlighting how expertise is entwined with affective labour advances our understanding of (in)visible work put into harnessing emerging technologies.

Sonja Trifuljesko examines the role that uncertainty plays in different visions of automation involved in the development of the digital platform for social and health care services in Finland. Algorithmic technologies are seen as means of dealing with uncertainty in service provision, yet at the same time, they are considered enablers of care practices, of which uncertainty is an integral part. Sonja discusses the implications of these different visions for social and health care organisations, for the professionals, and for the people seeking help.


SESSION 4: Epic disappointments, new tools and innovators

May 16, starting at 10:05

Chair: David Moats 

Discussant: Jaakko Taipale

Implementing datafication, or moving from an abstract ideal towards specific arrangements of technical systems in professional fields, is a messy, non-linear and often conflictual process. These presentations focus on the negotiations, reconfigurations and paradoxes which happen in such sites of implementation, especially contrasting the broad-level expectations and imperatives of datafication in specific fields to the diverse, situated knowledges and practices which underlie and sustain them.

Maiju Tanninen and Ilpo Helén discuss the paradoxical nature of the promises of efficiency often associated with datafication. Drawing on the case of a controversial patient data management system used in Finland, this presentation documents how clinical practitioners perceive the promises of automation and data-driven services as remaining unfulfilled, and how machinic steering actually impedes their work.

Jongheon Kim and Karine Gauche examine a project developing a low-cost, collaborative agricultural robot. They discuss how the development of the robot prompted a reassessment of the expertise of farmers, agronomists, and robotic engineers – redefining which values are (non-)negotiable. This co-production process involves ontological shifts in the definitions of practical and scientific knowledge, as well as of human autonomy and expertise in agricultural labour, within the context of the growing demand for the automation of agriculture.

Santeri Räisänen discusses the discontents of the work of “innovation experts”: certain privileged individuals in organisations who construct and disseminate stories of social problems and their technical solutions. Drawing from an example of personalising welfare service provision using AI, the presentation outlines how the innovator’s work of crafting AI-solvable problems becomes itself a problem for more peripheral actors in the network. The problem demands substantial, informal and undocumented workarounds to sustain the appearance of successful AI innovation.


SESSION 5: Experimenting and interpreting workflows and interaction

May 16, starting at 11:45

Chair: Laura Savolainen

Discussant: Raul Hakli 

In the social and healthcare sector, characterised by diminishing resources, ageing populations, and shortages of workers, AI technologies are viewed as potential solutions that could increase efficiency, improve decision-making, facilitate customer interactions and streamline processes. Yet despite the prevailing optimistic portrayals of AI in media and political discourses, the reality is far more complex. These presentations discuss how AI applications are, or could be, used to rearrange and experiment with workflows, customer interactions and professional practices.

Dorthe Brogård Kristensen and Perle Møhl focus on the implementation of AI in healthcare, specifically in mammography screening. They analyse the arrangement of human and machinic forms of expertise, the human assistance required by automated systems and the emerging frictions and challenges. Their aim is to piece together different parts of this complex reality, encompassing interpretations of clinical evidence, the skills of human experts, workflows, and institutional decision-making and management processes.

Sanna Kuoppamäki and Lucy McCarren explore experiences of reciprocity with conversational AI, which is proposed as a solution for providing social companionship, for example in therapy and care. Their study employs a participatory design approach to explore how older adults experience reciprocity in dialogues with conversational AI. They will discuss how the context of a conversation can promote or hinder the experience of reciprocity, and the implications of this for care and agency.

Tuukka Lehtiniemi traces the aftermath of pilot trials of a predictive AI tool in child welfare and mental health services. Based on a focus group with caseworkers and nurses involved in piloting the AI tool, this presentation discusses the possibilities and boundaries for AI in sensitive application domains. The pilot trials seem to suggest a need for renewed thinking about AI: professionals have highly context-specific needs and instead of generic predictive solutions, their work requires AI tools that meaningfully slot into existing professional practices.


SESSION 6: Knowledge formation, dissent and algorithmic systems

May 16, starting at 14:15

Chair: Sonja Trifuljesko

Discussant: Risto Kunelius 

Since LLMs will likely accelerate the production of low-quality information, it is increasingly important that researchers collaborate and develop new skills to expose untruths and document accurate information about powerful state and corporate actors. Presentations in this panel explore methods researchers and experts have developed in order to tackle the challenges of this emerging knowledge sphere and conceptualise resistance to so-called “fake news”, PR and propaganda.

Kirsikka Grön explores the perceptions of citizens in Hangzhou on the roles of individuals, collectives, and the state within the platform economy. By employing the ‘market in state’ concept, the presentation looks at the complex interplay between collective action and state involvement, contrasting it with the perceived permanence of digital platforms. The findings advocate for incorporating diverse global perspectives to better understand the local interpretations and political dynamics surrounding big tech companies.

Dušica Ristivojević examines how social media channels are used to uncover knowledge about the actions of large corporations. It is too early to say how this type of knowledge production will be impacted by LLMs, but this presentation will help us to understand the importance of socially engaged experts in this sphere. Though it is very difficult to know what companies are really up to, it is important that researchers team up with experts and activists on the ground.

Joris Veerbeek and Mirko Tobias Schäfer will inquire into whether LLMs are “polluting the knowledge sphere” through the accelerated dissemination of inferior, inaccurate or factually wrong content. They explain their efforts to scrutinise the proliferation of AI-generated materials on platforms like Amazon and Spotify and discuss whether and how AI tools could be harnessed to identify and counteract the spread of AI-produced content.

Pekka Tuominen and Alan Medlar have used an LLM-based tool to discern thematically organised content from large sets of textual data in order to query citizen participation cross-culturally across four countries. This work has led them to question the methodological rigour of their disciplines and to adopt a trial-and-error approach to unpredictable LLM outputs. They argue for the need to develop shared vocabularies, a kind of research pidgin language or “trading zone”, to facilitate transdisciplinary understanding of epistemological and methodological differences.


DEDA workshops 

The conference will include two workshops (one in English and one in Finnish) where participants can experiment with the Data Ethics Decision Aid (DEDA) – a discussion-based tool developed to examine the values and ethical tensions that may occur in data projects. Both workshops will take place on May 15 from 16:00 to 18:00. Participation in a workshop requires a small amount of prior work, which will take approximately 30 minutes. The instructions will be sent to participants after registration has ended (April 30).