Exploring parliamentary AI debates with computational text analysis

As part of the Reimagine ADM project, we are computationally exploring what textual data, including tweets and parliamentary discussion, can tell us about public values and artificial intelligence. In this blog post, we report on our ongoing study of parliamentary debates. The emphasis is on carefully explaining the steps we have taken and the techniques used. We want to demonstrate that quantitative text analysis is a craft that involves numerous decisions along the way, concerning the filtering and sampling of the data, so-called cut-offs, and stopwords. Some of the techniques are useful in advancing our understanding of the studied phenomenon, while others turn out to be fairly useless. It is, however, impossible to know all this beforehand. Text analysis is therefore also a learning process, allowing us to refine our methods and approaches.

It all started with a simple idea: show how the Members of the British Parliament talk about artificial intelligence. We wanted to know which words co-occur with the phrase »artificial intelligence«, how the debate has transformed over the years, and which topics pertain to AI debates. We had the data — the speeches from 2015 to 2022 — gathered within the ParlaMint corpus. Just run a few lines of code, right?

Half a year later, we are only just starting to understand the parliamentary narrative. The greatest question we had to tackle was methodological: how do we extract the debates on artificial intelligence (AI) from a large dataset? Which visualisations should we use? Which parameters should we set?

Oh Lord – filtering parliamentary debates

We began filtering the data and retained only speeches mentioning either »artificial intelligence« or »AI«. Filtering resulted in 1025 speeches. As expected, the debate became increasingly prominent over the years, but there’s an interesting spike in 2018, with over 250 speeches mentioning AI. What was that about?
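
For readers who want to follow along, a minimal sketch of this filtering step in Python could look like the following. The file name and column names ("text", "date") are assumptions about how the ParlaMint speeches might be exported, not our actual setup.

```python
import re
import pandas as pd

# Hypothetical export of the ParlaMint-GB speeches; adjust path and column names.
speeches = pd.read_csv("parlamint_gb_speeches.csv")

# Match "artificial intelligence" case-insensitively, and "AI" only as a
# standalone token so that words such as "said" or "air" do not match.
ai_phrase = re.compile(r"artificial intelligence", re.IGNORECASE)
ai_abbrev = re.compile(r"\bAI\b")

mask = speeches["text"].apply(lambda t: bool(ai_phrase.search(t) or ai_abbrev.search(t)))
ai_speeches = speeches[mask].copy()

# Speeches per year, to reproduce the time series (and spot the 2018 spike).
ai_speeches["year"] = pd.to_datetime(ai_speeches["date"]).dt.year
print(ai_speeches.groupby("year").size())
```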

Figure: The number of speeches mentioning AI per year, with a spike in 2018.

We decided to have a closer look. AI was almost a catchphrase in 2018, with many politicians mentioning it only in passing. As one MP declared: »The fourth industrial revolution offers such potential, such tools for intelligence: machine learning, AI, the internet of things and so on.«

In April 2018, the UK government debated the report from the Select Committee on Artificial Intelligence, which the House of Lords had appointed the previous year “to consider the economic, ethical and social implications of advances in artificial intelligence.” The report later became the basis for securing a £1 billion AI Sector Deal, which explains why there was so much activity in 2018.

The next step in the analysis was to determine what the debates were about. A typical way to get a quick glimpse into the content of a corpus is a word cloud, which shows the most frequent words.

Figure: A word cloud of the most frequent words.

Well, that wasn’t the most informative. Given that these are British parliamentary debates, they inevitably contain established parliamentary phrases, such as “the hon. Lord”, “noble Lady”, “people”, “government”, and so on. In this case, these can be considered stopwords. Typically, stopwords are words that carry no semantic information, such as “in, and, the, of, an” (determiners, prepositions). These words are syntactically important but tell us little about the content of the corpus. The same goes for the parliamentary phrases above: we already know the corpus contains parliamentary debates, and since we are not analysing the linguistic (philological) aspects of parliamentary speech, we can safely remove them. We therefore created an additional list of parliamentary stopwords to filter out.
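
As an illustration, here is one way a word cloud with an extended stopword list could be produced with the wordcloud package. The parliamentary stopword list shown is illustrative and much shorter than the one we actually used, and ai_speeches refers to the filtered data from the sketch above.

```python
from wordcloud import WordCloud, STOPWORDS

# Illustrative (not exhaustive) list of parliamentary stopwords.
parliamentary_stopwords = {
    "hon", "honourable", "noble", "lord", "lords", "lady", "member", "members",
    "friend", "minister", "government", "house", "people",
}
stopwords = STOPWORDS | parliamentary_stopwords

# Join the filtered speeches into one text and draw the cloud.
text = " ".join(ai_speeches["text"])
cloud = WordCloud(stopwords=stopwords, background_color="white").generate(text)
cloud.to_file("ai_wordcloud.png")
```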

Figure: The word cloud after removing parliamentary stopwords.

After filtering, the word cloud confirmed some core topics of the UK’s debate on AI. A lot of talk revolved around data, work, and technology. However, this tells us little about which words relate directly to AI. Remember, we are using entire speeches, and the rest of each speech might skew the results, because AI is often mentioned only as an example.

We therefore retained only sentences containing either »artificial intelligence« or »AI« instead of entire speeches. Retaining only these sentences made close reading a lot more difficult, so to understand what the texts were about, we had to go back and forth between the original speeches and the compact sentence representation.
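
A sketch of this step, assuming NLTK’s sentence tokenizer (any sentence splitter would do) and the ai_speeches table from the first sketch:

```python
import re
from nltk.tokenize import sent_tokenize  # requires nltk.download("punkt")

def ai_sentences(speech_text):
    """Keep only the sentences that mention AI, rather than the whole speech."""
    return [
        s for s in sent_tokenize(speech_text)
        if "artificial intelligence" in s.lower() or re.search(r"\bAI\b", s)
    ]

# One compact "document" per speech, made up of its AI sentences only.
ai_speeches["ai_text"] = ai_speeches["text"].apply(
    lambda t: " ".join(ai_sentences(t))
)
```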

Mapping topics in AI debates

To map the broader landscape of AI debates, we used FastText embeddings of the sentences to represent the documents numerically, passed the embeddings to a t-SNE projection to plot the data in a 2D space, and used Gaussian mixture model (GMM) clustering to determine the clusters. For those more knowledgeable in computational methods: we opted for FastText instead of SBERT because, by keeping only selected sentences, we removed a lot of the context SBERT relies on. FastText provided a reasonable t-SNE projection, slightly more delineated than the one from SBERT. Text data rarely results in nicely separated clusters, so we set the number of clusters to 4 ourselves. We then used keyword extraction to determine the top three keywords for each cluster.
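
A sketch of this pipeline is shown below. It assumes the pretrained FastText vectors available through gensim-data and uses top TF-IDF terms per cluster as a stand-in for the keyword extraction; the ai_text column comes from the sentence-filtering sketch above, and our actual toolchain may differ in its details.

```python
import numpy as np
import gensim.downloader as api
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture
from sklearn.feature_extraction.text import TfidfVectorizer

# Pretrained FastText vectors from gensim-data (a large download; any FastText
# model would do for the purposes of this sketch).
ft = api.load("fasttext-wiki-news-subwords-300")

def embed(doc):
    """Represent a document as the average of its tokens' FastText vectors."""
    tokens = [t for t in doc.lower().split() if t in ft]
    return np.mean([ft[t] for t in tokens], axis=0) if tokens else np.zeros(ft.vector_size)

docs = ai_speeches["ai_text"].tolist()          # AI sentences per speech
X = np.vstack([embed(d) for d in docs])

# 2D projection for plotting, and a Gaussian mixture with k fixed to 4
# (the GMM could equally be fitted on the 2D coordinates instead).
coords = TSNE(n_components=2, random_state=0).fit_transform(X)
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

# Stand-in for keyword extraction: the top three TF-IDF terms per cluster.
tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
M = tfidf.fit_transform(docs)
terms = tfidf.get_feature_names_out()
for k in range(4):
    scores = np.asarray(M[labels == k].mean(axis=0)).ravel()
    print(k, [terms[i] for i in scores.argsort()[-3:][::-1]])
```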

Figure 1: A t-SNE projection of AI debates in the UK. Selecting 4 clusters shows that the debates revolved around the EU and Brexit, industry and defence, healthcare, and data protection and children.

The debates centre around the use of AI in healthcare (red cluster), the data protection bills (orange cluster), Brexit (blue cluster), and industry (green cluster). By close-reading the debates, we noticed that much of the UK’s talk about AI concerns keeping its industrial advantage (green cluster). The government apparently recognises the strong position of the British AI industry in the global market and wishes to retain and advance it.

This recognition of the strong AI sector resulted in numerous government initiatives. In 2021, the government adopted the National AI Strategy. It began investing in scholarships for students and the upskilling of regulators. The government also created nine AI research hubs and established a partnership with the US on responsible AI, which culminated in the recent landmark agreement between the US and the UK. Finally, the UK government held the AI Safety Summit in November 2023, resulting in the signing of the Bletchley Declaration on responsible AI.

The word “data” was already prominent in the word cloud, and in the t-SNE plot it features in the orange cluster. Looking at the context of the data debates through collocations, the debates seem to focus on the Data Protection Act 1998, the National Data Guardian, the Data Retention and Investigatory Powers Act 2014, the Data Ethics Framework, and the Centre for Data Ethics and Innovation. Data (and especially its protection) plays a central role in the AI debates.

Our corpus only covers data until July 21, 2022, so we couldn’t compare how the debate has developed in the past two years. We addressed this by manually retrieving the more recent debates from the Hansard website. This part of the data comes in a different form than the original corpus, as Hansard provides transcripts per agenda item. We retrieved all the debates containing “AI” or “artificial intelligence” in the title, resulting in 501 speeches from 40 sessions, spanning December 1, 2022, to April 17, 2024. Again, we retained only the sentences mentioning AI.

The updated t-SNE map shows similar but not entirely identical clusters. The data protection and healthcare debates remain prominent. The Brexit debate abated, giving way to a more general EU debate and merging with the defence topic. There is a new topic on the risks of AI. Looking at the timestamps, we noticed this debate is fairly recent, which shows an encouraging shift in topics: AI is no longer just a rhetorical device (as in the beginning) or a useful automation tool (as in early 2020), but is now considered so embedded in human society that its risks are starting to be addressed, too. The shift aligns temporally with the AI Safety Summit, which aimed to position the UK among the leaders in responsible AI.

Figure 2: t-SNE projection of extended debates from 2015 to 2024.

Words that hang together

To make our analysis more robust, we also examined co-occurrence networks for the keywords “AI”, “artificial”, and “intelligence”. We traversed the sentences, looking for mentions of one of the keywords and then observing the words in their neighbourhood. We decided on a window of size 5, which means we looked at five words to the left and five to the right of the given keyword. From these windows, we constructed a network of the most frequently co-occurring words.
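
A sketch of the window-based counting, under a few assumptions: the AI sentences are available as a flat list of strings (ai_sentences_flat), the stopword set from the word-cloud sketch is reused, and networkx holds the resulting graph. Our actual counting may differ in details such as how punctuation and pairs within the window are handled.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

KEYWORDS = {"ai", "artificial", "intelligence"}
WINDOW = 5  # five tokens to the left and five to the right of a keyword

pair_counts = Counter()
for sentence in ai_sentences_flat:              # flat list of AI sentences (assumed)
    tokens = [t.strip(".,;:!?()").lower() for t in sentence.split()]
    for i, tok in enumerate(tokens):
        if tok in KEYWORDS:
            window = tokens[max(0, i - WINDOW): i + WINDOW + 1]
            window = {w for w in window if w and w not in stopwords}
            pair_counts.update(combinations(sorted(window), 2))

# Keep only the most frequent pairs, otherwise the network becomes a hairball.
G = nx.Graph()
for (a, b), count in pair_counts.most_common(200):
    G.add_edge(a, b, weight=count)
```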

As expected, we got a hairball, which is typical for co-occurrence networks of larger corpora. Many words co-occur with one another, creating a densely connected network in the centre.

Figure 3: A hairball (network) of word co-occurrences.

Figure 4: A subnetwork of the 30 most densely connected nodes. The size and colour of the nodes correspond to word frequency.

We zoomed in on the 30 most densely connected nodes to make the network more readable. “AI”, “artificial”, and “intelligence” are in the centre, with “artificial” and “intelligence” as the most strongly connected nodes. Being able to reconstruct the phrase “artificial intelligence” from the network is a useful sanity check, indicating that the network was constructed properly. However, the network is not very informative, since all the terms are fully connected. It is easy to get caught up in a technique you believe could be useful, only to realise upon seeing the results that you did not learn much. A skilful visualisation might be just that: skilful, but not advancing knowledge of the studied phenomenon.

Changes in time

A final trick up our sleeve was word enrichment for each year in the data. Enrichment looks for words with a significantly higher frequency in a subset than in the entire dataset. Our baseline was the full dataset of AI debates, split by year.
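
The post does not spell out the exact statistic behind the enrichment, but a hypergeometric test is one common choice. The sketch below assumes the corpus has already been tokenised into all_tokens (the whole dataset) and tokens_by_year (one token list per year), both hypothetical names.

```python
from collections import Counter
from scipy.stats import hypergeom

def enriched_terms(year_tokens, all_tokens, top_n=5, min_count=3):
    """Terms over-represented in one year relative to the full corpus."""
    year_counts, all_counts = Counter(year_tokens), Counter(all_tokens)
    N, n = len(all_tokens), len(year_tokens)   # population size, sample size
    scored = []
    for term, k in year_counts.items():
        if k < min_count:
            continue
        K = all_counts[term]                   # occurrences in the whole corpus
        # Probability of seeing at least k occurrences in this year by chance.
        p_value = hypergeom.sf(k - 1, N, K, n)
        scored.append((p_value, term))
    return [term for p_value, term in sorted(scored)[:top_n]]

# e.g. enriched_terms(tokens_by_year[2018], all_tokens)
```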

Tables 1 and 2: The five most significant terms for each year. The number in brackets next to the year is the number of documents.

2015 (4)       2016 (41)     2017 (145)    2018 (280)    2019 (146)
admiration     robotic       car           industrial    250
carney         internet      driverless    data          clinician
permanently    revolution    robotic       health        topol
currency       baker         machine       nh(s)         hospital
radically      driverless    broadband     revolution    nurse


2020 (183)     2021 (158)    2022 (80)     2023 (679)    2024 (191)
19             defence       clearview     summit        developer
covid          cyber         mining        risk          label
list           aukus         performer     regulation    authority
coronavirus    undersea      enforcement   safety        clause
cryptographic  amendment     fought        already       regulator

In 2015, there were only 4 documents, making the results less reliable; the debates only fleetingly mention AI. Carney refers to the Governor of the Bank of England, Mark Carney, who was critical of potential job losses due to AI. Similar concerns were voiced by Lord Baker of Dorking in 2016. That year, the debate picked up but remained general and future-oriented, mentioning robots and the fourth industrial revolution (which is supposed to be driven by AI). In 2017, the debate turned to automation and its implications. 2018 was a turning point, with the aforementioned Select Committee report and the Sector Deal. The focus turned to healthcare and the use of data. In 2019, the debate remained focused on healthcare, with the publication of the Topol Review, an independent report on how to incorporate digital technologies into healthcare.

Additionally, £250 million was allocated for the creation of the National Artificial Intelligence Lab to advance the use of AI in healthcare. 2020 was deeply characterised by COVID-19 debates, which remain present even in the sampled sentences. In 2021, there was a sharp turn to defence capabilities using AI, specifically within the AUKUS[1] framework.

The 2022 data is partially derived from the ParlaMint corpus and partially from Hansard’s AI-labelled topics, which makes the results less reliable. However, a big topic was the UK’s legal action against Clearview AI Inc., in which the Information Commissioner found the company’s practices to be in breach of data protection laws. In 2023, the discussion centred around the AI Safety Summit, which aimed to establish the UK as a global leader in responsible AI. In 2024, the talk became more general, referring less to specific events and focusing more on legislation. The focus turned to AI developers and their responsibility for ensuring ethical AI. “Label” refers to the labelling of services and goods that use AI (Clause 5), with a specific emphasis on AI-generated content.

Word enrichment allowed us to observe the changing narrative on AI in the British parliament, which went from occasional mentions and speculations on the future to concrete policies and legislation. The turn shows the UK actively engages in AI policymaking, strategically positioning itself between industry leaders and ethical regulators.

What did I learn?

Quantitative text analysis is always quite tricky because of the numerous decisions made along the way. It is easy to get lost in the forest of decisions, getting excited over a single beautiful tree. Thus, one needs to take time to step back and review the results. For example, word clouds are often merely glanced over, but they can offer important clues. In this case, the word “data” is prominently featured in the cloud. Data is certainly integral to AI, but seeing this recognised in parliament is surprising (to me).

On the other hand, I was expecting good results from the co-occurrence networks, as they are used frequently and with great success in digital humanities (Crépel et al. 2021). However, the densely connected network was incredibly difficult to read and interpret. There were no evident clusters where one could say, “Wow, this tells me something new”.

The biggest insight came from word enrichment. I experimented heavily with bump charts; however, without filtering they are quite uninformative (Figure 5), and with filtering it is difficult to track trends over time. Hence, I decided to employ word enrichment, observing which words characterise a given year. Word enrichment, to me, revealed the most about parliamentary AI debates.

Figure 5: A bump chart of the 10 terms most frequently co-occurring with “artificial intelligence” or “AI”.

 

Most debates on technology follow a similar pattern. They start slowly, with fleeting mentions, then take centre stage as people try to navigate the technology’s implementation in everyday life, and, finally, the technology becomes so pervasive that the debate retreats into the background. The same is true of AI, where the pace of the debate picked up slowly from 2015 onwards.

Quantitative text analysis is complete only with close reading, mostly because observing words in isolation reveals little about the context in which they are used. Even when looking at the co-occurrence networks, I struggled to determine how “data” and “use” are actually connected. Should the data be used more, less, or with greater care? The best way to find out is to find representative documents for the given words. Quantitative analysis does not mean an absence of reading, but rather identifying which documents (or parts of documents) are relevant for closer inspection.

The landscape of British AI debates is quite diverse and closely related to national events, which requires not only reading the debates but also policy and news analysis. Using quantitative approaches to identify trends and patterns, and then supplementing them with additional material, showed that the United Kingdom is proactive in its AI policy, is sector-oriented, and indeed closely follows its declared focus on innovation. In summary, the computational analysis of parliamentary debates yielded partial and open-ended results. But that is to be expected. Going forward, we are now better equipped to take the next steps.

[1] A trilateral security partnership for the Indo-Pacific region between Australia, the United Kingdom, and the United States.

**

The writer of this blog post, Ajda Pretnar Žagar, is a researcher at the Faculty of Computer and Information Science at the University of Ljubljana. She also works on the Reimagine ADM project led by Professor Minna Ruckenstein. In the project, she participates in mapping values, applying circular mixed methods, visualising data, conducting quantitative analysis, and promoting interaction with stakeholders.