Rémi Letemple (Senior Research Analyst, IDC Government Insights)
Max Claps (Research Director, IDC Government Insights)

Governments across Europe, the Middle East, and Africa (EMEA) and beyond are busy experimenting with and scaling AI and generative AI (GenAI) use cases. The GenAI-powered virtual assistant projects of the French and U.K. central governments, one targeted at civil servants and the other at citizen-facing chatbots, show both the high level of interest and the early stage of maturity. Also in France, a large language model (LLM) is being introduced to improve the processing of legislative proceedings.

According to IDC EMEA’s 2023 Cross-Industry Survey, the government sector currently has the second-lowest level of GenAI adoption of any industry, ahead of only agriculture. But it also has the highest percentage of organizations that plan to start investing in GenAI over the next 24 months. Some government entities are taking a more cautious approach, restricting the use of commercial GenAI platforms while they consider developing their own LLMs.

This phenomenon is not new in the public sector. For several reasons, governments usually adopt new technologies more slowly than other sectors.

One is that the public sector is obligated to guarantee access to its services for everyone. Government bodies thus need more time to test innovative technologies and ensure they deliver inclusive outcomes. Legal requirements can also constrain technology procurement, as can limited capacity and competencies.

The current AI investments are all critical steps toward realizing the benefits of data and AI in government — but they are not sufficient. Beyond operational use cases like virtual assistants, summarizing council meetings, expediting code development and testing for software applications, flagging risks of fraud in procurement and tax collection, and drafting job requisitions, governments need to think of the long-term impacts of AI and GenAI.

They need to think of what will happen when AI is used pervasively across industries and is widely accessible by individuals on their smartphones — when the potential benefits and risks of AI will impact government operations well beyond the current stage of maturity and affect the government’s role in society.

The Potential Impact of AI and GenAI on Future Government Operations and Policy

AI has been used in government — particularly by tax, welfare, public safety, intelligence, and defense agencies — for more than a decade. But the advent of GenAI indicates that existing AI applications only scratch the surface of what’s possible.

Government Operations

From a government operations perspective, AI- and GenAI-powered chatbots are just the beginning. European and United Arab Emirates government officials we recently spoke with are already thinking about how the next generation of virtual assistants could entirely replace government online forms and portals.

For example, a natural language processing algorithm trained to recognize languages, dialects, and tones of voice could enable citizens to apply for welfare programs, farming grants, business licenses, and more just by sending voice messages.

An AI-powered system combining automatic speech recognition and an LLM would comb through voice messages to identify the entity (individual or business) making the request and the key attributes of the request, then feed the data to an eligibility verification engine. No forms would need to be filled in manually.
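As a minimal sketch of what that flow could look like, consider the Python below, with the ASR and LLM stages trivially stubbed. Every name in it (transcribe, extract_application, check_eligibility, the Application fields) is illustrative, not a description of any government’s actual system.

```python
import json
from dataclasses import dataclass, field

@dataclass
class Application:
    applicant_type: str       # "individual" or "business"
    applicant_id: str         # e.g., a registration number spoken in the message
    program: str              # e.g., "farming_grant"
    attributes: dict = field(default_factory=dict)

def transcribe(voice_message: bytes) -> str:
    """Stand-in for an ASR system trained on local languages and dialects.
    A real deployment would call a speech-to-text model here; this stub
    simply pretends the audio bytes are already a transcript."""
    return voice_message.decode("utf-8")

def extract_application(transcript: str) -> Application:
    """Stand-in for an LLM that maps free-form speech to the structured
    fields an eligibility engine expects. A trivial JSON parse simulates
    the model's structured output."""
    fields = json.loads(transcript)
    return Application(**fields)

def check_eligibility(app: Application) -> bool:
    """Stand-in for the existing rules-based eligibility engine."""
    return app.program == "farming_grant" and app.attributes.get("hectares", 0) > 0

# End to end: no form is ever filled in manually.
message = b'{"applicant_type": "business", "applicant_id": "FR-12345", "program": "farming_grant", "attributes": {"hectares": 12}}'
application = extract_application(transcribe(message))
print(check_eligibility(application))  # True
```

The design point worth noting is that the LLM’s only job is to produce the structured record that the existing rules-based eligibility engine already expects, so the decision logic itself stays deterministic and auditable.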

This scenario is not too far off. A regional government we spoke with is already collecting voice samples to test such a system for farming grant applications.

But such a system raises multiple questions. Legal and technical questions, like: How and where should voice data be collected and stored to comply with GDPR? How can a citizen’s or business owner’s identity be verified through a voice message in compliance with GDPR and eIDAS? How can the government remain transparent and accountable for its decisions if there is not even a digital front end?

It also raises business and operational questions, like: Will such a system really replace online forms, or will it become an additional channel that only some segments of the population use, pushing the volume of requests to a level that causes delays in government responses? Will the pervasive use of GenAI in the private sector multiply that volume effect?

Will lawyers’ pervasive use of GenAI incentivize them to file more proceedings, even ones they do not expect to win, because it is so easy that they may as well try? How will government business, legal, operational, technical, and functional capabilities evolve to cope with these challenges?

Policy

From a policy perspective, the spectrum of open questions is expanding by the day. One of the most critical questions, and one that many are thankfully already asking, is about the impact of AI-powered automation on the job market.

If workers are displaced by AI-powered automation, there is no silver bullet. Training programs are not fast enough and may not work for everybody.

Universal basic income can be part of the recipe, but how much is affordable, and what is the right level of income? Will the government need to consider employing more people to cushion a drop in employment in other industries?

If so, are roles that require both expertise and empathic interaction, such as education, healthcare, and social care, the right public sector domains in which to do so? And if new jobs appear on the market, how does that impact worker social protection policies?

In a year when half of the global population will be asked to cast a vote, the impact of AI on democracy is also in question. AI is already generating a surge in misinformation and increasing the risk of polarized political positions.

What if mainstream media’s attempts to protect their copyrighted content from the web crawlers that feed LLMs unintentionally open the door for bad actors, leaving even more misinformation available to train GenAI? Does the government need to establish counter-misinformation authorities, or issue laws and guidelines that hold the private sector accountable for countering it?

If a government authority is established, how can it ensure public oversight and independence from the existing cyber units of defense and intelligence departments, which have a different mission? In France, a recent debate over media independence and balanced journalism might be settled by AI that analyzes speeches and guest appearances to verify pluralism. But who would train such a democratic judge of pluralism?

What about the government’s ability to regulate private markets? What if AI and GenAI accelerate medical science through analysis of vast amounts of real-world health data that have historically been hard to collect and prepare for algorithm training? What if, for example, such an acceleration finds a cure that lets diabetics treat their disease once and for all, instead of taking medication for the rest of their lives? What would be the impact on the revenue model of pharma companies? Will governments have to change intellectual property rights entirely to make sure that pharma companies invest in such treatments and make them affordable to people with diabetes around the world?

The same goes for cultural companies and intellectual property. What would be the role of governments in ensuring that cultural workers can continue to participate in the entertainment industry, and in the creativity and identity of a country, through their art?

Finally, what are the ethical implications of using AI in warfare? There are already systems that can alert snipers to targets. What is their impact on the rules of engagement on the battlefield, and on the accountability of the individual soldier and the chain of command?

These are big questions that require technology, legal, policy, ethical, and process experts to come together. They cannot be left to the chief information officer or the chief data officer alone. They require civil service and policymaking leaders to engage openly with the public and with academic and private sector experts, to avoid the risk of being influenced (or being perceived as influenced) only by lobbyists. They require international collaboration. And they require measuring the value of AI not just in terms of productivity, but also in terms of fairness, robustness, responsibility, and social value.
