Max Claps (Research Director, IDC Government Insights)

Generative AI is the buzzword of the day; more specifically, ChatGPT, the OpenAI model trained to interact in a “conversational way”.

The dialogue format enables ChatGPT to answer follow-up questions, admit its mistakes and challenge incorrect premises. Of course, like many geeks in the ICT industry and beyond, I have tried it.

It’s quite impressive. Well, besides the fact that it took me a couple of attempts to find a time of day when traffic was low enough not to cripple access. I asked a couple of questions about my passion: mountaineering and climbing.

The answers were correct, although a bit conservative. For example, when I asked which multipitch routes in Western Canada I could climb with my level of experience, the model provided only two options, both exactly in line with my multipitch skills. I would have appreciated a wider variety of options, some easier and some harder than my skill level, so that I could make a choice.

The model also told me to consult local guides for more information, which suggests that careful ethical principles, like personal safety, are embedded in the design of the algorithm. I then asked who I should vote for in the upcoming primary to elect the new secretary of the Italian Democratic Party. The answer was that the model can’t express a political opinion, but that it could provide me with the list of candidates.

That’s fair enough, and further proof that ethics are taken into account. So I asked for the list of candidates and their programmes. The answer was that the model is trained on historical data available up to 2021, so it’s not up to date on events between 2022 and early 2023. This is understandable, but I would expect it to become quasi real-time in the future.

[Screenshot: a conversation with ChatGPT]

Regardless, fascinating.

Embracing the Augmentative AI Vision

I’ve not done enough research (yet) to say how good the model is and for what use cases. Many of my IDC colleagues are developing thought-leadership research and collecting in-depth data on how generative AI will affect enterprises and consumers.

What I’m thinking about are the societal implications of generative AI. The thought was triggered yesterday during our first meeting with the 2023 IDC Government Xchange Advisory Board. Gwendolyn Carpenter, a member of the Advisory Board who has kids in school, said she’d heard about students using it to cheat on their homework.

My colleague Matt Ledger has already written a quick take on this matter. As Matt noted in his piece, opinions range widely: from schools that believe ChatGPT can be very valuable as a learning tool, to those that are uncertain about its impact and have temporarily banned usage, to educators who believe that generative AI could make our ability to write, learn and eventually think redundant. Paradoxically, that could hamper our ability to invent brilliant new tools such as ChatGPT itself.

I am no Luddite. I don’t think we should stop progress. But I think generative AI is a great case in point for the ongoing debate about whether we should design AI that replaces human abilities or AI that augments them.

For example, I don’t want generative AI to replace my writing, just because it’s much faster and more elegant than I am at synthesising available knowledge. I’m having a lot of fun expressing my opinions in this blog because, in a way, I’m creating it while I write it!

But I would definitely like to have a tool that can critique my writing. A tool that could, for instance, highlight where my piece is biased or where I could consider additional sources of data and literature to enrich my perspective. Sort of a much smarter version of the spell checker that tells me if there’s a typo, if I used punctuation incorrectly or if I leaned too heavily on the passive voice. This augmentative AI tool would push my brain to think more, not less. And I’d still be able to make my own choices on whether to apply the advice or not.
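To make the “critique, don’t rewrite” idea concrete, here is a toy sketch in Python of a rule-based writing critic. The rules and thresholds are my own illustrative assumptions, a long way from the smart tool described above, but they show the augmentative pattern: the tool returns advice, and the writer stays free to accept or ignore each note.

```python
import re

# Toy heuristic: flag possible passive constructions, don't fix them.
PASSIVE_HINT = re.compile(
    r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def critique(text: str) -> list[str]:
    """Return advisory notes about the text instead of rewriting it."""
    notes = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        if len(sentence.split()) > 30:
            notes.append(f"Sentence {i}: over 30 words; consider splitting it.")
        if PASSIVE_HINT.search(sentence):
            notes.append(f"Sentence {i}: possible passive voice; check who is acting.")
    return notes

if __name__ == "__main__":
    draft = "The report was written by the team. It explains the findings clearly."
    for note in critique(draft):
        print(note)
```

A real augmentative tool would of course go far beyond regular expressions, but the design choice is the important part: output suggestions, never silent rewrites.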

Policymakers need to think about how they can shape new norms to maximise the benefits and tackle the risks of AI. They could, for instance, recommend (or mandate) a machine-readable label that makes it possible to recognise whether a piece of content was generated by AI, for example in the case of government-regulated certifications. But regulation is not enough.
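To illustrate what such a machine-readable label could look like, here is a minimal sketch in Python. The schema and field names are hypothetical (real content-provenance efforts such as C2PA define far richer, cryptographically signed manifests), but the core idea of binding a declaration of AI generation to a hash of the content carries over.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_ai_content_label(content: bytes, provider: str, model: str) -> str:
    """Build a hypothetical machine-readable label for AI-generated content."""
    label = {
        # Hypothetical schema identifier; a real standard would define its own.
        "schema": "example.org/ai-content-label/v0.1",
        "generated_by_ai": True,
        "provider": provider,
        "model": model,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash binds the label to this exact content, so tampering is
        # detectable; a production scheme would also sign the label.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(label, indent=2)

if __name__ == "__main__":
    text = "This certificate summary was drafted by a generative AI model."
    print(make_ai_content_label(text.encode("utf-8"), "Example AI Inc.", "example-gpt"))
```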

If that shaping of norms does not happen, AI will fail to meet the high expectation that it can be a positive force in the future. In fact, according to our Future Enterprise Resilience and Spending Survey (Wave 11, December 2022), only 25% of government executives worldwide think the promise of AI has completely lived up to their organisation’s expectations.

The future of generative AI (and the AI market in general) will depend on whether users and suppliers embrace the human augmentation narrative, in both the B2B and B2C worlds. We need to ask ourselves what kind of AI solutions we want — solutions that replace humans or solutions that augment them. And then we need to design and engineer them in a way that reflects that purpose.

I look forward to discussing more about the power of innovation, and how we can use it at scale to make a positive and ethical impact on society, at our Government Xchange.
