The State of Implementation of Generative AI in Manufacturing

Jan Burian (Head of IDC Manufacturing Insights EMEA, IDC EMEA)

San Francisco-based OpenAI’s introduction of ChatGPT on November 30, 2022, marked a significant milestone in the development of large language models (LLMs) and generative AI (GenAI) technology. The launch by OpenAI, the creator of the initial GPT series, sparked a race among technology vendors, system providers, consultants, and app builders. These entities immediately recognized the potential of ChatGPT and similar models to revolutionize industry.

2023 saw a surge in efforts to develop GenAI tools that are smarter, more powerful, and less prone to hallucinations. The competition led to an influx of innovative ideas and tools aimed at harnessing the capabilities of LLMs. The goal became to leverage these models as ultimate tools to enhance productivity, competitiveness, and customer experience across diverse sectors.

With ChatGPT paving the way, a broad range of organizations and professionals are exploring how to integrate GenAI into workflows and solutions. The widespread interest and investment have underscored the technology’s transformative potential and laid the groundwork for its continued evolution in the years to come.

4 Use Cases for GenAI in Manufacturing

In manufacturing organizations, the utilization of GenAI-powered tools and solutions is primarily focused on four key areas:

  1. Content Generation: This includes automated report generation, in which GenAI algorithms are employed to automatically generate reports based on predefined parameters and data inputs.
  2. User Interface Enhancement: This involves the integration of chatbots into user interfaces, enabling more intuitive and interactive communication between users and systems.
  3. Knowledge Management: GenAI facilitates knowledge management by providing co-pilot services that help users access and interpret vast amounts of data and information.
  4. Software and Delivery: This encompasses various applications, such as code generation, in which GenAI is leveraged to automate the creation of software code, streamlining development processes.
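As an illustration of the first use case, the sketch below shows how automated report generation might be wired up: structured data inputs and predefined parameters are turned into a constrained prompt for a text-generation model. This is a minimal, hypothetical sketch; the names (`ReportRequest`, `build_report_prompt`, `generate_report`) are invented for the example, and the `llm` callable stands in for whichever model client an organization actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReportRequest:
    """Predefined parameters that scope what the model may write about."""
    plant: str
    period: str
    metrics: dict[str, float]  # e.g. {"OEE": 0.82, "scrap_rate": 0.03}

def build_report_prompt(req: ReportRequest) -> str:
    """Turn structured data inputs into a constrained prompt."""
    lines = [f"- {name}: {value}" for name, value in req.metrics.items()]
    return (
        f"Write a short production report for plant {req.plant}, "
        f"period {req.period}, using ONLY these figures:\n" + "\n".join(lines)
    )

def generate_report(req: ReportRequest, llm: Callable[[str], str]) -> str:
    """`llm` is any text-completion callable; swap in a real model client."""
    return llm(build_report_prompt(req))
```

Constraining the prompt to the supplied figures is one simple guard against the hallucination problem mentioned earlier, though it does not eliminate it.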

According to IDC’s GenAI ARC Survey of 2023, manufacturing organizations are actively evaluating or implementing GenAI solutions.

Around 30% of European respondents have already invested significantly in GenAI, with spending plans established for training, acquiring GenAI-enhanced software, and consulting. Nearly 20% are doing initial testing of models and focused proofs of concept but do not yet have a spending plan in place.

These results suggest steady growth in the adoption of GenAI-powered tools and solutions within the manufacturing sector. The initial hype surrounding GenAI in 2023, fueled by its perceived potential as a “wonder technology,” has evolved into a pragmatic recognition of its capacity to address ongoing challenges such as workforce shortages, skills gaps, language barriers, data complexity, regulatory compliance, and more.

In the manufacturing industry, GenAI is increasingly viewed as an enabling technology capable of facilitating innovation and overcoming barriers to success.

Framework for Manufacturing Organizations to Implement GenAI

To fully capitalize on the potential of GenAI pilots, manufacturing organizations recognize the need for comprehensive frameworks that encompass processes and policies. Key measures include:

  • Data Sharing and Operations Practices: Organizations should prioritize the implementation of practices that ensure data integrity for LLMs developed internally or in collaboration with third parties. This ensures that data used in GenAI models is accurate, reliable, and ethically sourced.
  • Corporate-Wide Guidelines for Transparency: Guidelines should be established to evaluate transparency and track the use of GenAI code, data, and trained models throughout the organization. This promotes accountability in GenAI usage.
  • Mandatory GenAI Awareness and Acceptable Use Training Programs: Mandatory training programs should be implemented to raise awareness of GenAI capabilities and ethical considerations among designated workforce groups. This helps ensure that employees understand how to responsibly utilize GenAI technologies.

As excitement over the capabilities of GenAI has died down, organizations are becoming increasingly aware of the risks posed by potential intellectual property theft and privacy threats linked to the technology.

To address these concerns, many organizations are prioritizing the establishment or expansion of formal AI governance/ethics/risks councils tasked with overseeing the ethical use of GenAI and mitigating risks associated with privacy, manipulation, bias, security, and transparency.

As a manufacturing interviewee in one of my studies put it, “The governance framework is indispensable in ensuring responsible and ethical AI implementation.” This underscores the importance of implementing robust governance measures to ensure the ethical use of GenAI within manufacturing organizations.

Deployment Strategies

Strategies for selecting the right solution for the right use case can vary substantially. A global white goods company, for example, piloted several GenAI-powered use cases in 2023. Its selection and deployment strategy encompassed a range of approaches, including:

  • Off-the-Shelf Solutions: The company utilized ready-to-use, commercially available GenAI-embedded software-as-a-service solutions. These offered immediate access to GenAI capabilities without the need for extensive development or customization.
  • AI Assistants: It deployed AI assistants to support specific tasks within its business processes. These assistants helped, for example, to create designs based on predetermined workflows, providing valuable support and efficiency gains.
  • AI Agents: The company deployed AI agents in complex use cases requiring the orchestration of workflows and decision-making based on AI-driven insights. The agents leveraged GenAI to analyze data and make informed decisions autonomously.

A primary challenge often mentioned in such endeavors is selecting the optimal LLM for company-specific use cases from a multitude of possibilities. With new models and solutions constantly emerging and becoming accessible, this task can be daunting. The selection process typically involves thorough market research, vendor presentations, and internal discussions about the technology framework underlying current and future use cases.

However, the success of GenAI ultimately hinges on the quality and quantity of the data utilized. Curating a diverse and sufficient data set is critical to ensuring unbiased outcomes and maximizing the effectiveness of GenAI solutions. Data curation therefore remains a cornerstone of success in leveraging GenAI technologies.

The Bottom Line

GenAI-powered technology holds immense potential across industries and regions, offering capabilities that traditional machine learning algorithms or neural networks may struggle to match in terms of breadth and depth. GenAI can assist in co-piloting humans, thereby addressing challenges associated with an aging and/or unqualified workforce.

However, organizations must prioritize addressing concerns such as data leakage, biases, and maintaining sovereignty over IT processes running in the background. These issues must be carefully managed to ensure the responsible and ethical implementation of this powerful technology.


Dilemmas for Software Vendors when Embedding Generative AI into Applications

Bo Lykkegaard (Associate VP for Software Research Europe)
Edyta Kosowska (Program Manager, IDC European Enterprise Applications Program)

The past year and a half has demonstrated the impressive capabilities of generative AI (GenAI) systems, such as ChatGPT, Bard, and Gemini. Business application vendors have since begun a sprint to build the newly enabled capabilities (summarizing, drafting text, natural language conversation, etc.) into their products. And organizations across industries have started to deploy generative AI to help serve customers, hoping that GenAI-powered chatbots could provide a better customer experience than the failed and largely useless service chatbots of the past.

The results have started to come out, and they are mixed. The service chatbots of organizations such as Air Canada and DPD have made unsubstantiated offers or even produced rogue poetry. Another customer chatbot, for a Nordic insurance company, was not updated after the latest website reorganization and kept sending customers to outdated, decommissioned web pages.

The popular Microsoft Copilot has hallucinated about recent events and invented occurrences that never happened. Based on personal experience, a customer meeting summary written by generative AI included a final evaluation of the meeting as "largely unproductive due to technical difficulties and unclear statements," an assessment not echoed by the human participants.

These issues highlight several dilemmas related to using generative AI in software applications:

  • Autonomous AI functions versus human-supervised AI. Autonomous AI is attractive to customer service departments because of the cost difference between a chatbot and a human customer service agent. This cost saving potential must, however, be balanced against the risk of reputational damage and negative customer experiences as a result of chatbot failures and mishaps.

Instead, designing solutions with a "human in the loop" may have multiple benefits. Incorporating employee oversight to guide, validate, or enhance the performance of AI systems may not only improve output accuracy but also increase adoption of GenAI solutions. For example, a customer service agent could have a range of tools, such as automatically drafted chat and email responses, intelligent knowledge bases, and summarization tools, that augment productivity without replacing the human.
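The human-in-the-loop pattern just described can be reduced to a simple draft/review/send gate. The sketch below is an illustration of that idea, not any vendor's actual workflow; the `Draft`, `review`, and `send` names are invented for the example, and `llm` again stands in for a real model client.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    reply: str
    approved: bool = False

def draft_reply(ticket_text: str, llm: Callable[[str], str]) -> Draft:
    # The model only drafts; nothing reaches the customer yet.
    return Draft(reply=llm(f"Draft a polite reply to: {ticket_text}"))

def review(draft: Draft, approve: bool,
           edited_text: Optional[str] = None) -> Draft:
    # A human agent approves as-is, edits first, or rejects.
    if edited_text is not None:
        draft.reply = edited_text
    draft.approved = approve
    return draft

def send(draft: Draft) -> str:
    # Hard gate: unapproved AI output is never sent.
    if not draft.approved:
        raise RuntimeError("unapproved AI draft must not reach the customer")
    return draft.reply
```

The design choice here is that approval is an explicit, auditable state on the draft itself, so the cost saving of automation is traded for a guaranteed human checkpoint.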

  • At what point is company-specific training enough? In other words, should organizations make extensive training investments in company-specific large language models (LLMs), or rely on out-of-the-box LLMs, such as ChatGPT, for good-enough answers? In some of the generative AI failures described above, it seems that the company-specific training of the AI engine was too superficial and did not cover enough interaction scenarios.

As a result, the AI engine fell back on its foundational LLM, such as GPT or PaLM, and these did, in some cases, act in unexpected and undesired ways. Organizations are understandably eager not to reinvent the wheel with respect to LLMs, but the examples above show that over-reliance on general LLMs is risky.

  • Keeping the chat experience simple versus allowing the user to report issues. These include errors, biased information, irrelevant information, offensive language, and incorrect format. A good software user experience is helped by a clean user interface; in the context of generative AI, think of the prompt input field in an application. Traditional wisdom suggests keeping this very clean. However, what is the user supposed to do in the case of errors or other unacceptable AI responses, and how is the user supposed to verify sources and training methods?

This is linked to the need for “explainable AI”, which refers to the concept of designing and developing AI systems in such a way that their decisions and actions can be easily understood, interpreted, and explained by humans.

The need for explainability has arisen because many advanced machine learning models, especially deep neural networks, are often treated as “black boxes” due to their complexity and the lack of transparency in their decision-making processes.
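One lightweight way through this dilemma is to keep the interface clean but attach a single "report issue" affordance to each AI response, with the response object carrying its sources so the user can verify them. The sketch below is a hypothetical illustration; the category names simply mirror the issue types listed above, and none of these names come from any real product.

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    text: str
    sources: list[str] = field(default_factory=list)  # surfaced for verification

# Categories mirror the issue types named in the dilemma above.
ISSUE_CATEGORIES = {"error", "bias", "irrelevant", "offensive", "format"}

FEEDBACK_LOG: list[dict] = []

def report_issue(resp: AIResponse, category: str, note: str = "") -> dict:
    """One 'report' affordance: keeps the chat UI clean while still
    capturing structured feedback for the vendor's quality loop."""
    if category not in ISSUE_CATEGORIES:
        raise ValueError(f"unknown issue category: {category}")
    entry = {"response": resp.text, "sources": resp.sources,
             "category": category, "note": note}
    FEEDBACK_LOG.append(entry)
    return entry
```

Structured categories, rather than a free-text complaint box, make the resulting feedback usable for retraining and monitoring.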

  • Using generative AI for very specific and controlled use cases versus general AI scenarios. One way to potentially curb the risks of AI errors is to frame the use of AI into specific and limited application use cases. One example is a “summarize this” button as part of a specific user experience next to a field with unstructured text. There is a limit to how wrong this can go, as opposed to an all-purpose prompt-based digital assistant.

This is a difficult dilemma simply because of the attractiveness of a general-purpose assistant, which has prompted vendors to announce such general assistants (e.g., Joule from SAP, Einstein Copilot from Salesforce, Oracle Digital Assistant, and Sage Copilot).
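A constrained, single-purpose action of this kind can be as small as one function: the user never writes a free-form prompt, so the blast radius of a bad answer is limited to a single field. The `summarize_field` name and its prompt wording below are assumptions for illustration, not any vendor's implementation.

```python
from typing import Callable

def summarize_field(field_text: str, llm: Callable[[str], str],
                    max_words: int = 60) -> str:
    """Single-purpose action behind a 'summarize this' button: the user
    never writes a free-form prompt, so the model can only ever be asked
    to summarize this one field."""
    if not field_text.strip():
        return ""  # nothing to summarize; do not invoke the model at all
    prompt = (
        f"Summarize the following text in at most {max_words} words. "
        "Do not add information that is not in the text.\n\n" + field_text
    )
    return llm(prompt)
```

Contrast this with an all-purpose prompt box, where the surface for errors, prompt injection, and off-topic answers is essentially unbounded.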

  • Charging customers for generative AI value versus wrapping it into existing commercial models. GenAI is known to be expensive in terms of compute costs and the manpower needed to orchestrate and supervise training. This raises the question of whether such new costs should be passed on to customers.

This is a complex dilemma for a number of reasons. Firstly, AI costs are expected to decline over time as this technology matures. Secondly, AI functionality will be embedded into standard software, which is already paid for by customers.

The embedded nature of many AI application use cases will make it very difficult for vendors to charge for incremental, separate new AI functions. Mandatory additional AI-related fees on top of existing SaaS subscriptions are likely to be met with strong objections from customers.

  • Sharing the risk of inaccurate generative AI outputs with customers and partners versus letting customers be fully accountable. Generative AI will increasingly be leveraged to support key personas' decision-making processes in organizations. What if the model hallucinates and the outputs are misleading? And what if the consequence is a wrong decision with a serious negative impact on the client organization? Who is going to take responsibility for the consequences of those actions? Should customers accept this burden alone, or should accountability be distributed between vendors, their partners (e.g., LLM providers), and end customers?

In any case, vendors should have full transparency into their solutions (including clear procedures for training, implementing, monitoring, and measuring the accuracy of generative AI models) so that they can immediately provide the required information to the customer in the case of an emergency.

 

After taking the enterprise technology space by storm, generative AI is likely to progress more slowly than initially expected. As a new technology, GenAI might enter the "phase of disillusionment," to paraphrase colleagues in the analyst industry.

This slowdown will be driven by a more cautious adoption of AI in enterprise software, as new horror stories instill fear of reputational damage in CEOs across industries. We believe that new generative AI rollouts will have more guardrails, more quality assurance, more iterations, and much better feedback loops compared to earlier experiments.


Identity Security 2024: Mapping the Threats and Goals

Mark Child (Associate Research Director, European Security)

The efficient management of identities and access has become central to digital business. It determines the speed and agility with which an organization is able to operate or pursue new goals; it underpins employee productivity and enables operational efficiencies; and it is key to security, privacy, and compliance. Most organizations have deployed identity and access management (IAM) solutions to handle their operational demands effectively.

However, the identity infrastructure and processes themselves are a frequent target of cyberattackers, driving recognition that identity security measures need to be improved.

What Are the Main Identity Threats?

IDC’s Global Identity Management Assessment Survey 2023 found that in Western Europe, the two categories of identity that are perceived as the biggest threats are hybrid or remote employees and partners, suppliers, or affiliates (each category mentioned by 49.6% of respondents). The external nature of these identities — from a location perspective, an employment perspective or both — increases the attack surface of the organization and creates potential vulnerability and exposure of data, systems, and processes.

Nevertheless, those roles also provide access to a broader talent pool and deliver operational efficiencies and economies of scale, allowing organizations to outsource non-core functions. Consequently, organizations are striving to accurately assess and manage the risk.

What Are the Top IAM Investments?

Accordingly, the top two service areas in which Western European organizations are planning to make significant IAM investments to address the security risk are identity management for roles and authorizations (56.9%) and privileged access management (PAM – 53.3%).

Note that since the onset of the COVID-19 pandemic in 2020, investments in PAM have been growing steadily, as organizations have required greater control over remote employees accessing sensitive corporate applications and data.

Which IAM Areas Must Improve?

The survey also asked which IAM areas organizations need to improve on significantly in the next 18 months. From a list of options including functional, operational, structural, and organizational aspects, the top responses were squarely in the area of identity security:

  • The biggest share of organizations (45.1%) want to improve their ability to detect insider threats.
  • A further 44.3% aim to improve identity threat detection and response (ITDR).
  • 9% aim to improve integration with other IT security solutions.

The emergence of ITDR over the past couple of years as a key priority for organizations building out their security and identity capabilities has been a consistent takeaway across multiple IDC surveys.

The final area to touch on is the "wish list" question, always a good barometer of what respondents really value: if your organization had the budget and resources to do so, what is the one identity technology solution you would add or strengthen in the next three months?

The top response was strong authentication, such as two-factor authentication or multifactor authentication (MFA), cited by 25.6%. This was followed by generative AI (GenAI) for fraud detection and identification of synthetic identities (20.3%) and, again, ITDR (19.5%).

The rapid maturing of deepfake tools and capabilities, underlined by real-world examples of successful attacks, is already driving demand for security tools to protect against them as the GenAI arms race heats up.

Identity really is at the heart of everything in the digital era: business, security, trust, compliance, risk management, operational efficiency, and more. It is fundamental to enterprise initiatives such as building cyber resilience or adopting zero trust principles.

Many direct references to IAM and identity security controls in the growing landscape of EU legislation further emphasize why identity should be high on every organization’s priority list. This new report maps many of the key trends shaping the European identity and access landscape in 2024.


OpenAI - Just the First Stage of the GenAI Rocket?

Neil Ward-Dutton (VP, AI and Intelligent Process Automation European Practices, IDC Europe)
Ewa Zborowska (Research Director, AI Europe)

When NASA created its Apollo launch vehicles to take payloads to space (including humans), they were designed with multiple segments. The segment nearest the ground on launch (the “first stage”) contained huge rockets and fuel tanks that could get everything into the air and accelerate it to a velocity where it could escape Earth’s gravity. At this point, still some way before the edge of Earth’s atmosphere, the first stage would be jettisoned, to fall back to Earth. The rest of the vehicle would continue on its way, with escape velocity now reached.

A Frenzy of FOMO

OpenAI is the outfit that — above all others — is responsible for the rapid acceleration of interest and investment in generative AI (GenAI) technologies. The launch of ChatGPT in November 2022 kick-started a frenzy of FOMO, first for many individuals (after all, ChatGPT did surpass 1 million users in just five days) and then in businesses — as well as catalyzing conversations about intellectual property in the digital age, potential impacts of AI on employment and skills, and more.

Just over 12 months after the GenAI market launch, created primarily by the attractiveness of OpenAI's consumer services, IDC conducted a worldwide survey that demonstrated the incredible momentum behind the new technology within businesses: in January 2024, 68% of organizations already exploring or working with GenAI said it would have an impact on their business in 2024-2025, and an astounding 29% said that GenAI had already disrupted their business to some extent.

OpenAI continues to benefit from amazing levels of mindshare, thanks to the good old rule of “be first”, but also to the undeniable PR power of its CEO Sam Altman — not least within senior business leadership circles. But mindshare is not enough; it also benefits from a strategic partnership with Microsoft, which has seen Microsoft committing to provide $13 billion of investment, in return for an exclusive license to OpenAI’s IP and an agreement that it would be OpenAI’s exclusive cloud provider.

The heavily promoted downstream results of that partnership (Azure OpenAI Service, use of OpenAI models in Microsoft Copilot, and so on) have continued to create mindshare momentum.

And yet: OpenAI is not currently traveling along the route that businesses want to take.

OpenAI’s Alignment Problem

The outfit was founded as a not-for-profit research institute focused on developing artificial general intelligence (AGI), a currently hypothetical future level of capability that envisions AI systems performing as well as or better than humans on a wide range of cognitive tasks, with a capped-profit company subsidiary (which is the entity invested in by Microsoft and others).

However, when we ask organizations what they need from GenAI in order to create business value from the technology, they typically cite qualities such as accuracy, privacy, security and frugality. For example: 28% of organizations are concerned that GenAI jeopardizes control of data and intellectual property; 26% are concerned that GenAI use will expose them to brand or regulatory risks; and 19% of respondents are concerned about the accuracy or potential toxicity in the output of GenAI models.

OpenAI is innovating fast, but the dominant innovation focus is on breadth and depth of functionality (e.g., the introduction of “multimodal” models that can manipulate multiple content types, including text, images, sound, and video). Not on accuracy, privacy, security, frugality, and so on.

Currently, it is vendors “higher up the stack” (enterprise application and enterprise software platform vendors) who are attempting to bridge the gap with functionality aimed at addressing trust issues and minimizing risks. But it is clear that foundation model providers also need to bear some responsibility for… being responsible.

Beyond OpenAI: An Explosion of GenAI Model Providers

OpenAI might have amazing mindshare right now, but it is already far from the only source of GenAI model innovation. Fueled by venture capital and corporate investment, competitors have flooded into the space, including:

  • GenAI research-focused vendors like Anthropic, AI21, and Cohere
  • Hyperscale public cloud providers AWS and Google
  • Enterprise technology platform vendors including IBM, Oracle, ServiceNow, and Adobe
  • Sovereignty-focused providers, including Mistral, Aleph Alpha, Qwen, and Yi
  • Industry-specialized providers, including Harvey (legal services) and OpenEvidence (medicine)
  • A vibrant and fast-growing open-source model community, with thousands of GenAI-related projects hosted by Hugging Face and GitHub

Open-source communities are a particularly energetic vector of innovation: open-source projects are quickly evolving model capabilities in terms of model size and efficiency, training and inferencing cost, explainability, and more.

Microsoft Is Clearly Looking Beyond OpenAI

In late February, Microsoft President Brad Smith published a blog post announcing Microsoft’s new “AI Access Principles”.

There’s a lot of detail in the post, but underpinning it all is a clear direction: in order to reinforce its credentials as a “good actor” in the technology industry and minimize the risks of interventions by industry regulators around the world, Microsoft is committing to support an open AI (no pun intended) ecosystem across the full AI technology stack (from datacenter power and connectivity and infrastructure hardware to services for developers). As part of this, it is increasingly emphasizing the importance of a variety of different model providers. For instance, it’s made a recent small investment in France’s Mistral AI and is expanding support for models from providers like Cohere, Meta, NVIDIA, and Hugging Face in its platform.

Will OpenAI Fly or Crash?

In order for OpenAI to reap significant rewards from business demand for GenAI technology implementation, it is going to have to evolve its approach. While the initial success of ChatGPT captured market attention, the rapidly evolving landscape of both GenAI technology supply and demand requires a stronger business focus. OpenAI is faced with tension between its research-oriented ethos and the market’s demand for practical AI applications. This alignment problem raises questions about its identity and future strategy.

Lastly — what about Microsoft? It must back its new principles with tangible actions that genuinely advance AI responsibly. It needs to ensure transparency and avoid actions that would suggest it only uses “responsible AI” as a PR tool for driving profits. It needs to promote both innovation and competition. Nobody wants a world where one model’s dominance could stifle competition and limit options for developers.

Hence, fostering an open and inclusive ecosystem where smaller players can grow will be imperative for Microsoft’s credibility and allow for a trustworthy AI ecosystem benefiting everyone.

 

Want to know more? Join IDC's experts from across EMEA on March 19 for an exclusive peek into our latest research to:

  • Uncover real-world use cases from organizations aiming to maximize positive impact of GenAI on their business,
  • Learn about evolving GenAI technology, supplier dynamics, and the shifting regulatory landscape,
  • Gain actionable insights and a roadmap for navigating GenAI's possibilities and challenges in 2024 and beyond.

Register for the webcast here: How EMEA Organizations Will Deliver Business Impact With GenAI – Beyond the Hype.