The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.
GenAI opens access to a wealth of research and data that can be leveraged to generate a broad variety of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, and even virtual environments.
We observe three ways that industrial users typically encounter GenAI:
- Publicly Available Tools: ChatGPT-like tools provide users with information, generated content, or code. These publicly available tools and apps provide solid value to users. From a process point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned them to prevent sensitive data leakage.
- Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. It often appears in the form of “copilots,” AI assistants designed to support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
- Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.
Be Ready — But Careful
In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models, predictive maintenance, and complex simulations built on soft sensors.
Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.
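As a minimal illustration of that kind of verification, the sketch below compares hypothetical model predictions against inspector-verified reference labels; the labels, values, and review threshold are assumptions for illustration, not output from any real system.

```python
# Minimal sketch: verifying model outputs against a known reference ("gold") standard.
# The labels and predictions below are hypothetical placeholders.

reference_labels = ["ok", "scratch", "ok", "dent", "ok"]      # inspector-verified ground truth
model_predictions = ["ok", "scratch", "dent", "dent", "ok"]   # outputs from the trained model

matches = sum(1 for ref, pred in zip(reference_labels, model_predictions) if ref == pred)
accuracy = matches / len(reference_labels)

print(f"Agreement with reference standard: {accuracy:.0%}")
# Outputs that fall below an agreed threshold (e.g., 95%) would trigger review or retraining.
```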
AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.
AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.
Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.
Output Challenges
Like any other methodology that serves industry, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of repeatability and reproducibility. As a reminder, repeatability measures how consistent results are when an experiment or measurement is repeated under the same conditions, whereas reproducibility measures whether the same results can be obtained when conditions change (different operators, instruments, or laboratories). Both are a means to evaluate the stability and reliability of an experiment and are key factors in uncertainty calculations of measurements.
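For readers who want a concrete picture of the distinction, here is a small illustrative sketch assuming hypothetical measurement data: repeatability is estimated from repeated measurements under identical conditions, reproducibility from the spread between different operators.

```python
# Illustrative sketch of repeatability vs. reproducibility using hypothetical measurement data.
from statistics import mean, pstdev

# Same part measured repeatedly under identical conditions (one operator, one instrument).
same_conditions = [10.02, 10.01, 10.03, 10.02, 10.01]

# The same part measured under changed conditions (different operators).
changed_conditions = {
    "operator_a": [10.02, 10.01, 10.03],
    "operator_b": [10.06, 10.05, 10.07],
}

repeatability = pstdev(same_conditions)                   # spread under identical conditions
operator_means = [mean(v) for v in changed_conditions.values()]
reproducibility = pstdev(operator_means)                  # spread between conditions

print(f"Repeatability (std dev, same conditions):   {repeatability:.4f}")
print(f"Reproducibility (std dev, across operators): {reproducibility:.4f}")
```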
GenAI-based tools can seem like a black box to many potential industrial users. GenAI bias is a significant concern. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.
GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.
GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.
Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.
If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.
We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.
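One way such monitoring and feedback loops could look in practice is sketched below. It is only an assumption-laden example: the attribute_skew helper, the machine-type categories, and the 10% tolerance are hypothetical choices, not a standard bias-detection method.

```python
# Hedged sketch: compare the distribution of attributes in generated content against a
# reference distribution and flag large deviations for human review.
from collections import Counter

def attribute_skew(generated_items, attribute_of, reference_share, tolerance=0.10):
    """Flag attributes whose share in generated output drifts beyond the tolerance."""
    counts = Counter(attribute_of(item) for item in generated_items)
    total = sum(counts.values())
    flags = {}
    for attribute, expected in reference_share.items():
        observed = counts.get(attribute, 0) / total
        if abs(observed - expected) > tolerance:
            flags[attribute] = (observed, expected)
    return flags

# Hypothetical example: generated maintenance recommendations tagged by machine type.
generated = [{"machine": "press"}] * 70 + [{"machine": "robot"}] * 30
reference = {"press": 0.5, "robot": 0.5}  # share observed in historical, human-authored data

print(attribute_skew(generated, lambda item: item["machine"], reference))
# Flagged attributes would be routed to human reviewers as part of the feedback loop.
```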
Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.
Avoiding AI hallucinations entirely is challenging, but there are several actions that can be taken to limit occurrence or minimize impact. Let’s touch on a few:
- Ensure that your AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world.
- Preprocess and clean the training data to remove inaccuracies, outliers, or misleading information, improving the quality and reliability of the model’s outputs.
- Continuously evaluate and monitor the model’s outputs to identify instances of hallucination or generation of unrealistic content (a simple grounding check is sketched after this list).
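As a rough idea of what such monitoring could look like for the defect-image example above, here is a minimal grounding check; the approved_defect_catalog and the generated reports are hypothetical.

```python
# Minimal sketch of a grounding check: generated content is screened against a known,
# approved catalog so that references to non-existent items are flagged for review.
# The catalog and generated examples are hypothetical.

approved_defect_catalog = {"scratch", "dent", "porosity", "misalignment"}

generated_reports = [
    {"id": 1, "defect": "porosity"},
    {"id": 2, "defect": "quantum corrosion"},   # plausible-sounding but not in the catalog
]

ungrounded = [r for r in generated_reports if r["defect"] not in approved_defect_catalog]

for report in ungrounded:
    print(f"Report {report['id']} references unknown defect '{report['defect']}' "
          "and requires human verification.")
```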
Evolving Challenges
Because GenAI models generate new and original content without explicit programming, proving their reliability can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.
Commonly used methods include defining and applying appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments in which reviewers rate the quality and reliability of generated content.
For some specific use cases (e.g., copilots), test set validation can be utilized. This involves creating a test set of scenarios or inputs representative of the desired output and evaluating the generated results against the expected responses.
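A minimal sketch of such test-set validation might look like the following, assuming a placeholder generate() function standing in for the copilot under test and a simple token-overlap score as the evaluation metric.

```python
# Hedged sketch of test-set validation: representative prompts paired with expected
# answers, scored with a simple token-overlap similarity. Everything here is illustrative.

def generate(prompt: str) -> str:
    # Placeholder standing in for the GenAI system under test.
    return "Check the hydraulic pressure sensor and inspect the seal."

test_set = [
    {
        "prompt": "The press is losing pressure intermittently. What should I check first?",
        "expected": "Check the hydraulic pressure sensor and the seals.",
    },
]

def token_overlap(a: str, b: str) -> float:
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

for case in test_set:
    score = token_overlap(generate(case["prompt"]), case["expected"])
    status = "PASS" if score >= 0.5 else "REVIEW"
    print(f"{status} (similarity {score:.2f}): {case['prompt']}")
```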
Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
The Sky is the Limit — For Now
In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.
To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.
Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.
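To make the arithmetic concrete, a simple payback calculation is sketched below; every figure is hypothetical and would need to be replaced with the organization’s own estimates.

```python
# Illustrative payback calculation with purely hypothetical figures; actual numbers
# depend on the organization, the use case, and the cost of the underlying IT infrastructure.

initial_investment = 250_000   # licenses, integration, infrastructure (hypothetical)
monthly_benefit = 40_000       # added revenue plus cost-of-poor-quality savings (hypothetical)
monthly_run_cost = 8_000       # cloud and support costs (hypothetical)

net_monthly_gain = monthly_benefit - monthly_run_cost
payback_months = initial_investment / net_monthly_gain

print(f"Estimated payback period: {payback_months:.1f} months")  # ~7.8 months in this example
```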
Beware the Competition!
GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.
But beware: Do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.
At the same time, do not underestimate the risk of intellectual property (IP) leakage: the unauthorized use, disclosure, or exposure of valuable intellectual property through the use of generative AI models. Embed an IP leakage prevention mechanism in your overall AI and data governance. This should include removal or anonymization of sensitive or proprietary information from training data sets.
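A very small sketch of one such mechanism, assuming hypothetical project codes and part-number formats, is a pattern-based redaction pass over training text before it is used for fine-tuning.

```python
# Hedged sketch of one IP-leakage control: redacting known sensitive identifiers from
# text before it is used to train or fine-tune a model. The patterns are illustrative;
# real governance would combine pattern matching with classification and human review.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\bPRJ-\d{4}\b"),                 # hypothetical internal project codes
    re.compile(r"\b[A-Z]{2}\d{6}\b"),             # hypothetical part or drawing numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),       # e-mail addresses
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Tolerances for PRJ-1042 were set by j.doe@example.com using drawing AB123456."
print(redact(sample))
# -> "Tolerances for [REDACTED] were set by [REDACTED] using drawing [REDACTED]."
```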
As always, stick with what works, but keep an eye on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.