5 Key Questions on Generative AI from Tech Investors in Europe
Unless you’ve been living under a rock for the past six months, you’ll have heard of generative AI – technology that enables computers to create synthetic data or digital content based on previously created data or content. The launch of ChatGPT in late 2022 lit a fire under this emerging space and seemingly overnight, hundreds of millions of people became inspired by the results of work that had already been going on for years within academic and commercial technology vendor research departments.
Earlier in June we spent two days touring around investment banks and hedge funds in London to talk to investors about generative AI and answer their questions.
Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures
We had many great, in-depth discussions. Here are the questions that came up most frequently.
Where is the Value in Generative AI in the Short, Medium, and Long Term?
Today, most of the value is being captured by hardware vendors – most notably NVIDIA, which has seen its share price take off following a sharp upswing in its predicted revenues. As the market-leading provider of GPUs, with a strong enabling software story and an emerging as-a-service play, NVIDIA is very well positioned to capitalise on the generative AI boom.
Of course, NVIDIA isn’t the only vendor that potentially stands to benefit; AMD and other semiconductor vendors (including start-ups like Graphcore, Cerebras & Moore Threads) are emerging as challengers, and generative AI platforms will drive storage and networking infrastructure investments too.
In the short to medium term, hyperscale public cloud providers can also expect to benefit significantly. With its early move investing in OpenAI and accelerated investments in generative AI across its software portfolio, Microsoft is in a particularly strong position; but AWS, Google, and Oracle are all also making significant moves in this space.
In the medium term, platform and application vendors also stand to benefit, although the value equation for them is less clear cut. There are significant question marks over which generative AI use cases can support direct monetization, and which will be important to implement from a defensive point of view. Many of the costs associated with managing generative AI models for scale, security, privacy and trust will also fall on their shoulders.
What Will Have to Be True for GenAI to Become a Broadly Adopted Technology?
Right now, we’re still in “year zero” for generative AI in a commercial context. There is still a lot of confusion around the technology and its applicability in practical, real-world use cases.
What is already clear, though, is that publicly shared foundation models delivered as a service (such as those hosted by OpenAI) will only be suitable for a subset of enterprise use cases. For many others, enterprises will use fine-tuned, specialised, domain-specific models that are made available to them directly on a private (or controlled) basis.
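To make that distinction concrete, below is a minimal sketch of what building a fine-tuned, domain-specific model could look like in practice, assuming the open-source Hugging Face transformers and datasets libraries; the base model, sample documents and training settings are illustrative placeholders rather than a recommended recipe.

```python
# Minimal sketch: adapting an open base model to a private, domain-specific corpus.
# Assumes the Hugging Face transformers and datasets libraries; "gpt2" and the
# example documents are stand-ins for whatever base model and data an enterprise
# would actually license and govern.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no dedicated pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Private domain corpus, e.g. internal product manuals or support tickets.
domain_texts = [
    "Internal knowledge-base article: resetting the billing module ...",
    "Support ticket summary: customer reported an export timeout ...",
]
dataset = Dataset.from_dict({"text": domain_texts})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-model")  # deployed behind the enterprise's own access controls
```

In practice, the interesting work is less the training loop itself than the controls around it: who can access the model, what data it was tuned on, and how its outputs are monitored.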
The current state-of-the-art in generative AI yields systems that are prone to accuracy problems, difficult to control and predict, and expensive to run. All of these issues need to be worked on.
What Are the Implications for the Software Landscape?
Every software vendor that IDC is speaking to is updating or recreating its product roadmap to incorporate its generative AI strategy. Obviously, this will play out differently across infrastructure, platforms and applications; however, there are certain common questions being asked:
- Should we develop our own large language models, or should we partner with model providers like OpenAI, Anthropic, Cohere and AI21 and tune their models for our software capabilities?
- How should we price our new Generative AI features?
- Should we include access to customer data for model training as part of a new set of licensing terms and conditions? And what do we offer in return (if anything)?
- Do we need to evolve our support models to include service level agreements (SLAs) covering accuracy for certain use cases being delivered?
Across all these questions, what is clear is that margin protection will be a major issue for software vendors over time – especially those with questionable pricing power. In addition, there will be requirements for additional levels of support to deal with model, context and data drift. For the application players, forms-based computing as a basis for applications is increasingly likely to disappear over time, and certain markets – for example, salesforce automation and human capital management – could be redrawn in the medium term.
As part of these changes, what is becoming clear is that the application vendors that are cloud laggards will be AI laggards, and that platforms will continue to dominate the software landscape.
More importantly, incorporating trusted and responsible AI principles into both product development and customer engagement will move from being a differentiator in the short term to table stakes in the medium term.
What Are the Implications for Developers?
There’s been a significant amount of excitement about the ability of generative AI services (such as GitHub Copilot, Replit Ghostwriter and Warp AI) to generate code, documentation, test scripts, and more.
Today’s state-of-the-art models are not going to put developers out of work. Rather, for some specific types of development work, and for some particular types of software asset, generative AI services are very likely to help developers accelerate their efforts to deliver working software, acting side by side with human developers in a “copilot” arrangement.
But it’s important to keep things in perspective: when we zoom out to consider the broader software delivery lifecycle, pro-innovation developers happy to experiment with new tools tend to bump into deployment, operations and support professionals who are much more risk averse.
What Are the Implications for Services Providers?
Lastly, many of the investment teams we spoke to were very interested in discussing how professional services (particularly IT services) firms might be impacted by generative AI. Will it bring them major new opportunities? Or will its ability to drive automation of knowledge work mean that it forces providers to cannibalise their own businesses?
Our early research shows that more than 65% of early adopters of generative AI capabilities agree or strongly agree that their need for external services providers will be reduced in the future.
The potential impact of generative AI on project delivery is, in some ways, analogous to the potential impact of low- and no-code development tools; if providers can embrace these tools effectively and also deliver trusted solutions to clients, they may find fewer hours are required to deliver projects – but outcomes will be improved for everyone.
Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures
The arrival of Generative AI technologies has created what we believe to be a seminal moment for the industry: it will be so impactful that it will influence everything that comes after it. However, we believe it is just the starting point. We think that Generative AI will trigger a transition to AI Everywhere – moving us from narrow AI applied to specific use cases to AI applied across a broad range of use cases simultaneously.
This means that it will impact every element of the technology stack, and also drive a rethink of all horizontal and vertical use cases. However, given the questions around risk and governance, it will also require every organization to develop and incorporate an AI ethics & governance framework to deal with the risks mentioned earlier.
The investors that we spoke to in London agreed that the tech industry needs to take a balanced approach to commercializing the opportunity, while also ensuring that policies and regulations continue to protect consumers, enterprises and society as a whole.
Generative AI in Healthcare: Benefits and Risks
Exploring the Weaknesses and Strengths of an Innovative Technology
As healthcare IT analysts, we are biased towards excitement about generative AI, but we are also cautious about integrating it into the business at all costs, especially when it comes to healthcare organisations. It’s impossible not to be impressed, excited and terrified when you’re shown the latest technology.
Researchers use it to investigate genes and DNA, identifying patterns and making predictions regarding disease progression in a fraction of the time such analysis would otherwise take. A first generation of generative AI is already being considered to facilitate and automate many clinical processes; one compelling example is the personalisation of care plans.
For example, generative AI algorithms can be used to refine and further personalise engagement with patients, directing them to the right resources across multiple clinical systems, improving their experience and optimising their pathways.
Nevertheless, what is still missing is an understanding of whether, when and how healthcare organisations really need generative AI; and once that decision is made, they need to define how to govern the technology and its risks.
Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures
The Potential Risks of Generative AI in the Healthcare Industry — Regulations Will Be Needed
Governments, public authorities, industry experts and academia should have deep discussions to develop policy frameworks that both regulate potential harms and unlock benefits. They should engage in a collective debate and forge a collective path forward.
As we have already seen with AI technologies, and now with generative AI, without the right rules and protections things are going to get seriously out of hand, and quickly. For the healthcare market, these words resonate all the more, for several reasons:
- First, regulation plays a key role where generative AI touches sensitive medical data, and where that data intersects with benefits for the healthcare community and for us all. A simple example is the use of personal medical data to conduct drug discovery and clinical trials.
Is it “right” to share our personal healthcare data with scientists and healthcare professionals in order to deliver innovative treatments and drug discovery for the entire population? The issue of protecting sensitive patient data from being disclosed without the patient’s consent was already raised with the adoption of AI-based applications; in the case of generative AI, it is even more difficult to manage.
For instance, patients’ consent can’t easily be exercised in the case of an unlearning process: removing selected data points from a model might affect the performance of the model itself.
- Second, the risks of abuse are extensive, because the accuracy of the responses from these generative AI tools largely depends upon the data used to train them. Without a real, human understanding of the healthcare topic under analysis, these models create and predict what is statistically likely or looks plausible, but not necessarily what is true.
This raises reasonable concerns about their use in clinical practice, which will require immediate regulation.
- Third, the IT infrastructure underpinning generative AI requires huge investments from healthcare organisations. To perform efficiently and effectively, these large language models need continuous training on real-world health data. But this requires major investment in clusters of compute, storage, networking, and systems infrastructure software.
Furthermore, resources are needed to manage, optimise, scale, and secure the entire infrastructure and associated applications to prevent privacy breaches and ensure business continuity.
Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures
The Potential Benefits for the Healthcare Industry Are Significant
Despite the concerns surrounding generative AI, its potential benefits for the healthcare industry cannot be overlooked. By harnessing this technology, the healthcare sector can:
- Improve workforce experience:
  - Streamlining clinical documentation, generating patient histories and referrals, and suggesting order entries.
  - Helping to explain medical conditions to patients in simpler terms and in an empathetic way.
  - Analysing patient data, identifying patterns, making predictions regarding disease progression and treatment response, and suggesting treatment plans.
- Improve quality of care:
  - Improving patient experience by answering basic questions, explaining medical terms, scheduling appointments and directing patients to appropriate resources.
  - Helping to collect more accurate health data from different sources (wearables, conversations, EHRs) to support personalised health recommendations.
  - Enriching digital therapeutics solutions and expanding the capabilities of remote care and treatment.
Generative AI holds immense promise for healthcare, but we must strike the right balance between innovation and safeguarding patient interests. Collaborative efforts involving governments, providers, industry experts and academia are crucial to develop policy frameworks that address concerns, ensure data privacy, validate accuracy and optimise the integration of generative AI in healthcare.
Are you more worried or more excited about generative AI? Please share your thoughts with us, and in the meantime, we invite you to read our latest research on the topic.
If you are interested in knowing more about IDC Health Insights’ upcoming research, please contact Silvia Piai or Adriana Allocato.
GenAI in an Industrial Environment — Recommendations for Early Adopters
The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.
GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.
We observe three ways that industrial users can engage with GenAI:
- Publicly Available Tools: ChatGPT-like tools provide users with information, content generation, or code. These publicly available tools and apps provide solid value to users. From a process area point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned using them to prevent sensitive data leakage.
- Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. It can be present as a “copilot”: an AI system designed to assist and support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
- Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.
Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures
Be Ready — But Careful
In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models and predictive maintenance driven by complex simulations and soft sensors.
Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.
AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.
AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.
Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.
Output Challenges
Like any other methodology that serves industry, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of reproducibility and repeatability. Let me remind you that repeatability refers to how consistent results are when the same measurement is repeated under identical conditions, while reproducibility refers to how consistent they are when the measurement is repeated under changed conditions (different operators, instruments, or environments). Both are a means to evaluate the stability and reliability of an experiment and are key factors in uncertainty calculations of measurements.
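As a simple illustration of how these two properties are typically quantified, here is a minimal sketch; the operators, readings and the simplified calculation are illustrative assumptions, and a full gauge R&R study involves considerably more machinery.

```python
# Minimal sketch of estimating repeatability and reproducibility from repeated
# measurements, assuming a simple setup: several operators each measure the same
# part several times. Operator names and readings are illustrative placeholders.
from statistics import mean, pstdev

# readings[operator] = repeated measurements of the same part by that operator
readings = {
    "operator_a": [10.02, 10.01, 10.03, 10.02],
    "operator_b": [10.05, 10.06, 10.04, 10.05],
    "operator_c": [10.00, 10.01, 10.02, 10.01],
}

# Repeatability: spread of each operator's own repeated readings (same conditions),
# summarised here as the average within-operator standard deviation.
repeatability = mean(pstdev(values) for values in readings.values())

# Reproducibility: spread between operators' average readings (changed conditions).
reproducibility = pstdev([mean(values) for values in readings.values()])

print(f"repeatability (within-operator spread):    {repeatability:.4f}")
print(f"reproducibility (between-operator spread): {reproducibility:.4f}")
```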
GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.
GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.
GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.
Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.
If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.
We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. Gathering user feedback, and iterating GenAI models based on that feedback, is also encouraged.
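As an illustration of what such a monitoring feedback loop can look like at its simplest, here is a hedged sketch that checks whether generated outputs mention flagged terms at very different rates across otherwise-identical prompt groups; the term list, groups and threshold are invented for illustration and are not a complete bias-evaluation methodology.

```python
# Minimal sketch of a monitoring check for one narrow form of bias: whether the
# share of flagged terms in generated text differs noticeably across groups.
# The flagged terms, sample outputs, and threshold are illustrative assumptions.
FLAGGED_TERMS = {"unreliable", "unsuitable", "incapable"}  # placeholder terms

def flagged_rate(texts: list[str]) -> float:
    """Fraction of generated outputs containing at least one flagged term."""
    hits = sum(any(term in text.lower() for term in FLAGGED_TERMS) for text in texts)
    return hits / len(texts) if texts else 0.0

def bias_gap(outputs_by_group: dict[str, list[str]]) -> float:
    """Largest difference in flagged-term rate between any two groups."""
    rates = {group: flagged_rate(texts) for group, texts in outputs_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Example: generated copy produced from prompts that differ only by group.
outputs = {
    "group_a": ["Candidates from this region are highly capable.", "A strong fit."],
    "group_b": ["Applicants here are often unreliable.", "May be unsuitable."],
}

GAP_THRESHOLD = 0.2  # assumed tolerance; route to human review above this
if bias_gap(outputs) > GAP_THRESHOLD:
    print("Potential bias detected: route these outputs to a human reviewer.")
```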
Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.
Avoiding AI hallucinations entirely is challenging, but there are several actions that can be taken to limit their occurrence or minimize their impact. Let’s touch on a few:
- Ensure that your AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world.
- Preprocess and clean the training data to remove inaccuracies, outliers, or misleading information, improving the quality and reliability of the model’s outputs.
- Continuously evaluate and monitor the model’s outputs to identify instances of hallucination or generation of unrealistic content.
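A minimal sketch of the last of these steps follows, under the assumption that a curated reference corpus exists to check generated claims against; the corpus, the claims and the similarity threshold are illustrative only.

```python
# Minimal sketch of one monitoring step: checking whether claims in a generated
# answer can be grounded in an approved reference corpus before the answer is
# released. Corpus content, claim splitting, and threshold are simplified assumptions.
from difflib import SequenceMatcher

REFERENCE_CORPUS = [
    "Line 3 uses the model X-200 filling machine.",
    "The X-200 requires maintenance every 500 operating hours.",
]  # stand-in for a curated, validated knowledge base

def grounded(claim: str, corpus: list[str], threshold: float = 0.6) -> bool:
    """True if the claim is sufficiently similar to at least one reference entry."""
    return any(
        SequenceMatcher(None, claim.lower(), ref.lower()).ratio() >= threshold
        for ref in corpus
    )

generated_claims = [
    "The X-200 requires maintenance every 500 operating hours.",  # supported
    "The X-200 was certified for food-grade use in 2021.",        # unsupported
]

for claim in generated_claims:
    status = "grounded" if grounded(claim, REFERENCE_CORPUS) else "flag for review"
    print(f"{status}: {claim}")
```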
Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures
Evolving Challenges
Because they involve generating new and original content without explicit programming, proving the reliability of GenAI models can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.
Commonly used methods include defining and applying appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments in which reviewers rate the quality and reliability of outputs.
For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.
Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
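To illustrate the test set validation approach mentioned above in its simplest form, here is a hedged sketch that scores a copilot-style assistant against a small set of expected-keyword checks; the generate() placeholder, the test cases and the pass threshold are assumptions for illustration only.

```python
# Minimal sketch of test-set validation for a copilot-style assistant, assuming a
# generic generate(prompt) function wrapping whatever model is being assessed.
# Test cases, keyword checks, and the pass threshold are illustrative assumptions.
def generate(prompt: str) -> str:
    """Placeholder for the GenAI system under test."""
    return "Check the torque settings and inspect the seal before restarting."

TEST_CASES = [
    # (input scenario, keywords the answer is expected to contain)
    ("Pump P-101 is vibrating after restart. What should the operator check?",
     ["torque", "seal"]),
    ("Summarise the shutdown procedure for line 3.",
     ["shutdown", "lockout"]),
]

def run_validation(cases, pass_threshold: float = 0.9) -> bool:
    """Score the assistant against the test set and compare with the threshold."""
    passed = 0
    for prompt, expected_keywords in cases:
        answer = generate(prompt).lower()
        if all(keyword in answer for keyword in expected_keywords):
            passed += 1
    score = passed / len(cases)
    print(f"passed {passed}/{len(cases)} cases (score {score:.2f})")
    return score >= pass_threshold

if not run_validation(TEST_CASES):
    print("Model fails the validation set: hold back from production use.")
```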
The Sky is the Limit — For Now
In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.
To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.
Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.
Beware the Competition!
GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.
But beware: do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.
At the same time, do not underestimate the risk of intellectual property (IP) leakage: the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your overall AI and data governance. This should include the removal or anonymization of sensitive or proprietary information from training data sets.
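As one example of what such a prevention mechanism might include, here is a minimal sketch of scrubbing obvious identifiers and marked-confidential passages from documents before they reach a training set; the regex patterns and the confidentiality-marker convention are illustrative assumptions, not a substitute for a proper data-loss-prevention service.

```python
# Minimal sketch of one IP-leakage control: removing marked-confidential passages
# and replacing identifiers with placeholders before documents enter a training set.
# Pattern formats (part numbers, customer IDs, [CONFIDENTIAL] markers) are assumed.
import re

PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "part_no":     re.compile(r"\bPN-\d{4,}\b"),      # assumed internal part-number format
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),     # assumed customer-ID format
}
CONFIDENTIAL_MARKER = re.compile(r"\[CONFIDENTIAL\].*?\[/CONFIDENTIAL\]", re.DOTALL)

def scrub(text: str) -> str:
    """Remove marked-confidential blocks and replace identifiers with placeholders."""
    text = CONFIDENTIAL_MARKER.sub("[REDACTED]", text)
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

raw = ("Contact jane.doe@example.com about PN-88213 for CUST-004521. "
       "[CONFIDENTIAL]margin is 34%[/CONFIDENTIAL]")
print(scrub(raw))
# -> "Contact <email> about <part_no> for <customer_id>. [REDACTED]"
```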
As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.
Apple Vision Pro Headset: A Step Towards a True Mixed Reality
Several years after the introduction of watchOS in 2014, Apple is once again setting its sights on revolutionising a technology that has yet to fulfil its potential. While augmented reality (AR) and virtual reality (VR) are not new, they have been subject to the unpredictable nature of product launches, with numerous companies transitioning from pioneers to underachievers in double-quick time.
Nearly 350 AR and VR headsets have been launched in the past 10 years. Each brand has presented its own vision of AR and VR, only to fall short of lofty expectations. How many times have we eagerly embraced a new device, anticipating its transformative impact on our lives, only to be swiftly let down again and again?
Why will it be different this time? And why is this announcement so important?
The Revolution of Technology
The significance of this announcement lies in the anticipation surrounding tech companies’ efforts to revolutionise the next generation of user interfaces.
Throughout much of the latter half of the 20th century, keyboards were the primary means of interacting with digital content. But we have since witnessed the rise and widespread adoption of the mouse, touch interfaces, multitouch, voice control and voice assistants, with Apple playing a leading role in advancing some of these. Over the years, various organisations have explored immersive technologies and in the past decade VR and AR have become accessible to both consumers and businesses.
No single consumer electronics brand has managed to truly transform our interaction with digital content, however. This is what Apple aims to achieve with the Vision Pro — and it has started with a bang.
Why Vision Pro Is a Game-Changer
I was lucky enough to experience the Vision Pro hands-on. This is a product that truly lives up to the expectations set out in the keynote. Every aspect of the device is extraordinary: the image quality, the eye tracking and hand gestures, the immersive 3D spatial photos and content, the FaceTime conversations with 3D holograms, the way it blends the virtual with the real world through EyeSight, the user-friendly interface, and the luxurious feel of a meticulously crafted device.
With the Vision Pro, Apple has revolutionised AR and VR experiences, delivering a device that surpasses any other headset I’ve ever tested. This ground-breaking product has propelled the world of augmented and virtual reality to a completely different level.
Over the past decade, the collective expenditure on VR and AR headsets has exceeded $21 billion, while the number of headsets shipped has reached 59 million. The market is poised for even greater expansion, thanks to Apple’s entrance, which is expected to ignite widespread adoption and compel competitors to enter the segment.
We forecast that combined shipments of AR, VR and mixed-reality (MR) devices will skyrocket to 97 million units between 2023 and 2027, generating estimated revenue of $49 billion.
Vision Pro Potential in Business
While Apple emphasised its consumer-focused approach during the keynote, the company must expand its vision beyond just the consumer segment. Gaming has traditionally dominated the VR landscape, and this is likely to continue in the coming years. But there is an emerging potential for commercial applications as enterprises seek ways to minimise expenses and enhance customer satisfaction. By 2027, training, collaboration and improving customer experience will account for more than 52% of overall expenditure on MR hardware.
Similarly, AR has predominantly catered to enterprise users for troubleshooting, product development and design purposes. But there is also a rising consumer market opportunity for personal productivity and entertainment.
To realise this potential, Apple will need to mobilise its extensive developer community. Given the size of that community, the company is well positioned to drive content creation, which will be pivotal in reaching a significantly broader customer base.
Vision Pro Is Expensive — But the Benefits Are Clear
The Vision Pro is not cheap, but focusing only on its cost overlooks the main benefit. The product is not designed to generate long lines outside stores on launch day.
Instead, it will be a platform for content creators to unlock their creativity and seize new opportunities. Just as the iPad empowered developers to leverage a larger screen for innovative applications, the Vision Pro delivers a flawless, intuitive and immersive experience to end users — critical for developers to focus on content opportunities and not on product glitches.
Developers want a device that enables them to offer premium and familiar experiences to users, while enterprises see the potential of MR in reducing costs across areas such as product development, training, industrial maintenance and emergency response. Embracing MR can also enhance collaboration and improve customer experiences.
Enterprises and developers need a high-quality device with exceptional specifications that empowers them to deliver outstanding experiences, all while minimising costs. The Vision Pro does just this.
For consumers, the Vision Pro offers innovative ways to engage with digital content. Although we can access content on various screen sizes, an exceptional experience often requires the optimal screen size. This often leads to compromising mobility to enhance the experience, as only smartphones, iPads and laptops offer truly mobile screens.
For instance, while movies can be enjoyed on smartphones, a larger screen in a theatre provides a significantly better viewing experience. In the workspace, working with multiple displays boosts productivity compared to relying on a single laptop screen. But users can’t carry multiple screens around when they change locations. AR experiences can also be accessed via smartphones or tablets, but the ability to view content hands-free is a major enhancement to the overall experience.
For years, MR headsets have promised such features. The ability to individually access all desired displays for each specific experience is not a novel concept. But while other companies have made promises and only partially delivered on them, primarily in gaming and in limited commercial applications, Apple is now delivering what many players in the space acknowledge only it can deliver.
Three Improvements for Vision Pro?
Despite its disruptive nature, there is still room for improvement with the Vision Pro:
- Comfort. After using it for 30 minutes, I found myself wondering whether I could comfortably wear the device for a few hours. It was heavier than I’d thought, though that’s understandable considering the advanced technology it incorporates.
- Eye fatigue. Another consideration is that the device essentially “glues” a screen to our eyes, so eye fatigue could be an issue. Users should be careful and look at ways to minimise discomfort during prolonged use.
- Personal interactions. While EyeSight is one of the headset’s standout features, enabling users to connect with others without having to remove the device, it does raise practical concerns. How many of us would truly engage in conversations by displaying a digital representation of our eyes? This may require further evaluation to determine its real-world utility and acceptance.
In summary, Apple has been a disruptive force across multiple categories and industries, transforming personal computers, music players, smartphones and watches, to name a few. Its innovative products have not only set the standard for their respective categories, but have also revolutionised our lives in unimaginable ways.
With the introduction of the Vision Pro, Apple is initiating the next revolution in personal technology.
Please reach out if you have any questions, or follow me on Twitter or LinkedIn.
