
Overcoming current generative AI challenges

Mick McNeil, Group CRO
October 07, 2024

Generative AI is at a critical juncture in its development. Since the launch of ChatGPT in 2022, the valuations of major AI firms and chip manufacturers have skyrocketed. But some are voicing concerns regarding the profitability and adoption of generative AI solutions.

In this climate, success or failure is contingent on organizations’ ability to overcome the current barriers to generative AI adoption. This is true for both AI businesses and organizations adopting AI applications to enhance their operations. One of these problems is technical, while the other comes down to resources. They are both, to some degree, solvable.

In the following sections, we break down the biggest challenges facing generative AI today. And, drawing from the Gartner Early Lessons in Building LLM-Based Generative AI Solutions report, we propose viable solutions.


Hallucinating LLMs

The tendency of large language models (LLMs) to “hallucinate” is a significant barrier to their adoption. LLMs can generate valuable content, but they sometimes produce factual errors and nonsensical output.

These issues have received considerable attention. “Hallucinate” was even chosen as Dictionary.com’s Word of the Year for 2023. This can make some businesses reluctant to deploy GenAI solutions, particularly if they are client-facing. They don’t want to risk negative user experiences.

Overcoming this technical challenge entirely isn’t straightforward. But there are ways to dramatically reduce the error rate of LLM outputs, and much of it comes down to data accuracy. “Garbage in, garbage out” applies here: poor-quality data undermines an LLM’s output. The solution is to provide models with accurate data at the right granularity.

Connecting LLMs with verifiable data sources (a process called “grounding”) can improve accuracy and reliability. Model responses are anchored to information that you know is valid. Similarly, retrieval-augmented generation (RAG) can be used to optimize LLM output by drawing from an authoritative knowledge base to inform its response. This is particularly useful for technical prompts where the model’s training data offers little to draw on.
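To make the pattern concrete, here is a minimal, illustrative sketch of the RAG flow in Python. It is not tied to any particular vector database or model provider: the knowledge base is a small in-memory list, the retriever is a naive keyword-overlap score, and call_llm is a hypothetical stand-in for whichever model endpoint you use. The point is simply that retrieved, trusted passages are injected into the prompt so the model’s answer is grounded in them.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The knowledge base, retriever and call_llm() are illustrative stand-ins:
# in practice you would use a vector store with embeddings and a real LLM API.

KNOWLEDGE_BASE = [
    "Grounding anchors model responses to verifiable source data.",
    "RAG retrieves passages from an authoritative knowledge base at query time.",
    "Poor-quality training or context data increases the risk of hallucination.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query (a toy retriever)."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(passage.lower().split())), passage)
        for passage in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for score, passage in scored[:k] if score > 0]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; replace with your provider's API."""
    return f"[model answer grounded in the prompt below]\n{prompt}"

def answer(query: str) -> str:
    """Build a grounded prompt from retrieved passages, then call the model."""
    context = retrieve(query)
    prompt = (
        "Answer using only the context below. "
        "If the context is insufficient, say so rather than guessing.\n\n"
        "Context:\n- " + "\n- ".join(context) + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How does RAG reduce hallucinations?"))
```

Even in this toy form, the key design choice is visible: the model is instructed to answer only from retrieved, trusted context, and to decline when that context is insufficient, which is what anchors its output.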

Ultimately, reducing AI hallucinations comes down to using relevant data and effective architecture. As necessary, models need to draw from trusted sources and/or knowledge bases to inform and fine-tune their responses. This will help build user trust in LLM content generation, driving adoption.


Lack of sustainable compute 

It’s no secret that generative AI processes are compute intensive. Sourcing information via an LLM is estimated to require between 100 and 1,000 times the compute of a conventional search engine query, for example. These models require the very latest chips, housed in data centers that are optimized for efficiency. But however efficient the computing infrastructure, the sheer scale of AI workloads brings substantial energy demands.

GenAI compute needs to be powered by renewables, but this is not always the case. Many data centers, for example, draw on a US energy mix of around 38% petroleum, 36% natural gas and just 9% renewables. To address valid concerns around sustainability, the industry needs to do better.

Businesses that invest in AI tools, and the consumers who use them, don’t want that to come at the expense of the planet. The solution is to deliver cloud compute that runs solely on renewables like hydro and solar. That’s the approach Northern Data Group took when engineering the European data centers for our generative AI cloud platform. Green infrastructure is vital to achieving a robust, ethical generative AI ecosystem for the long term.


Go deeper

For many businesses, incorporating generative AI tools into their operations in a way that is both reliable and sustainable is proving more difficult than anticipated. From “hallucinations” to the energy demands of AI models, the barriers to deployment and adoption are real.

But as our understanding evolves, so too do the solutions. And there are considerable opportunities for businesses that can successfully integrate AI models into their operations. As the industry continues to mature, the benefits will compound.

You can learn about hallucinations, grounding, RAG and more in the full Gartner Early Lessons in Building LLM-Based Generative AI Solutions report. And look out for our next blog in this series, which explores how to prepare your workforce for working with generative AI solutions.

Download the report