
AI Hallucinations, RAG & Software Licensing Risks: Part 2

If Artificial Intelligence (AI) hallucinations are the potholes in the race to adopt AI in your Enterprise Resource Planning (ERP) system, then Retrieval-Augmented Generation (RAG) might be the steering upgrade needed to avoid crashing, or at the very least, veering off track. By combining the predictive power of Large Language Models (LLMs) with verified data, RAG helps ground AI in the facts. In Part 2 of our two-part blog series on AI hallucinations, we explain what RAG is, how it can help, and why it will still require scrupulous management to ensure your company stays on course for a smooth ERP ride.

Understanding Retrieval-Augmented Generation (RAG)

To understand RAG, we first need to define LLMs. Traditional LLMs, like ChatGPT, come up with answers based on patterns learned during training. The problem with this (as we discussed in Part 1 of our blog series) is that when an LLM is faced with incomplete data, it might wing it and end up outputting hallucinations rather than facts.

RAG can be likened to giving AI a reliable map. If you wanted to make a map of the United States, for example, you would not want or need data from European countries included in the map-making. It would be irrelevant information. Similarly, RAG focuses only on the relevant information provided by a company, placing guardrails on what information can be accessed and allowing AI to retrieve facts from that company’s own trusted data sources.

How might this play out in real life? A RAG-enabled system analyzing licensing agreements, for example, could look up contracts and clauses in-house and dramatically reduce the risk of false or misleading information. And that equates to a better bottom line, because for companies managing complex ERP environments, one misinterpreted or misconstrued licensing agreement could create compliance mayhem.
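
To make the idea concrete, here is a minimal sketch in Python of how a RAG-style lookup over in-house licensing clauses might work: a handful of clauses are scored against a question by simple word overlap, and the best matches are placed into a prompt that instructs the model to answer only from that retrieved text. The clauses, the scoring, and the prompt wording are illustrative assumptions on our part, not a production retrieval pipeline (which would typically use embeddings and a vector database).

```python
# Illustrative RAG-style lookup over in-house licensing clauses.
# The documents, scoring, and prompt text are hypothetical examples.

CLAUSES = {
    "java-desktop": "Employee-based licensing applies to all desktop Java installs.",
    "erp-indirect": "Indirect access to the ERP system requires a named-user license.",
    "audit-notice": "The licensor must give 30 days' written notice before an audit.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k clauses sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CLAUSES.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved clauses instead of its training data."""
    context = "\n".join(f"- {clause}" for clause in retrieve(question))
    return (
        "Answer using ONLY the clauses below. "
        "If the answer is not in them, say you do not know.\n"
        f"Clauses:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("How much notice is required before a license audit?"))
```

The point of the pattern is that the prompt, rather than the model's memory, becomes the source of record, so any answer can be traced back to a named clause.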

However, RAG is not a panacea. Like any mapping system, RAG is only as reliable as the information it pulls from. If the source data is outdated or incomplete, for example, the best steering in the world will not lead you to the correct destination.
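
As a small illustration of that caveat, a retrieval layer can at least flag sources that have not been reviewed recently before they are used to answer questions. The document names, dates, and one-year threshold below are assumptions made for this sketch, not features of any particular RAG product.

```python
from datetime import date, timedelta

# Hypothetical metadata for in-house source documents.
SOURCES = [
    {"name": "Oracle master agreement", "last_reviewed": date(2025, 9, 1)},
    {"name": "Legacy ERP addendum", "last_reviewed": date(2022, 3, 15)},
]

MAX_AGE = timedelta(days=365)  # assumed freshness policy: reviewed at least yearly

def stale_sources(today: date = date(2025, 11, 19)) -> list[str]:
    """Return source names whose last review is older than the policy allows."""
    return [s["name"] for s in SOURCES if today - s["last_reviewed"] > MAX_AGE]

print(stale_sources())  # -> ['Legacy ERP addendum']
```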

Guardrails Matter in the Age of AI

Even with RAG in place, AI still needs to be managed and kept in check. For one thing, hallucinations remain a possibility. TechTarget has compiled a comprehensive list of RAG limitations, naming eight problems that range from missing or irrelevant data to the need for performance monitoring. There can be data privacy concerns as well, along with the potential for copyright violations, as reported by Forbes:

First, Dow Jones and NYP Holdings is arguing that there’s a violation in Perplexity putting massive amounts of news into its RAG repository, the database that it uses to feed the LLM.

Next, the plaintiffs are claiming that some of the LLM’s results “contain full or partial verbatim reproductions of Plaintiffs’ copyrighted articles.” There’s anecdotal evidence of users getting Perplexity to spit out whole NYP articles verbatim, which doesn’t look good for the defendant.

The third argument is also laid out well in the filing docs, where plaintiffs allege that the LLM is “generat[ing] made-up text (hallucinations) in its outputs and attribut[ing] that text to Plaintiffs’ publications using Plaintiffs’ trademarks.”

In other words, the LLM is essentially putting words in the mouths of the plaintiffs, which sounds like potential defamation. The court document stipulates:

“Plaintiffs also provide photo examples of hallucinations generated by Perplexity, in which Perplexity ‘fabricated’ information not actually contained in the New York Post and Wall Street Journal articles that Perplexity cited. Plaintiffs claim that hallucinations such as these are ‘likely to cause confusion or mistake’ for consumers.”

Companies need to be able to answer the following questions (one way to put the answers into practice is sketched after the list):

  • Which company data is available for AI tools to access?
  • What verification guardrails are in place?
  • Who in your company will sign off on AI-assisted findings?
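
One way to make those answers actionable is to encode them as configuration that the retrieval layer enforces. The source names, roles, and fields below are invented purely for illustration.

```python
# Hypothetical governance settings for a RAG deployment; all values are illustrative.
GOVERNANCE = {
    "allowed_sources": {"contracts", "license_entitlements", "purchase_orders"},
    "verification": "Every answer must cite at least one retrieved clause.",
    "sign_off": "Software Asset Management lead",
}

def may_access(source: str) -> bool:
    """Only allowlisted company data is exposed to the AI tool."""
    return source in GOVERNANCE["allowed_sources"]

print(may_access("contracts"))        # True
print(may_access("hr_payroll_data"))  # False: outside the guardrails
```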

RAG systems, with proper governance, can help keep AI hallucinations from derailing your ERP journey, but a human must always remain behind the wheel. As always, let us know how we can help in the process!

Published on November 19, 2025
