In the race to integrate Artificial Intelligence (AI) into their business operations, CEOs are putting the pedal to the metal. AI is working its way into companies’ daily operations, and the pressure on CEOs to build it into their business plans has never been greater; a recent survey shows that most CEOs even fear losing their jobs over AI performance pressures. This frantic rush to streamline operations with AI makes it more important than ever for companies to maintain control over their software licensing compliance. And while skillful steering may help CEOs avoid some of the hazards on the AI race track, one obstacle is the equivalent of a brick wall: AI hallucinations, outputs that seem accurate but are, in fact, false or fabricated.
AI hallucinations occur when a model generates information that sounds perfectly believable but is untrue, misleading, or completely fabricated. With hallucination cases making headlines around the world, CNET reported in September of this year:
“A New York lawyer used ChatGPT to draft a legal brief that cited nonexistent cases, leading to sanctions. Google had its fair share of mishaps, too. During its launch demo, Google's Bard (now called Gemini) once confidently answered a question about the James Webb Space Telescope with incorrect information, wiping billions off Alphabet's stock value in a single day.
“Google's second fiasco was Gemini's attempt to show racial diversity, which was part of an effort to correct for the AI bot's past issues of underrepresentation and stereotyping. The model overcompensated, generating historically inaccurate and offensive images, including one that depicted Black individuals as Nazis.
“And who can forget the notorious AI Overviews flop, when it suggested mixing non-toxic glue into pizza sauce to keep the cheese from sliding, or saying eating rocks is good because they are a vital source of minerals and vitamins?
“Fast forward to 2025, and similar blunders hit headlines, like ChatGPT advising someone to swap table salt with sodium bromide, landing him in the hospital with a toxic condition known as bromism. You'd expect advanced AI models to hallucinate less. However, as we will see in the more recent examples, we are far from a solution.”
Again, from CNET:
“Large language models don't know facts the way people do, and they don't intend to deceive you. Mike Miller, senior principal product leader for agentic AI at AWS, tells CNET that when data is incomplete, biased or outdated, the system fills in the blanks, sometimes creating information that never existed.
"Hallucinations are inherent to the way that these foundation models work because they're operating on predictions," Miller says. "They're sort of trying to match the statistical probability of their training data."
“Hallucinations can also stem from vague prompts, overconfidence in statistical guesses and gaps in training material. And because most models are trained to respond conversationally, they tend to give polished answers even when they're wrong in their aim to please you.”
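To make Miller’s point about predictions concrete, here is a deliberately oversimplified sketch. The prompt, case name, continuations, and probabilities are all invented for illustration; no real model works on a table this small, but the core step is the same.

```python
import random

# Toy illustration only: made-up continuation probabilities for a single prompt.
# Real models weigh tens of thousands of possible tokens, but the basic move is
# the same: pick the next words in proportion to how likely they looked in the
# training data, with no separate check for whether the result is true.
continuations = {
    "was decided in 1987 by the Second Circuit.": 0.46,  # fluent, and invented
    "does not appear in any published reporter.": 0.31,  # honest, less 'fluent'
    "was decided in 1992 by the Ninth Circuit.": 0.23,   # also fluent, also invented
}

prompt = "The case Smith v. Acme Licensing Corp. ..."

def sample_continuation(options: dict) -> str:
    """Pick a continuation with probability proportional to its weight."""
    phrases = list(options)
    weights = [options[p] for p in phrases]
    return random.choices(phrases, weights=weights, k=1)[0]

print(prompt, sample_continuation(continuations))
```

In this toy setup, the confident-sounding citations are the most probable outputs, and nothing in the sampling step asks whether the case actually exists. That, in miniature, is how a brief ends up citing decisions no court ever issued.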
One could assume that the technology is simply too new, that it just needs some tweaking, and that it will eventually become accurate. But Computerworld, also in September of this year, reported news to the contrary:
“In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
“OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies. “Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.”
Many companies are experimenting with AI tools to manage their ERP environments, contracts, or licensing data. Software companies offering these services are quick to point out that managing software licensing compliance the ‘old way’ is archaic and too time-consuming.
But a hallucinating AI tool could produce inaccurate contract summaries or misinterpret Oracle’s complex licensing rules. Even worse, a company relying solely on AI during an Oracle audit could hand over flawed data that strengthens Oracle’s claims rather than protecting the company.
Lastly, a company that is not careful could end up either over-licensed or under-licensed. So, what should be done?
AI cannot supplant human judgment or legal expertise. For one thing, AI tools do not account for negotiation history or the nuances of individual Oracle agreements. Without guardrails in place, companies risk handing vendors the rope that will be used to hang them. Companies therefore need to view AI as a supplement – not a substitute – in managing licensing compliance.
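For readers who want a picture of what “supplement, not substitute” can look like operationally, here is a minimal sketch. The product name, counts, and field names are ours and purely hypothetical; this is not a real tool, a real Oracle entitlement, or anyone’s actual data.

```python
from dataclasses import dataclass

@dataclass
class LicensePosition:
    product: str
    licenses_owned: int  # entitlement count confirmed against the signed agreements

def review_ai_summary(ai_summary: dict, verified: list) -> list:
    """Compare AI-generated license counts against verified records and flag
    every product where the two disagree, so a person reviews the discrepancy
    before anything is shared with a vendor or an auditor."""
    flags = []
    for record in verified:
        ai_count = ai_summary.get(record.product)
        if ai_count is None:
            flags.append(f"{record.product}: missing from AI summary")
        elif ai_count != record.licenses_owned:
            flags.append(f"{record.product}: AI says {ai_count}, records say {record.licenses_owned}")
    return flags

# Toy data: the AI tool has quietly invented extra entitlements.
verified = [LicensePosition("Oracle Database EE", licenses_owned=40)]
ai_summary = {"Oracle Database EE": 64}
for flag in review_ai_summary(ai_summary, verified):
    print("NEEDS HUMAN REVIEW:", flag)
```

The design point, not the code, is what matters: AI output is treated as a draft, every disagreement with verified records is routed to a person, and nothing goes to a vendor or auditor until the company and its counsel have signed off.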
Forbes recently reported on how to manage AI hallucinations in the workplace. Below, we compiled a list of tips for managing AI use based on Forbes’ advice:
Beyond the points enumerated above, there is one more tip that we will cover in Part 2: how integrating Retrieval-Augmented Generation (“RAG”), one of the biggest business use cases for Large Language Models (“LLMs”), can help.
* * *
Companies should not rely solely on AI to manage their software licensing compliance. While it can be a useful addition to a compliance ‘toolbox,’ it should never stand alone. There is no substitute for organizations validating the findings themselves, with the help of trusted legal counsel. Let us know if we can help!
Published on November 7, 2025