
Generative AI Hallucinations

Problem in GenAI

Sometimes generative AI perceives patterns that are imperceptible to people, leading to inaccurate output. These are GenAI hallucinations, much like humans seeing patterns in clouds in the sky.

Hallucinations are misinterpretations arising from several factors, including over-fitting, bias or inaccuracy in the training data, and the complexity of the large language model (LLM) itself.

Causes of hallucinations

Our AI team is working to reduce the occurrence of hallucinations by optimizing data pre-processing and refining model architectures. Generative pre-trained transformer (GPT) models still have limitations, and many researchers are working to reduce hallucinations by addressing their causes, including:

  • Social biases in the training data,
  • Conflicting indications in the training data,
  • Insufficient training data: a model not trained on a diverse and representative dataset lacks exposure to varied scenarios and contexts,
  • Over-fitting: the model is tuned too tightly to its training data,
  • Ambiguity in the training data: contradictory or ambiguous content misleads the model,
  • Data anomalies and outliers: anomalies in the training data can distort model behavior,
  • Model complexity: very complex models can sometimes produce unexpected results,
  • Non-contextual verification: LLMs cannot verify information against external sources or access real-time data.
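Over-fitting, one of the causes above, can be illustrated with a minimal curve-fitting sketch. The polynomial fit here is only a stand-in for a large model (this is not how LLMs are trained): a model with too much capacity memorizes the noise in its training data instead of the underlying pattern, and then produces confident but wrong values on new inputs.

```python
import numpy as np

# Underlying pattern: the line y = 2x, observed at 8 points with
# small alternating "noise" (fixed values, so the example is deterministic).
x_train = np.linspace(0, 1, 8)
noise = 0.1 * (-1) ** np.arange(8)
y_train = 2 * x_train + noise

# Held-out points with noise-free ground truth.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test

def fit_and_eval(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

simple_train, simple_test = fit_and_eval(1)    # matches the true pattern
complex_train, complex_test = fit_and_eval(7)  # one coefficient per point

# The over-fitted model reproduces the noisy training points almost exactly...
assert complex_train < simple_train
# ...but its oscillations miss the true line between the points.
assert complex_test > simple_test
```

The degree-7 polynomial passes through every noisy training point (near-zero training error), yet generalizes worse than the simple line: its extra capacity was spent modeling noise. The same trade-off, at vastly larger scale, is one reason an over-fitted language model can output fluent but fabricated content.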

European Ethical AI

GenAI hallucinations cause real problems: they spread disinformation, raise ethical issues, and produce misleading content.

It is crucial to improve the quality of the data and algorithms, and to invest in research and development, to make GenAI ethically sound and safe to use.

Semlab works in close collaboration with other researchers and developers across European industry and academia. Semlab has joined R&D efforts to mitigate hallucinations and maximize the societal benefits of LLMs while minimizing bias and misleading content, working towards a European Ethical AI.
