No Brainer AI Podcast

NB30 – Addressing the AI Hallucination Problem

Special guest Eyelevel.ai CEO Neil Katz joins Geoff and Greg to discuss generative AI’s hallucination problem. The propensity of large language models (LLMs) to hallucinate in their answers remains one of the biggest barriers to enterprise adoption. Eyelevel boasts 95% accuracy on private-instance responses, using its APIs and tools to prepare proprietary data for LLM consumption.

The trio dives into the LLM marketplace, including why brands choose to implement a private instance, how the LLM market has evolved, and what causes the hallucination problem. Then they discuss the enterprise data problem and how retrieval-augmented generation (RAG) techniques still need additional help to strengthen LLM responses.
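For listeners new to RAG, here is a minimal sketch of the pattern discussed in the episode: retrieve the most relevant passages from proprietary documents, then ground the model’s prompt in them so it answers from that context rather than guessing. Everything here is illustrative, not Eyelevel’s implementation; the keyword-overlap retriever, sample corpus, and prompt wording are assumptions standing in for production components like embeddings, chunking, and reranking.

```python
# Minimal RAG sketch (illustrative only, not Eyelevel's implementation).
# Retrieval here is a toy keyword-overlap score; production systems use
# embeddings, chunking, and reranking -- the "data preparation" the episode covers.

def score(query: str, passage: str) -> int:
    """Count query words that appear in the passage (toy relevance score)."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the LLM in retrieved context and instruct it not to guess."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # Hypothetical airline-style FAQ snippets, in the spirit of the
    # Air France example from the episode (made-up content).
    corpus = [
        "Checked baggage allowance is one 23 kg bag on economy fares.",
        "Flight changes within 24 hours of booking are free of charge.",
        "Lounge access is included with business class tickets.",
    ]
    query = "How many bags can I check on an economy ticket?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)  # send this prompt to your LLM of choice (call omitted here)
```

The key design point the episode returns to: RAG reduces hallucination only as well as the retrieved context allows, which is why preparing enterprise data before retrieval matters so much.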

Chapters:

  • 0:00 Start
  • 4:40 Private instance versus licensing enterprise editions of LLMs
  • 7:32 Eyelevel’s Air France implementation achieving 95% success rates
  • 12:29 The need for enterprise data preparation
  • 18:14 The hallucination problem with LLMs and RAG approaches
  • 28:23 How governance can or cannot help enterprises
  • 32:32 Why some use open source versus proprietary LLMs
  • 39:12 The future of AI and an incredible vision

Learn more about Eyelevel at https://www.eyelevel.ai/ or contact Neil Katz via LinkedIn at https://www.linkedin.com/in/neilkatz.

Geoff Livingston - AI Expert, Marketing Pioneer, Evalueserve

Connect with Geoff on LinkedIn and get more of his AI insights on Medium.

Greg Verdino - AI Expert, Marketing Agency Owner, Futurist Keynote Speaker

Find Greg at his website, on LinkedIn and Twitter.

Join us every two weeks as we look at the latest news, trends, and hot topics in the world of AI, and put them all into perspective for marketing leaders.

Or grab your phone and subscribe in your favorite podcast app.