
Many Businesses Don’t Trust AI and Poor Data Might Be to Blame

June 23, 2025

A new study reveals a glaring gap in corporate AI adoption: 42% of companies don’t trust their AI models’ outputs, yet only 58% have proper data observability systems in place. The findings, from data management firm Ataccama, suggest that unreliable AI performance may stem from unstructured, unmonitored data—not flawed algorithms.

The Trust Problem

Many businesses treat data oversight as an afterthought, applying observability tools reactively rather than proactively. This leads to fragmented governance, with siloed or low-quality data undermining AI reliability. As Ataccama CPO Jay Limburn notes, companies have “invested in tools but haven’t operationalized trust.” Effective programs require automated checks embedded across the entire data pipeline—from ingestion to AI deployment—to catch issues before they distort results.
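The idea of embedding automated checks across the pipeline can be sketched in a few lines. The check names, thresholds, and gating logic below are illustrative assumptions, not details from the Ataccama report; the point is that quality rules run at ingestion and block downstream AI steps, rather than being applied after a model has already consumed bad data.

```python
# Minimal sketch of an automated data-quality gate run at ingestion.
# Check names and rules are hypothetical examples, not from the report.
from dataclasses import dataclass, field


@dataclass
class QualityReport:
    total: int = 0
    failures: dict = field(default_factory=dict)  # check name -> bad-record count

    @property
    def passed(self) -> bool:
        return not self.failures


def validate_records(records, checks):
    """Run every named check over every record; tally failures per check."""
    report = QualityReport(total=len(records))
    for name, check in checks.items():
        bad = sum(1 for record in records if not check(record))
        if bad:
            report.failures[name] = bad
    return report


# Two common observability dimensions: completeness and validity.
checks = {
    "has_customer_id": lambda r: bool(r.get("customer_id")),
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

records = [
    {"customer_id": "c1", "amount": 10.0},
    {"customer_id": "", "amount": -5.0},  # fails both checks
]

report = validate_records(records, checks)
if not report.passed:
    # Gate the pipeline here: stop before flawed data reaches the model,
    # instead of debugging distorted outputs after deployment.
    print(f"blocked: {report.failures}")
```

In a real deployment these checks would run automatically at each stage (ingestion, transformation, model input), which is what "operationalized trust" implies: failures surface as alerts or hard stops, not as post-hoc investigations.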

Unstructured Data: The Hidden Hurdle

Traditional monitoring tools struggle with PDFs, images, and other unstructured data, which now account for a growing share of AI inputs as generative AI and retrieval-augmented generation (RAG) adoption spreads. Yet fewer than 30% of organizations currently feed such data into their models, leaving critical gaps in training coverage and output accuracy.

What’s Holding Companies Back?

  • Skills shortages in data governance
  • Budget constraints for advanced observability
  • Legacy systems ill-equipped for modern AI demands

The Bottom Line

Without clean, well-managed data, AI will remain a gamble for many firms. As Limburn puts it: “You can’t trust AI if you don’t trust your data first.” Companies that prioritize end-to-end data observability will likely pull ahead—while those that don’t risk costly missteps.

For actionable insights, Ataccama’s full report dives into benchmarks for building AI-ready data pipelines.