What You Need to Know About AI in Risk and Compliance
I read it so you do not need to
Parker & Lawrence Research just published a comprehensive study on generative AI in risk and compliance, and buried in its 65 pages are some findings you really ought to know about. I’ve read through it so you don’t have to, and I want to highlight what matters.
Five Key Takeaways
Before we dive into the details, here’s what you need to remember from this research:
Most AI investments are failing spectacularly. Between 74% and 95% of organisations are seeing zero return on their generative AI spending. This isn’t because the technology doesn’t work. It’s because firms are deploying it on low-value tasks that don’t justify the cost.
Playing it safe is actually the risky strategy. Organisations have defaulted to low-risk applications like document summarisation and email drafting. These feel safe but deliver minimal value. The paradox is that avoiding risk in use-case selection creates financial risk through poor returns.
Data quality is the biggest blocker. More than 45% of survey respondents cited data quality and availability as their primary barrier to AI adoption. You can’t build reliable AI on unreliable data. Provenance, lineage, and governance matter more than ever.
Elite risk management is what separates winners from losers. The organisations getting genuine returns from AI are those with mature risk frameworks. They embed risk expertise in technical teams, build transparency from the start, and use their governance capabilities to pursue higher-value applications confidently.
Risk and compliance has become a strategic function. These teams now determine how fast and far organisations can move with AI innovation. They’re not blockers holding back progress. They’re enablers making bold moves possible. That’s a fundamental shift in how we should think about these disciplines.