
The chatbot was impressive in demos. Natural language, quick responses, 24/7 availability. Then it gave a customer incorrect financial advice, and the compliance team shut it down.

AI in customer experience is not a technology problem. It is a risk management problem. Here is how to do it without the lawsuits — and without abandoning the opportunity.


Rule 1: Never let AI make irreversible decisions

AI can suggest. It can summarise. It can route. It cannot approve, commit, or finalise.

Every AI touchpoint needs a human checkpoint before anything changes: no account closures without human review, no credit decisions without oversight, no policy cancellations without confirmation.
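The checkpoint pattern can be sketched in a few lines. This is a minimal illustration, not a prescription; the action names and the approval interface are hypothetical, and the real list of irreversible actions should come from your compliance team.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical action categories -- in practice, compliance defines this list.
IRREVERSIBLE = {"close_account", "cancel_policy", "approve_credit"}

@dataclass
class ProposedAction:
    kind: str
    customer_id: str
    details: dict = field(default_factory=dict)

def execute(action: ProposedAction,
            human_approval: Callable[[ProposedAction], bool]) -> str:
    """The AI may propose any action, but irreversible ones
    only execute after an explicit human sign-off."""
    if action.kind in IRREVERSIBLE and not human_approval(action):
        return "queued_for_review"
    return "executed"
```

The point of the structure: the AI never holds the authority to finalise; it can only hand a proposal to a gate that a human controls.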

This slows things down. It also keeps you compliant, and in business.


Rule 2: Design for when it fails, not when it works

Your AI will hallucinate. It will miss context. It will confidently give the wrong answer to the right question.

Your design must assume this happens weekly, not never. Build clear escalation paths. Implement automatic confidence scoring. Route to human agents when certainty drops below threshold.
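A threshold-based escalation path is simple to express. The sketch below assumes a single confidence score and a fixed cutoff; both are hypothetical placeholders, and in a real deployment the threshold is tuned per use case with your risk team.

```python
# Hypothetical cutoff -- calibrate against real transcripts before trusting it.
ESCALATE_BELOW = 0.75

def route(intent: str, confidence: float) -> str:
    """Send the conversation to a human agent whenever
    model certainty drops below the agreed threshold."""
    if confidence < ESCALATE_BELOW:
        return "human_agent"
    return f"ai_flow:{intent}"
```

Note that the failure path (`human_agent`) is the one worth testing hardest: it is the branch that runs when the model is wrong.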

Test your failure paths more thoroughly than your success paths. Most teams do the opposite.


Rule 3: Document everything the AI says

In regulated industries, "the AI said so" is not an audit trail. You need complete logs: what the customer asked, what the AI responded, what the human approved, and why.
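The four fields above map directly onto a log record. A minimal append-only version might look like this; the field names and file format are illustrative assumptions, not a standard.

```python
import datetime
import json

def log_interaction(question: str, ai_response: str,
                    human_decision: str, rationale: str,
                    path: str = "audit.log") -> dict:
    """Append one complete interaction record: what the customer asked,
    what the AI said, what the human decided, and why."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "customer_question": question,
        "ai_response": ai_response,
        "human_decision": human_decision,
        "rationale": rationale,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

One JSON line per exchange is enough to answer the auditor's first question: show me exactly what was said, and who approved it.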

This sounds obvious. Most implementations skip it to save time. Then the first complaint arrives, and no one can explain what happened.


Where AI works well in CX — and where it doesn't

Safe territory: intent classification and routing, summarising previous interactions for agents, drafting responses for human approval, answering FAQs with verified, unchanging content.

Avoid: anything requiring interpretation of policy, financial or medical advice, complaint handling, anything with legal consequences.

Most vendors demo the first category and imply they can handle the second. They cannot — not yet, and not safely.


The corridor worth finding

We do not help clients deploy AI everywhere. We help them find the narrow corridor where AI genuinely improves customer experience without creating unacceptable regulatory or reputational risk.

Usually that corridor is about 20% of what the vendor promised. But that 20% actually ships, actually works, and actually survives the first audit. That is a much better return than 100% ambition and a compliance shutdown.