January 8, 2025

What makes a good AI support agent?


By OpenCall Team

AI keeps getting better and better at simulating conversations.

But even as AI improves, how do we know when an AI is as good as a human? Sometimes a particularly smart bot will impress us with its answer to a question. Other times it'll kick us in the teeth with an incoherent or nonsensical response.

Debating whether AI is as good as human customer service is beside the point. There are millions of interactions where humans would just as soon be routed to an IVR anyway. Our focus at OpenCall is judging AI agents by their ability to manage entire end-to-end conversations, not by a single brilliant or stilted exchange.

But understanding an AI's weaknesses and failure patterns is as important as celebrating its strengths. This is where our customers come in as extra data points. When we deploy a new agent, we track not just the overall automation rate but also the breakdown of outliers. What do customers most often call about when the AI fails? What edge cases come up that we didn't anticipate, and why? This is how we give our customers germane insights into their clients' behavior.
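As a rough sketch of what that bookkeeping can look like (the field names below, like resolved_by_ai and failure_reason, are illustrative placeholders rather than our actual schema):

```python
from collections import Counter

def summarize_calls(calls: list[dict]) -> dict:
    """Overall automation rate plus a breakdown of why calls fell through."""
    total = len(calls)
    automated = sum(1 for c in calls if c.get("resolved_by_ai"))
    # Group the calls the agent couldn't finish by the reason it handed off.
    failures = Counter(
        c.get("failure_reason", "unknown")
        for c in calls
        if not c.get("resolved_by_ai")
    )
    return {
        "automation_rate": automated / total if total else 0.0,
        "top_failure_reasons": failures.most_common(5),
    }

calls = [
    {"resolved_by_ai": True},
    {"resolved_by_ai": False, "failure_reason": "insurance question"},
    {"resolved_by_ai": False, "failure_reason": "insurance question"},
    {"resolved_by_ai": True},
]
print(summarize_calls(calls))
# {'automation_rate': 0.5, 'top_failure_reasons': [('insurance question', 2)]}
```

The useful part isn't the automation rate itself but the ranked list of failure reasons, which is what points us at the next edge case to handle.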

This sleuthing gives us invaluable feedback for revising the assistive logic to handle situations that stump it. It also reveals human biases baked into our own minds, assumptions we unwittingly pass along to machines: waiting room rules that will someday be obsolete, or geographic restrictions whose logical flaws the AI uncovers. Putting humans and AI in helpful dialogue with each other has a huge cognitive upside.

Which brings us to the second crucial piece: letting humans guide customer conversations as much as machines do. If a customer-assistance AI isn't state-of-the-art, that's okay. The key is being understandable to humans and preventing customer frustration.

We find that customers do a great job scrubbing AI biases out of the logic branches that get exercised 90% of the time, while the AI does what AI does best: ferreting out obscure corner cases that slip past human expectations. Human oversight ensures the bot doesn't devolve into a full-blown parody of helpfulness.

Ultimately we judge AI success on the customer's terms: did it help, and did they have a good experience? Not: did it correctly recognize every word they spoke?
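In code, that judgment can be as simple as the sketch below (the fields and threshold are invented for the example, not a real OpenCall metric); the point is that transcription accuracy never enters the verdict.

```python
# Hypothetical outcome-based scoring: "issue_resolved", "satisfaction",
# and "word_error_rate" are illustrative labels only.
def call_succeeded(call: dict) -> bool:
    return bool(call.get("issue_resolved")) and call.get("satisfaction", 0) >= 4

call = {"issue_resolved": True, "satisfaction": 5, "word_error_rate": 0.12}
print(call_succeeded(call))  # True, despite the imperfect transcription
```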

Thanks for reading!
