A few days ago, I had to contact support at one of those “AI-first” tech companies about a subscription issue. What stayed with me was being routed first through an AI support layer that did a very good job of making me feel I was not really talking to a company that wanted to help me. Two hours later, frustrated and angry, I finally reached a human specialist, and the issue was solved in seconds. But by then, the service experience had already broken.
That is the risk I want to talk about here.
AI in customer experience is often discussed as a question of efficiency. Faster response times, lower cost, better availability, fewer repetitive tasks for support teams: those benefits are real, and the evidence base supports them. But the same evidence shows a consistent pattern: when AI becomes the visible front line of service rather than a bounded support layer, customer frustration rises, trust falls, warmth disappears, and the brand can start to feel less like a service organization and more like a gatekeeper.
McKinsey’s 2025 State of AI survey says organizations are actively rewiring workflows and expanding governance around generative AI, while inaccuracy is the most commonly reported AI-related risk. Deloitte’s 2025 consumer research shows genAI has gone mainstream, but consumers still want control, trust, and clarity about how these systems are used. Forrester’s 2026 predictions suggest the near-term upside in customer service will mostly come from better simple self-service, not from some dramatic, universal transformation.
The real question is not whether AI can sit somewhere in the customer journey, but where it should sit, what it should be allowed to do, and what kind of experience it creates when it stands between the customer and the company, or, worse, when it effectively becomes the company in the customer’s eyes.
AI is useful when the task is narrow
AI works best when the interaction is narrow, transactional, and low-stakes. In those cases, speed matters more than emotional nuance. A customer checking order status, resetting a password, or looking for a simple FAQ answer may be perfectly happy with an AI front end, especially if the answer is fast, accurate, and easy to verify. Reports show that customers do not reject AI in principle. They reject AI when it gets in the way of resolution, or when it is used as a substitute for care rather than as a support layer.
That is the first important design principle: AI should be selective, not universal.
Too many organizations treat AI as a branding layer on top of the whole service model. That is where things go wrong. The moment the AI becomes the first and main voice of the company, every failure is the brand speaking.
The most common failure modes are predictable
Research points to several repeating patterns. Customers often experience AI front ends as frustrating when the system cannot understand intent, loops through scripted responses, or blocks access to a human agent. The result is higher effort, more repetition, and a feeling that the customer is doing unpaid labor just to get basic help. In more relational or emotionally loaded situations, research finds that human agents outperform AI on warmth, trust, and retention.
That distinction matters. AI can be technically correct and still feel wrong. Anthropomorphized bots can backfire when expectations rise faster than capability, and customers in sensitive contexts may leave interactions feeling worse rather than better if the system is repetitive, mechanical, or tone-deaf.
A second failure mode is hallucination and policy misstatement. McKinsey’s 2025 survey identifies inaccuracy as the AI risk organizations most often experience. In customer service, an inaccurate output is not an internal glitch: it becomes a customer-facing statement, a policy answer, or a promise about what the company will do. If that answer is wrong, the brand owns the error.
A third failure mode is transparency. Deloitte’s consumer research suggests people are increasingly open to genAI in daily life, but they still want control and clear boundaries. Undisclosed AI can feel like a deliberate attempt to hide the company behind automation. Once that feeling appears, trust becomes harder to recover.
A fourth failure mode is privacy and security. Multiple reports point to growing customer concern about data use, model behavior, and security incidents. That is especially relevant in service contexts where customers disclose personal, financial, or emotionally sensitive information. If AI becomes the visible interface, the organization also needs to prove that it can handle that information responsibly.
There is one more failure mode that is often overlooked: the internal one. AI can distort the work of the service team itself. In practice, agents often spend extra time validating AI output, which can slow responses and create a new kind of operational drag. The company may end up with a cleaner-looking front end while the real work underneath becomes more complicated.
The hidden risk is false containment
This is the part that matters most from a CX and UX perspective.
AI front ends often create a false sense of containment. From the organization’s point of view, the issue is “handled” because the customer did not reach a human. From the customer’s point of view, the issue is not handled at all. It has only been delayed, rerouted, or made more expensive in terms of time and energy.
That is why AI in customer experience cannot be evaluated on containment alone. The relevant question is not how many contacts the bot absorbed. The relevant questions are: Did the customer get what they needed? Did the company reduce effort? Did the experience build trust or weaken it? Did the interaction feel like service, or like obstruction?
The difference is strategic.
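The gap between containment and resolution can be made concrete with a few lines of code. This is a minimal sketch with invented data and invented field names (`contained_by_bot`, `resolved`), purely to illustrate why the two metrics diverge; real service analytics would define "resolved" operationally, for example as no repeat contact within some window.

```python
# Hypothetical interaction log. Each record notes whether the bot
# "contained" the contact (the customer never reached a human) and
# whether the issue was actually resolved. All values are invented.
interactions = [
    {"contained_by_bot": True,  "resolved": False},  # delayed, not solved
    {"contained_by_bot": True,  "resolved": True},
    {"contained_by_bot": False, "resolved": True},   # a human resolved it
    {"contained_by_bot": True,  "resolved": False},
]

containment_rate = sum(i["contained_by_bot"] for i in interactions) / len(interactions)
resolution_rate = sum(i["resolved"] for i in interactions) / len(interactions)

# Containment looks healthy while resolution tells the real story;
# the gap between the two is the "false containment" described above.
print(f"containment: {containment_rate:.0%}, resolution: {resolution_rate:.0%}")
```

In this toy log, containment reads 75% while resolution is only 50%: a dashboard tracking only the first number would call the experience a success.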
What AI should do instead
The strongest argument for AI in customer experience is augmentation.
Used well, AI can help service teams summarize conversations, surface relevant knowledge, draft replies, classify requests, and route issues faster. It can improve the first layer of service without pretending to be the whole service. Research suggests that hybrid designs, where AI handles routine tasks and humans retain control over complex or emotional cases, are the safer and more effective pattern.
That is also where the current market seems to be heading. McKinsey says companies are expanding governance and training as part of broader AI rewiring. Forrester expects some meaningful improvement in simple self-service but emphasizes that the real work is still operational and foundational. In other words, the future is not AI replacing service. It is service being redesigned around AI with much more attention to controls, boundaries, and quality.
A practical mitigation model
If AI is going to front customer experience, a few things have to be true.
First, the scope has to be narrow enough for the system to be reliable. Low-complexity and low-emotional-stakes tasks are the natural fit. Anything ambiguous, high-stakes, or relational should have a fast path to a human. Customers value human warmth and nuanced judgment when the interaction matters.
Second, the escalation path has to be obvious, not dependent on the customer learning how to outsmart the bot. If AI is a front end, it must behave like a front door, not a wall.
Third, context must survive the handoff. One of the most irritating things about AI-mediated service is repeating the same information several times, rephrasing it as if crafting a better prompt, only to start again from the beginning once a human finally enters the conversation.
Fourth, the system must be transparent about what it is and what it is not. Over-humanizing a bot without matching the human promise is a good way to trigger disappointment when things go wrong.
Fifth, governance matters. This includes knowledge base quality, human review for risky outputs, clear rules for escalation, and continuous monitoring for inaccuracy, bias, and security issues. McKinsey’s current research shows companies know these risks are real; the gap is in how consistently they mitigate them.
And finally, the internal service culture has to remain human-centered. Employees should not be treated as the backup that is called in only after the AI has exhausted the customer. If AI is introduced as a way to make people smaller, customers will feel that shrinkage in the experience. If it is introduced as a way to let people do higher-value work where empathy and judgment matter, the experience can actually improve. Service culture always leaks into customer experience.
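The first three principles above, narrow scope, an obvious escalation path, and context that survives handoff, can be expressed as a simple routing policy. This is a hedged sketch, not a production design: the intent labels, thresholds, and the `route` function are all assumptions invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    # Context gathered so far; it must travel with any handoff
    # so the customer never has to repeat themselves.
    transcript: list = field(default_factory=list)
    intent: str = "unknown"
    failed_bot_turns: int = 0

# Hypothetical policy constants: the bot owns only narrow,
# low-stakes, transactional tasks.
BOT_SCOPE = {"order_status", "password_reset", "faq"}
HIGH_STAKES = {"billing_dispute", "cancellation", "complaint"}
MAX_FAILED_TURNS = 2  # escalate before frustration builds

def route(conv: Conversation) -> str:
    """Decide who handles the next turn: 'bot' or 'human'."""
    if conv.intent in HIGH_STAKES:
        return "human"  # relational or high-stakes: fast path to a person
    if conv.failed_bot_turns >= MAX_FAILED_TURNS:
        return "human"  # a front door, not a wall
    if conv.intent in BOT_SCOPE:
        return "bot"
    return "human"      # default to people for anything out of scope
```

The design choice worth noting is the default: anything the system cannot confidently place inside its narrow scope goes to a human, with the full `transcript` attached, rather than being retried by the bot.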
The design question
From a service design perspective, AI is a touchpoint technology. A powerful and useful one, but still only one layer in a larger experience system. This is why I do not think the central question is whether AI should front customer experience.
It should be this:
What kind of customer experience are we creating when AI becomes the first voice our customers hear?
If that voice feels like care, clarity, and fast access to help, AI can strengthen the brand. If it feels like cost-cutting, friction, and distance, it will weaken the brand faster than most leaders expect.
Customers do not remember architectures; they remember friction, relief, confusion, and care. They remember whether the path forward was visible, whether the system respected their time, and whether they felt like a person or like a case number.
And if the first touchpoint feels like a barrier instead of support, the journey has already begun to break. So has the relationship with the brand.
That is the real lesson from my recent support experience, and it is the lesson the research I conducted and the reports I read keep repeating in different forms. AI can improve service, or it can hollow it out. The difference is the design.
Links:
- McKinsey: The state of AI in 2025: Agents, innovation, and transformation
- Deloitte: In the gen AI economy, consumers want innovation they can trust
- Forrester: Predictions 2026: AI Gets Real For Customer Service — But It’s Not Glamorous Work
- Journal of Interactive Marketing: Artificial Intelligence Chatbots Versus Human Agents in Customer Satisfaction: The Role of Warmth and Competence
- IBM: Customer service and the generative AI advantage
- Edgetier: When Chatbots Go Wrong: The New Risk Landscape in AI Customer Service
- ScienceDirect: Service robot risk awareness and customer-directed helping from the perspective of the transactional model of stress

