
I’m fully signed up to the principle that human contact is best reserved for customer service scenarios that are emotive, complex, urgent or require relationship repair. That rule of thumb defines the moat against the incoming tide of AI capability.
It’s been a useful framework for customer contact strategy. Customer conversations can be easily tagged in this way. Levels of demand can be forecast, as can the mix of skills needed.
But there are exceptions. The one I’m about to share is fascinating and helps advance our understanding of effective customer contact options.
Remember Woebot?
I remember incorporating this bot into my self-service/conversational AI presentations back in the day. Woebot, the mental health chatbot, launched on June 6, 2017. It began as a service on Facebook Messenger and was created to provide conversational, cognitive behavioural therapy–based support to users.
I liked it because it was different from the standard money or travel bot and suggested much broader use cases for this new conversational paradigm that was just getting going.
This was in pre-COVID days, and as we know, discussion around mental health and well-being has since come out into the open. Back then it felt novel.
Woebot was an early experiment in user preferences, and it challenges the rule of thumb I opened this article with. It seems that in some circumstances customers clearly prefer a non-human touch.
That’s what really caught my eye about this form of service delivery: non-human while still being conversational.
However, the rapid advancement of large language models made Woebot’s pre-scripted approach less competitive, and the app was officially retired on June 30, 2025.
These days, using your favourite LLM as a therapist is a widespread habit, especially amongst Gen Z – 40% of whom are inclined to think that AI is now sentient. Maybe that’s because it’s so convincing in the therapeutic wisdom it imparts.
That said, experts remain concerned.
In fact, the habit has grown to such an extent that Illinois has just become the first US state to ban AI therapists. The law specifies that only licensed professionals are allowed to offer counseling services in the state and forbids AI chatbots or tools from acting as a stand-alone therapist.
However we choose to respond as societies, the fact remains that users like this form of interaction.
When Vulnerable Customers Prefer The Non-Human Touch
This insight into our psychology has recently been rediscovered in the fast-growing space of Voice AI deployments. It was something I noticed in a Reddit post. I’m going to quote it verbatim, since the story is perfectly told for the insights I want to leave you with about contact strategy. BTW, the bold text is my own, added for emphasis.
“I want to share an experience that has completely shifted my perspective on AI in customer interactions, especially around sensitive conversations. For the past six months, I’ve been analyzing the use of Voice AI in debt collection, working directly with MagicTeams.ai’s suite of Voice AI tools.
Like most people, I originally assumed debt collection was simply too personal and delicate for AI to handle well. It’s a domain full of emotion and, most of all, shame. How could we expect AI to handle those conversations with “the right touch”?
But after digging into thousands of call transcripts, and interviewing both collection agents and customers, what I found genuinely surprised me: Many people actually prefer talking to AI about their financial challenges, far more than to a human agent.
Why? The answer stunned me: shame. Debt collection is loaded with stigma. In my interviews, people repeatedly told me, “It’s just easier to talk about my struggles when I know there’s no judgment, no tone, no subtle cues.” People felt less embarrassed and, as a result, more open and honest with AI.
The data supported this shift in mindset:
- At a credit union I studied, customer satisfaction scores jumped 12 points higher for MagicTeams powered AI calls compared to human ones.
- Customer engagement soared by 70% during AI voice interactions.
- Customers not only answered calls more often, they stayed on the line longer and were more honest about their situations.
The real surprise: customers managed by AI-driven collections were significantly more likely to remain loyal afterward. The experience felt less adversarial – people didn’t feel judged, and were willing to continue the relationship.
A particularly powerful example: One bank we studied rolled out MagicTeams’ multilingual AI voice support, which could fluidly switch between languages. Non-native English speakers shared that this made them far more comfortable negotiating payment plans—and they felt less self-conscious discussing delicate topics in their preferred language.
Importantly, we’re not just stopping at conversation. We’re now building an end-to-end automated workflow for these Voice AI interactions using n8n, ensuring seamless handoffs, better follow-ups, and greater personalization – without any human bias or friction.
Key takeaways for me:
- Sometimes, the “human touch” isn’t what people want in vulnerable moments.
- People are more honest with AI because it offers a truly judgment-free space.
- The right automation (with MagicTeams and n8n) can actually deliver a more human experience than humans themselves.”
I found this a fascinating case study, validating what Woebot first recognised. I’ll leave you to consider the implications and use cases where vulnerable customers prefer the non-human touch. The author, for instance, sees tremendous scope:
“This goes way beyond just debt collection—there are huge implications for all sensitive customer interactions.”
Do you agree? Or is it only the specific emotion of shame that triggers this preference?
I’ll leave you with a final insight from the post, which for me provides a brilliant new rule of thumb for planning customer contact strategy.
Instead of asking “How can AI replace humans?” we should be asking “How can AI create spaces where humans feel safe being vulnerable?”

