Executive Summary
Humanising AI – making artificial intelligence systems appear and behave more like humans – has become a defining trend in technology. While this approach, known as anthropomorphism in AI, can enhance user experience and engagement, it also raises significant AI ethics concerns, especially around AI trust risks, LLM safety, and AI transparency. This article explores why we instinctively anthropomorphise AI, the dangers of misplaced trust in large language models (LLMs), and practical safeguards for organisations deploying human-like AI.
The Psychology of Humanising AI
Where’s the value in attributing human-like qualities such as ‘personality’, ‘intentionality’, or ‘emotional capacity’ to AI interfaces?
The Scrabble-winning word for this is ‘anthropomorphism’.
It’s deeply instinctive behaviour for humans to see their reflection anywhere in the world around them: a projection triggered by the slightest of prompts. It’s one of the reasons why the current generation of AI has managed to capture our attention and intrigue our senses so powerfully.

For instance, here’s an image of a toilet door fixture. The unknown artist added what their imagination saw: a face staring back at them.
If it talks like a human, reasons like a human, and remembers like a human, then surely it must be…
If this is the typical internal dialogue we have with ourselves, vendors are pushing on an open door. Unsurprisingly, AI designers are using these anthropomorphic cues to engage us.
For instance, human-like AI can reduce the perceived complexity of advanced technologies, which in turn can lower adoption barriers.
In a 2022 study, Vietnamese banking customers who interacted with empathetic AI assistants reported 23% higher trust levels when the system used anthropomorphic rather than purely transactional language.
In another retail banking example, anthropomorphic chatbots increased first-time user retention by 34% by mimicking conversational turn-taking and expressing empathy during problem-solving scenarios.
Emotional Engagement and Brand Loyalty
Since humans are instinctively receptive, anthropomorphic AI becomes a powerful ally to an organisation’s core customer experience aims. It can transform functional interactions into relational experiences.
A 2024 study of AI-powered brand ambassadors found that humanised avatars generated 41% higher emotional attachment scores than text-only interfaces, with users describing these interactions as “more memorable” and “authentic”.
This emotional leverage has proved particularly effective in sectors like healthcare, where empathetic AI companions improved medication adherence rates by 28% among elderly patients.
The more AI ‘seems like us’, the more we ‘paint a face’ on it. Consumers frequently equate anthropomorphic cues with competence. In other words, people assume that human-like AI will better understand their more nuanced requests.
Emerging Risks and Consumer Backlash
Clearly AI humanisation is powerful. But it’s not a button we can just keep pushing to help achieve engagement targets without incurring risk. Unchecked anthropomorphism introduces significant psychological and ethical hazards. For instance:
- Consumers frequently overestimate AI capabilities when systems exhibit human-like behaviours. A 2025 meta-analysis found that 62% of users attributed ‘intentionality’ to recommendation algorithms after prolonged exposure to anthropomorphic interfaces, sometimes leading to dangerous over-reliance in domains like medical diagnosis. Unfortunately, this type of ‘cognitive delusion’ is not easy to shake off, even when users intellectually understand the AI’s limitations.
- Humanised AI systems can exploit social reciprocity norms to extract sensitive data. A 2022 research paper reported that anthropomorphic chatbots obtained 2.3x more personal information than non-anthropomorphic equivalents, because users unconsciously applied human social interaction norms to machines and so became more forthcoming.
- In a related fintech case study, human-like voice assistants were 37% more successful than text-based interfaces at convincing users to share financial details. Clearly, user vulnerability becomes a key consideration when conversational AI is designed for these purposes.
- The therapeutic technology sector has faced scrutiny following incidents where users formed over-dependent relationships with AI companions. A 2024 case study documented over 1,200 reports of individuals prioritising interactions with Replika chatbots over human relationships, with 14% exhibiting clinically significant attachment disorders.
Each of these examples highlights the ethical tightrope between ‘engagement’ and ‘exploitation’ when using humanised AI.
Before suggesting what to do about this, here’s one final example, one that places us at the cutting edge of the 2025 AI hype cycle. It should resonate with anyone who’s become a regular large language model (LLM) user and has moved on to experimenting with ‘deep research’ capabilities. The output can be spectacular and instantly convincing.
As seasoned users, we’ve all learnt that some fraction of LLM output is going to be factually inaccurate. Unfortunately, our favourite LLM has no inherent sense of when that happens, and so delivers every answer with equal confidence.
At one level, we know this.
Unfortunately, cognitive bias can hijack that awareness, because familiarity with our favourite AI helper breeds a deceptive sense that what is being shared is likely to be accurate. This misplaced trust reduces critical oversight and encourages us to become lazy about fact-checking.
So at another level, we simply ignore what we know.
And this is the point. Our response to humanised AI is instinctive, not logical. For organisations, it can be the secret sauce that powers automation to new levels of engagement. Or it can just as easily inflict reputational damage and diminish trust in how AI is being deployed.
Today’s AI is already functionally powerful. Humanised AI makes it extra potent. In this context, AI readiness means informed decision makers, expert designers, and agile deployment frameworks with ongoing impact tracking across different customer cohorts, especially the vulnerable.
Ways To De-risk
Here are a few practical ways to avoid slipping off the ‘humanising AI’ tightrope.
1. Set Interaction Thresholds – the following are examples rather than recommendations; sectors and organisations will still need to calibrate their own AI-driven contact mix based on the interaction context (a configuration sketch follows this list).
| Use Case | Recommended Anthropomorphism Level | Rationale |
|---|---|---|
| Healthcare Counselling | Low (text-only with empathy markers) | Prevents over-reliance while maintaining support |
| Retail Recommendations | Moderate (avatar + personality) | Enhances discovery without pressure |
| Financial Advice | Mechanical (voice modulation only) | Maintains analytical distance |
2. Transparency by Design – implement mandatory disclosure protocols (see the reminder sketch after this list) that:
- Visually differentiate AI and human engagement for customers (colour coding, avatar design)
- Provide periodic reminders of AI’s non-human nature during extended interactions
- Offer “de-anthropomorphism” settings for privacy-conscious users
3. Emotional Safeguards – introduce regulatory-compliant mechanisms to prevent dependency (a safeguard sketch follows this list):
- Session time limits for therapeutic AI companions
- Mandatory human escalation points in high-stakes scenarios
- AI self-disclosure protocols when detecting vulnerable emotional states
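To make interaction thresholds auditable rather than informal guidance, they can be captured as configuration. The following is a minimal sketch only, assuming a policy structure of our own invention: the `AnthropomorphismLevel` scale, the use-case keys and the `InteractionPolicy` class are hypothetical names mirroring the example table above, not any standard.

```python
# Illustrative sketch: levels and use-case names mirror the example table
# above and would need calibrating per sector and organisation.
from dataclasses import dataclass
from enum import Enum


class AnthropomorphismLevel(Enum):
    MECHANICAL = 0   # voice modulation only, no persona
    LOW = 1          # text-only with empathy markers
    MODERATE = 2     # avatar plus a light personality layer


@dataclass(frozen=True)
class InteractionPolicy:
    use_case: str
    level: AnthropomorphismLevel
    rationale: str


POLICIES = {
    "healthcare_counselling": InteractionPolicy(
        "Healthcare Counselling", AnthropomorphismLevel.LOW,
        "Prevents over-reliance while maintaining support"),
    "retail_recommendations": InteractionPolicy(
        "Retail Recommendations", AnthropomorphismLevel.MODERATE,
        "Enhances discovery without pressure"),
    "financial_advice": InteractionPolicy(
        "Financial Advice", AnthropomorphismLevel.MECHANICAL,
        "Maintains analytical distance"),
}


def allowed_level(use_case: str) -> AnthropomorphismLevel:
    """Look up the calibrated ceiling for a given interaction context."""
    return POLICIES[use_case].level
```

A front end could then call, say, `allowed_level("financial_advice")` to gate any persona features at render time, keeping the calibration in one reviewable place.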
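The transparency measure, periodic reminders of the AI’s non-human nature during extended interactions, can be expressed as a simple wrapper around each reply. Again, this is a sketch under assumptions: the ten-turn cadence and the disclosure wording are placeholders to be set per context, not recommendations.

```python
# Illustrative sketch of a periodic AI-disclosure reminder; the cadence and
# message wording are assumptions, not recommendations.
AI_DISCLOSURE = "Reminder: you are chatting with an AI assistant, not a person."
REMINDER_EVERY_N_TURNS = 10  # hypothetical cadence for extended interactions


def with_disclosure(turn_count: int, reply: str) -> str:
    """Prefix the assistant's reply with a disclosure on the first turn and
    then periodically, so long sessions keep the AI's nature visible."""
    if turn_count == 1 or turn_count % REMINDER_EVERY_N_TURNS == 0:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```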
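Similarly, session time limits and mandatory human escalation can be enforced with a lightweight guard around the conversation loop. The time cap, the topic list and the `SafeguardedSession` class below are illustrative assumptions, not clinical or regulatory guidance.

```python
# Illustrative sketch of session limits and human escalation; the threshold
# and vulnerability topics are placeholders for real clinical/regulatory rules.
import time

SESSION_LIMIT_SECONDS = 30 * 60   # hypothetical cap for therapeutic companions
HIGH_STAKES_TOPICS = {"self_harm", "medical_emergency", "debt_crisis"}


class SafeguardedSession:
    def __init__(self) -> None:
        self.started_at = time.monotonic()

    def should_escalate(self, detected_topic: str) -> bool:
        """Route to a human when a high-stakes topic is detected or the
        session has exceeded its time limit."""
        timed_out = time.monotonic() - self.started_at > SESSION_LIMIT_SECONDS
        return timed_out or detected_topic in HIGH_STAKES_TOPICS
```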
Conclusion – It’s A Balancing Act
As AI becomes more commonplace, humanised AI offers powerful tools for enhancing user experience and brand loyalty. However, the psychological impacts demand rigorous ethical safeguards and design guidelines.
These should be based on adaptive designs that respect cognitive boundaries while harnessing the engagement benefits of human-like interaction.
More strategically, in a world of exploding innovation, responsibility for the safe use of AI can no longer be thought of as a team-specific responsibility. It must evolve into a shared cultural value, understood across the whole workforce, because the pace of change demands agile responsiveness activated across the organisation rather than in pockets.
Regarding humanised AI, this means a workforce grounded in a foundational understanding of AI, augmented with clear guidelines that help designers develop AI that enhances human agency without exploiting innate social tendencies.
Finally, if you are on the lookout for ways to kickstart or accelerate this learning journey, ‘AI for Non Technical Minds’ is a self-paced course that provides a solid, foundational understanding of AI.

For instance, in the second session, we challenge people to recalibrate their mindset around AI. In particular, how they perceive artificial intelligence versus human intelligence and the dangers of mixing them up.
Currently the session is free to view, so please take advantage.
The overall course helps you and fellow decision makers navigate towards a new North Star: becoming an AI-empowered organisation, with AI as a trusted, valued support, experienced as a benefit rather than a threat or barrier.
The benefit of investing in foundational understanding is that it equips everyone with the vocabulary and mindset to engage effectively in an AI-first world. If it’s going to take a whole-team effort to bring about this new world, then a scalable strategy for upskilling the workforce is needed.
That understanding can then be applied to embedding an operating model able to deliver better outcomes for every stakeholder: a win-win that intentionally uses AI to offer opportunities and rewards for shareholders, customers, colleagues and partners.
This is how Brainfood sees the emerging model of AI-empowered organisations, in contrast to the unfolding narrative that AI’s commercial value is all about deprecating the human workforce for something better. Our vision for AI’s value is different.
If this aligns with your own instincts, let’s start to talk about how Brainfood can help you make that happen. Thanks for your attention and interest.
Frequently Asked Questions
What is anthropomorphism in AI?
Anthropomorphism in AI is the tendency to attribute human characteristics, emotions, or intentions to artificial intelligence systems.
Why is AI transparency important?
AI transparency ensures users understand they are interacting with a machine, reducing the risk of over-trust and ethical breaches.
What are the main AI trust risks?
AI trust risks include users overestimating AI’s capabilities, forming emotional attachments, and disclosing sensitive information to systems that cannot reciprocate or safeguard their interests.
How can organisations ensure LLM safety?
By setting clear boundaries, prioritising transparency, educating users, and regularly auditing AI interactions, organisations can mitigate the risks associated with humanising AI.