Conversation with the chatbot: what influences our trust
On a bank's website or on a telephone provider's helpline: interactions between people and chatbots are now part of everyday life. Do we trust them? And what factors influence this trust? Psychologists at the University of Basel have investigated these questions.

"Hello ChatGPT, can you help me?" - "Of course, how can I help you? 😊" The back and forth between users and chatbots based on artificial intelligence (AI) quickly feels like a conversation with another person.
Dr. Fanny Lalot and Anna-Marie Bertram from the Faculty of Psychology at the University of Basel wanted to find out to what extent people trust AI chatbots and what this trust depends on. The focus was on text-based systems, i.e. not voice assistants such as Siri or Alexa, but platforms such as ChatGPT.
In the study, test subjects were confronted with examples of conversations between users and the fictitious chatbot "Conversea". They then imagined that they themselves were interacting with Conversea. The results are published in the Journal of Experimental Psychology: General.
Chatbot as an independent entity
Whether and how much we trust other people depends on various factors: our own personality, the behavior of the other person and the respective situation all play a role. "Childhood influences how much we can trust, but it also takes a certain willingness to want to trust in the first place," says social psychologist Fanny Lalot. Characteristics that promote trust are integrity, competence and benevolence.
The study shows that what matters in interpersonal relationships also applies to AI systems. Competence and integrity in particular are key criteria for whether people perceive an AI chatbot as trustworthy, whereas benevolence matters less once the other two dimensions are present. "Our study shows that respondents attribute these characteristics directly to the AI, not to the company behind it. To a certain extent, they perceive AI as an independent entity," says Lalot.
There are also differences between personalized and impersonal chatbots. When a chatbot addressed them by name and referred to previous conversations, for example, the study participants rated it as particularly benevolent and competent. "They humanized the personalized chatbot. This increased their willingness to use the tool and share personal information with it," says Fanny Lalot. However, the test subjects did not attribute significantly more integrity to the personalized chatbot, and overall trust was not significantly higher than trust in the impersonal chatbot.
Integrity is more important than benevolence
According to the study, integrity is more important for trustworthiness than benevolence. It is therefore important to develop the technology in such a way that integrity is the top priority. To ensure the tools are used appropriately, designers should also take into account that personalized AI is perceived as more benevolent, competent and human. Lonely, vulnerable people in particular run the risk of becoming dependent on AI-based friendship apps, as other studies show.
Fanny Lalot emphasizes: "With our study, we are not making a statement about whether trust in a chatbot is good or bad." She sees AI chatbots as a tool that we have to learn to use, similar to the opportunities and risks of social media.
However, some recommendations for action can be derived from the results. "We project more into AI systems than is actually there," says Fanny Lalot. This makes trustworthy AI systems all the more important: a chatbot should neither lie to us nor uncritically agree with everything we say.
If an AI chatbot is too uncritical and confirms every statement, there is no reality check and an echo chamber can develop, which in the worst case could lead to someone isolating themselves from their social environment. "A friend would hopefully intervene at some point if someone develops ideas that are too crazy or immoral," says Lalot.
Betrayal by the AI?
In interpersonal relationships, a breach of trust has serious consequences for future interactions. Is this also conceivable with chatbots? "That's an exciting question. More research is needed to answer it," says Fanny Lalot. "I can certainly imagine that someone could feel betrayed if the advice of an AI had negative consequences."
Guidelines are needed that hold the companies developing these systems accountable. For example, an AI could show how it arrives at a result by disclosing the sources it uses, and it could say when it doesn't know something instead of making something up. (Noëmi Kern/University of Basel/yzu/swi)