Technology · Exclusive

AI Can Predict Your Personality from Chat History, ETH Zurich Study Finds

Photo · Kai Lindgren for European Pulse
By Kai Lindgren, Technology Editor · May 7, 2026 · 3 min read

Artificial intelligence can infer a person's personality traits from their chat history with surprising accuracy, according to a new pre-print study from researchers at ETH Zurich in Switzerland. The findings raise important questions about privacy and the potential for misuse as AI systems become more integrated into daily life across Europe.

The study, which has not yet been peer-reviewed, involved 668 ChatGPT users from the United States and the United Kingdom. Participants shared their chat histories with the researchers, who then trained an AI model to identify personality traits based on the content and topics of those conversations. In total, the team analysed over 62,000 chats.

How the AI Model Worked

The researchers focused on the so-called "Big Five" personality traits: agreeableness, conscientiousness, emotional stability, extraversion, and openness. They fine-tuned a language model to estimate how likely a user was to exhibit each trait. To verify the model's predictions, participants also completed a standard psychological assessment.

The results showed that the AI could identify a user's personality traits with up to 61% accuracy. It performed particularly well on agreeableness and emotional stability, but struggled with conscientiousness. The model's accuracy improved when it had access to longer chat histories, suggesting that the more a person interacts with AI, the easier it becomes to profile them.
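To make the evaluation concrete, here is a minimal sketch (not the authors' code) of how per-trait accuracy could be computed. It assumes the model and the questionnaire each yield a binary judgement per trait, for instance whether a user scores above the population median; the function name, data layout, and toy values are all illustrative.

```python
# Illustrative only: hypothetical per-trait predictions vs. questionnaire labels.
TRAITS = ["agreeableness", "conscientiousness", "emotional_stability",
          "extraversion", "openness"]

def per_trait_accuracy(predictions, labels):
    """predictions/labels: lists of dicts mapping each Big Five trait
    to True/False (e.g. above vs. below the population median)."""
    accuracy = {}
    for trait in TRAITS:
        correct = sum(p[trait] == l[trait] for p, l in zip(predictions, labels))
        accuracy[trait] = correct / len(labels)
    return accuracy

# Toy data for three hypothetical users (A=agreeableness, C=conscientiousness,
# E=emotional stability, X=extraversion, O=openness)
preds = [
    {"agreeableness": True,  "conscientiousness": False, "emotional_stability": True,  "extraversion": False, "openness": True},
    {"agreeableness": False, "conscientiousness": True,  "emotional_stability": False, "extraversion": True,  "openness": True},
    {"agreeableness": True,  "conscientiousness": True,  "emotional_stability": True,  "extraversion": False, "openness": False},
]
truth = [
    {"agreeableness": True,  "conscientiousness": True,  "emotional_stability": True,  "extraversion": False, "openness": True},
    {"agreeableness": False, "conscientiousness": False, "emotional_stability": False, "extraversion": True,  "openness": True},
    {"agreeableness": True,  "conscientiousness": True,  "emotional_stability": False, "extraversion": True,  "openness": False},
]
print(per_trait_accuracy(preds, truth))
```

In the real study each prediction would come from the fine-tuned language model and each label from the standard psychological assessment; a figure like 61% is then simply this ratio averaged over hundreds of participants.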

While the immediate risks to individuals may seem small, the researchers caution that the implications at scale are significant. "There are major risks at large scale," they note, pointing to the possibility that personality data could be exploited by malicious actors. For example, they warn of "large-scale manipulation campaigns spreading disinformation and/or political propaganda."

This study adds to a growing body of research on the capabilities and dangers of AI. In a separate study, researchers at the University of Edinburgh found that cybercriminals have so far been disappointed by AI tools, a sign that the technology's capacity for harm is still being mapped. Meanwhile, other European research has shown that AI models can match or beat doctors in complex medical reasoning, highlighting the dual-use nature of these systems.

The ETH Zurich study underscores the need for robust data protection regulations, particularly as the European Union's AI Act moves toward implementation. The ability to infer personality traits from chat data could have implications for targeted advertising, political campaigning, and even employment screening. As AI systems become more widespread, the line between helpful personalisation and invasive surveillance may become increasingly blurred.

For now, the researchers emphasise that their work is a proof of concept. But they urge policymakers and the public to consider the potential consequences. "The technology is advancing faster than our understanding of its societal impact," they write. "We need to be proactive, not reactive."
