Study Finds ChatGPT Conversations Reveal Users’ Personality Traits

Key Points
- ETH Zurich researchers paired 62,090 ChatGPT conversations with self‑reported personality test scores.
- A fine‑tuned AI model predicted openness, conscientiousness, extraversion, agreeableness and neuroticism above random chance.
- Extraversion was the easiest trait to infer, with predictions up to 44% better than random guessing, especially in mental‑health‑related chats.
- Topic‑specific cues—religion, mood, mental state—enhanced predictions for other traits.
- More frequent ChatGPT use increased profiling accuracy, raising concerns for the 800‑million‑user base.
- Authors warn that personality profiles could enable targeted ads, persuasion or influence campaigns.
- Deleting chat history can reduce the amount of data available for profiling.
- OpenAI has not commented; the study adds to ongoing debates about AI privacy and ethics.

Researchers at ETH Zurich analyzed more than 62,000 real‑world ChatGPT exchanges and showed that an AI model can predict the five major personality dimensions—openness, conscientiousness, extraversion, agreeableness and neuroticism—with accuracy well above chance. The work, posted on arXiv, suggests that routine chats, even on casual topics, contain enough signal for profiling, raising fresh concerns about privacy and the potential for targeted manipulation.
ETH Zurich scientists have demonstrated that everyday interactions with OpenAI’s ChatGPT can be turned into a surprisingly accurate personality profile. By pairing 62,090 anonymized conversations from 668 volunteers with the participants’ self‑reported scores on a standard five‑factor inventory, the team fine‑tuned a model to classify each user’s traits as low, medium or high. The model outperformed random guessing on all five dimensions, with extraversion standing out as the easiest to infer, at up to 44 percent better than chance.
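In structure, the task the paper describes is a standard supervised text‑classification problem: conversation text in, a low/medium/high bucket per trait out. The sketch below is purely illustrative and is not the authors’ method: the example sentences are invented, and a TF‑IDF plus logistic‑regression baseline stands in for the fine‑tuned model the study actually used.

```python
# Hypothetical sketch of the prediction task described in the study:
# map conversation text to a low/medium/high bucket for one Big Five
# trait. The authors fine-tuned an AI model on 62,090 real chats; this
# stand-in uses TF-IDF features and logistic regression with invented
# sentences, purely to illustrate the pipeline's shape.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented transcripts paired with self-reported extraversion buckets
# (in the study these labels come from a five-factor inventory).
conversations = [
    "I went to three parties this weekend and loved meeting new people",
    "I prefer quiet evenings alone with a book and my journal",
    "Group projects are fine, but I need time alone to recharge after",
    "Organizing the team offsite was the highlight of my month",
    "Crowds drain me, so I cancel plans more often than I keep them",
    "I can take or leave social events depending on my mood",
]
extraversion = ["high", "low", "medium", "high", "low", "medium"]

# One classifier per trait; the study reports five such predictions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(conversations, extraversion)

# With three balanced classes, chance accuracy is roughly 33%; the
# study's "44 percent better than chance" for extraversion is measured
# against a guessing baseline of this kind.
print(model.predict(["Hosting big dinner parties is my favorite thing"]))
```

In the study itself, a fine‑tuned model trained on tens of thousands of real transcripts replaces this toy classifier, with one prediction per trait; the sketch is only meant to show how little scaffolding the task requires.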
The researchers discovered that certain discussion topics sharpen the model’s predictions. Chats touching on mental‑health issues boosted accuracy for extraversion, religious conversations correlated strongly with conscientiousness, and references to mood or mental state made openness easier to discern. In short, even seemingly innocuous dialogue carries enough behavioral cues for a machine to infer a user’s psychological makeup.
Frequency of use also mattered. The more a participant engaged with ChatGPT, the clearer the personality signal became. This finding underscores a broader implication: with over 800 million monthly active users as of January 2026, the platform’s collective data pool could enable profiling at unprecedented scale.
According to the study’s authors—Derya Cögendez, Verena Zimmermann and Noé Zufferey—the results raise alarms for service providers that already harvest conversational data. They warn that a detailed personality profile could be weaponized for hyper‑targeted advertising, personalized persuasion, or even coordinated influence campaigns, and they stress that users should not treat ChatGPT as a private diary.
While the paper does not propose immediate policy changes, it points to practical steps individuals can take. Deleting chat history regularly, for instance, reduces the amount of conversational data available for profiling and limits how much personal information is retained. The authors also call for greater transparency from AI developers about how conversational data is stored and used.
Industry observers have noted that the study arrives at a time when OpenAI and other firms are experimenting with monetization features, such as in‑app advertising. If advertisers gain access to personality‑based segments derived from chat logs, the line between relevant content and manipulation could blur further. The researchers argue that the ethical stakes are high and call for a dialogue among technologists, regulators and the public.
OpenAI has not yet issued a formal comment on the findings. Nonetheless, the paper adds to a growing body of work examining the hidden privacy risks of large language models. As AI assistants become more embedded in daily life, the balance between convenience and personal data protection will likely shape the next wave of policy and product design.