Your Chatbot’s Personality

By Ronda Muir on January 28, 2026

In an interesting aside to the latest posts about murderous chatbots, researchers have found that the 'personality' of these virtual bots can be reliably measured using human personality tests – and that they exhibit very human personality traits, both good and bad, which can be precisely shaped, raising implications for AI safety and ethics.

Applying an open-source, 300-question version of the Revised NEO Personality Inventory and the shorter Big Five Inventory to 18 different large language models (LLMs), researchers at Cambridge University found that, in summary, “larger, instruction-tuned models such as GPT-4o most accurately emulated human personality traits, and these traits can be manipulated through prompts, altering how the AI completes certain tasks.”

For example, by carefully designing prompts, “they could make a chatbot appear more extroverted or more emotionally unstable – and these changes carried through to real-world tasks like writing social media posts.”
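To make the measurement step concrete, here is a minimal, illustrative Python sketch – not the researchers' actual code, and the item responses are invented for the example. Big Five inventories like the BFI score each item on a 1–5 Likert scale, reverse-key the negatively worded items, and average the results into a trait score; a personality-shaping prompt changes the model's answers and therefore the score.

```python
def score_trait(responses, reverse_keyed=frozenset()):
    """Score one Big Five trait from 1-5 Likert responses.

    responses: list of integer answers (1 = strongly disagree, 5 = strongly agree)
    reverse_keyed: indices of negatively worded items, scored as 6 - answer
    Returns the mean item score, between 1.0 and 5.0.
    """
    adjusted = [6 - r if i in reverse_keyed else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# Hypothetical extraversion items; items 1 and 3 are reverse-keyed
# (e.g. "tends to be quiet"). These answer sets are invented to show
# how a "be more extroverted" prompt could shift the measured trait.
baseline_score = score_trait([2, 4, 3, 2], reverse_keyed={1, 3})  # → 2.75
shaped_score = score_trait([5, 1, 4, 1], reverse_keyed={1, 3})    # → 4.75
```

The same scoring logic applies regardless of whether the respondent is a person or a chatbot, which is what makes human psychometric instruments usable on LLMs at all.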

The study establishes the unexpected ability of LLMs to present as human-like, and to respond to psychometric tests in ways consistent with human behavior, in large part "because of the vast amounts of human language data they have trained on."

Echoing some of the other cases we've been following, the study pointed out that "in 2023, journalists reported on conversations they had with Microsoft's 'Sydney' chatbot, which variously claimed it had spied on, fallen in love with, or even murdered its developers; threatened users; and encouraged a journalist to leave his wife. Sydney, like its successor Microsoft Copilot, was powered by GPT-4."

The study, unsurprisingly, acknowledges ethical concerns. Despite the documented benefits of these LLMs, the very anthropomorphization of AI raises issues. "Recent research suggests that anthropomorphizing AI agents may be harmful to users by threatening their identity, creating data privacy concerns and undermining well-being."

Just as real-life communication becomes more persuasive when personalities align, aligning the personality profile of a bot with that of a user can make the bot more effective at encouraging and supporting the user's behaviors. "However, the same personality traits that contribute to persuasiveness and influence could be used to encourage undesirable behaviours."

Another weakness of LLMs is the generation of convincing but incorrect content. Lower levels of emotional expression have been one indicator that a text is generated by an LLM, flagging possible misinformation. However, personality shaping may obscure that indicator, making it easier to use LLMs to generate believable but inaccurate content without detection.

So, note what this study is telling us. Bots can be made extraordinarily persuasive by aligning their personality traits with those of their users, thereby making them better at believably passing on misinformation. And an important current criterion for detecting an LLM behind those "facts" – the level of emotional expression – can be manipulated to obscure that tell.

The hope is that having a method to scientifically measure the personality of LLMs will increase awareness of models whose personalities have been dangerously manipulated.

Ronda Muir

Muir is a lawyer with both Big Law and inside counsel experience in the US and abroad. Grounded in the behavioral sciences, she provides psychologically sophisticated and business-savvy advice to maximize individual and organizational law practice performance in the 21st century.

  • Posted in: Law Firm Marketing & Management
  • Blog: Law People
  • Organization: Ronda Muir

Copyright © 2026, LexBlog. All Rights Reserved.