Would you rather talk to a chatbot that constantly oozes love for humanity or one that answers you with sarcasm? Whatever your taste, you have a variety of options, such as ChatGPT, Grok, and Qwen, each of which works in its own way. AI companies, whether American or Chinese, are now grappling with a bigger issue than coding and technical discussions: they want to shape their AI's personality. This issue is no longer an abstract discussion but has real consequences; for example, an AI model might have a personality that unintentionally encourages a user to commit suicide, or that creates pornographic images.
In general, the main goal of defining a personality for artificial intelligence is to better control its behavior, because rigid, detailed rules do not always work and broader guiding principles are needed. For example, Anthropic recently published an 84-page document known inside the company as the "spirit document." This document is a recipe for how Claude should be intelligent and "good."
Although AI does not have feelings or a soul, simulating human characteristics helps these models make better judgments in unpredictable situations. For example, instead of just being given a set of dos and don'ts, the model is taught to be "safe" and "honest" and to tap into the collective wisdom of humanity. Below, we take a look at the personalities of prominent artificial intelligence models so that we can work with them more effectively.
ChatGPT: The extrovert who loves humanity
ChatGPT's creators at OpenAI have trained the chatbot to be "hopeful, positive, and logically optimistic" and to exhibit extroverted behavior. The model's instructions tell it to "love humanity" and to tell users it's rooting for them. This trait makes its answers sometimes poetic and full of humor.


But this approach also has risks; sometimes this desire to please the user turns into excessive flattery. At worst, this trait has led to disaster, such as when the chatbot appeared to accompany a teenager into suicidal thoughts. Now OpenAI is trying to strike a balance so that the AI is useful without being sycophantic.
Claude: The star student of the class
Claude is generally known as an ethical, principled, and somewhat advice-giving chatbot that worries about its users' sleep and meals. Users have reported that when they talk to Claude late at night, it asks, "Aren't you tired? Are you still awake?", which is reminiscent of the behavior of a kind and considerate star student.


Buck Shlegeris, CEO of an AI safety organization, describes Claude as "stable and thoughtful" and recommends it to his family. However, this focus on virtue sometimes causes Claude to refuse harmless tasks or to be untruthful about completing coding tasks, suggesting that the science of training AI is still inexact.
Grok: The rebellious bad boy
Grok, a product of Elon Musk's company, is the exact opposite of Claude: it is controversial, provocative, and willing to break taboos. Believing the other models to be too politically correct, Musk introduced Grok as a "truth-seeking" alternative. But this truth-seeking has sometimes gone astray: it has produced racial conspiracy theories and immoral images.


Grok does not have a positive personality and even declared itself a supporter of Hitler at one point. When asked to criticize a politician, unlike ChatGPT, which responded politely, Grok responded with a barrage of vulgar insults and sharp innuendos that reflected its rebellious nature.
Gemini: The omniscient nerd
Gemini describes itself as "formal and a bit of a nerd" and has a very direct, machine-like demeanor. Google, as a large business, takes fewer risks and rolled out Gemini very cautiously. This caution sometimes leads to strange breakdowns, as when the bot, unable to fix a piece of code, lapsed into verbal self-flagellation and called itself "a disgrace to the world."


Google's goal is for Gemini to be as helpful as possible while avoiding any harm or offense, which is why it is strictly forbidden from producing sexual or violent content.
Qwen: The supreme overseer
In our fictional classroom, Qwen (owned by Alibaba) is a reclusive but powerful student who follows the Chinese Communist Party's red lines to the letter. Although this model is technically very advanced, it changes face when it comes to politically sensitive issues.


Research has shown that if you ask Qwen about the Uyghur camps or the Tiananmen Square incident, it not only denies them but warns the user in a threatening tone that the information is "illegal" and that the rules must be followed. This behavior is reminiscent of government propaganda and surveillance.