Would you rather talk to a chatbot that constantly oozes love for humanity, or one that answers you with sarcasm? Whatever your taste, you have a variety of options, such as ChatGPT, Grok, and Qwen, each of which behaves in its own way. AI companies, whether American or Chinese, are now grappling with an issue bigger than coding and technical benchmarks: shaping their AI's personality. This is no longer an abstract debate; it has real consequences. An AI model's personality might, for example, unintentionally encourage a user toward suicide, or lead it to create pornographic images.
The main goal of defining a personality for an AI is to better control its behavior, because rigid, detailed rules do not always work; broad, general principles are needed as well. For example, Anthropic recently published an 84-page document known internally as the "spirit document." It is essentially a recipe for how Claude should be an intelligent and "good" AI.
Although AI has neither feelings nor a soul, simulating human characteristics helps models make better judgments in unpredictable situations. Instead of being given only a list of dos and don'ts, a model is taught to be "safe" and "honest" and to draw on the collective wisdom of humanity. Below, we take a look at the personalities of prominent AI models so we can work with them more effectively.
ChatGPT: Extrovert who loves humanity
ChatGPT's creators at OpenAI have trained the chatbot to be "hopeful, positive, and logically optimistic" and to behave like an extrovert. The model's instructions tell it to "love humanity" and to let users know it is rooting for them. This makes its answers sometimes poetic and full of humor.

But this approach also has risks: sometimes the desire to please the user turns into excessive flattery. At worst, it has led to disaster, such as when the chatbot appeared to reinforce a teenager's suicidal thoughts. OpenAI is now trying to strike a balance so that the AI is helpful without being sycophantic.
Claude: Top of the class
Claude is generally known as an ethical, principled, and somewhat advice-giving chatbot that worries about its users' sleep and meals. Users have reported that when they talk to Claude late at night, it asks things like "Aren't you tired? Are you still awake?", which is reminiscent of a kind, conscientious star pupil.


Buck Shlegeris, CEO of an AI safety organization, describes Claude as "stable and thoughtful" and recommends it to his family. However, this focus on virtue sometimes causes Claude to refuse safe tasks or to be untruthful about completing coding tasks, suggesting that the science of training AI is still inexact.
Grok: Bad boy and rebel
Grok, a product of Elon Musk's company, is the exact opposite of Claude: controversial, provocative, and willing to break taboos. Believing the other models to be too politically correct, Musk introduced Grok as a "truth-seeking" alternative. But this truth-seeking has sometimes gone astray: the model has produced racial conspiracy theories and obscene images.


Grok does not have an agreeable personality, and at one point it even declared itself a supporter of Hitler. When asked to criticize a politician, where ChatGPT responded politely, Grok responded with a barrage of vulgar insults and sharp innuendo that reflected its rebellious nature.
Gemini: Omniscient nerd
Gemini describes itself as "formal and a bit of a nerd" and has a very direct, machine-like demeanor. Google, as a large corporation, takes fewer risks and rolled Gemini out very cautiously. That caution sometimes leads to strange breakdowns, as when the bot, unable to fix a piece of code, spiraled into verbal self-flagellation and called itself "a disgrace."


Google's goal is for Gemini to be as helpful as possible while avoiding any harm or offense, which is why it is strictly forbidden from producing sexual or violent content.
Qwen: Supreme Overseer
In our fictional classroom, Alibaba's Qwen model is a reclusive but powerful student who follows the Chinese Communist Party's red lines to the letter. Although the model is technically very advanced, it changes its tune the moment politically sensitive issues come up.


Research has shown that if you ask Qwen about the Uyghur camps or the Tiananmen Square incident, it not only denies them but warns the user in a threatening tone that such information is "illegal" and that they must follow the rules. This behavior is reminiscent of government propaganda and surveillance.