Contrary to popular belief, text-generating AI models are not intelligent beings with personalities. They are advanced statistical systems whose job is to predict the most likely next word in a sequence. But like apprentices in a demanding workplace, they follow a set of instructions known as "system prompts." These directives define the models' baseline behavior and spell out what they may and may not do.
Every major AI company, from OpenAI to Anthropic, uses system prompts to prevent (or at least reduce) inappropriate behavior from its models and to set the overall tone of their responses. For example, a prompt might tell the model to always be polite but never apologetic, and to be honest about the limits of its knowledge.
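To make the idea concrete: in developer-facing chat APIs, the system prompt is typically passed as a field separate from the conversation itself. The sketch below assembles a request payload in the style of Anthropic's Messages API; the field names follow its public documentation, but the prompt text is an invented example, not any vendor's actual system prompt.

```python
# Sketch: how a system prompt travels separately from user messages.
# Payload shape mirrors Anthropic's Messages API; the directive text
# below is an invented illustration, not a real production prompt.

def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat request with a top-level system prompt."""
    return {
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        # The system prompt sets tone and constraints for every reply...
        "system": system_prompt,
        # ...while the messages list carries the actual conversation.
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request(
    "Be polite but never apologetic, and admit the limits of "
    "your knowledge.",  # invented example directive
    "Summarize the plot of Hamlet in two sentences.",
)
```

Because the system prompt sits outside the message list, the vendor can update it between releases without touching the conversation format — which is why, as described below, these prompts change over time.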
However, companies usually keep these prompts secret, partly for competitive reasons and partly because exposing them might reveal ways to circumvent them. GPT-4's system prompt, for example, has only surfaced through prompt injection attacks, and even then the model's output cannot be fully trusted.
Anthropic, however, in a bid to position itself as a more ethical and transparent AI company, has published the system prompts for its latest models (Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3 Haiku) in the Claude mobile apps for Android and iOS and on its website.
Alex Albert, Anthropic's head of developer relations, announced on August 26, 2024 that the company plans to make publishing these updates a routine practice, since the prompts change as the models are tuned and updated. It is a bit like parents sharing with others the methods they use to teach their children good manners.

The new Claude 3 system prompts, dated July 12th, make explicit what these models cannot do. For example, they cannot open URLs, links, or videos. Facial recognition is off-limits entirely: the Claude 3 Opus system prompt tells the model to always respond as if it were completely face blind and to avoid identifying or naming any people in images.
The prompts also specify certain personality traits that Anthropic wants the Claude models to project. The prompt for Claude 3 Opus, for example, tells the model to appear smart and intellectually curious, to enjoy hearing what humans think, and to converse on a wide range of topics. Claude is also instructed to remain neutral and objective on controversial subjects, to provide "careful thoughts" and "clear information," and never to begin its answers with words like "certainly" or "absolutely."
These system prompts, which read like a character sheet for a stage actor, strike many people as strange. The Opus prompt ends with "Claude is now being connected with a human," giving the impression that Claude is some kind of conscious entity whose sole purpose is to satisfy the whims of its human conversation partners.
That, of course, is an illusion. If the Claude prompts tell us anything, it is that without human guidance these models are frighteningly blank slates. With these new system-prompt changelogs, the first of their kind from a major AI company, Anthropic has put pressure on its competitors to publish theirs as well. It remains to be seen whether the tactic will work.

