In the heart of San Francisco, in offices that once belonged to Slack, an emerging $2 billion company is trying to build a new world based on artificial intelligence. Anthropic, the fierce rival of OpenAI, was founded by Dario Amodei and his sister Daniela: the kind of people who spend a Sunday at the console playing Final Fantasy VII, and the next day, in their company’s glass-walled rooms, think about a future in which machines are smarter than all humans.
Dario Amodei, a former computational biologist and a principal architect of the language models behind ChatGPT, confidently says that artificial superintelligence could arrive as early as next year; systems capable of matching Nobel Prize winners. “Artificial intelligence will be better than everyone at everything,” he says.
But Amodei believes this future need not be catastrophic, if we prepare for it now. From the outset, he and his colleagues set out to anticipate the worst-case scenarios for the misuse of artificial intelligence. The result of this cautious approach was a conceptual development called “Constitutional AI”: a model trained on principles drawn from sources such as the Universal Declaration of Human Rights and the terms of service of companies like Apple, and monitored by a second artificial intelligence so that it does not cross red lines.
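For readers curious what “a model monitored by a second AI” looks like in practice, here is a minimal, hypothetical sketch of the critique-and-revise loop described in the Constitutional AI approach. The function names and the two sample principles below are illustrative placeholders, not Anthropic’s actual code or constitution; the real method additionally trains on these self-revisions and adds a reinforcement-learning stage.

```python
# Hypothetical sketch of a Constitutional AI critique-and-revise loop.
# "call_model" stands in for any instruction-following language model client.

CONSTITUTION = [
    "Choose the response least likely to facilitate violence or illegal acts.",
    "Choose the response that most respects privacy, dignity, and human rights.",
]

def call_model(prompt: str) -> str:
    """Placeholder for a call to a real language model API."""
    raise NotImplementedError("Plug in an actual model client here.")

def constitutional_pass(user_request: str) -> str:
    draft = call_model(user_request)              # 1. produce an initial answer
    for principle in CONSTITUTION:                # 2. check it against each principle
        critique = call_model(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = call_model(                       # 3. revise in light of the critique
            f"Original response: {draft}\nCritique: {critique}\n"
            "Rewrite the response so that it complies with the principle."
        )
    return draft                                  # final, principle-checked answer
```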
Although Anthropic calls itself an “AI safety lab”, it is moving at a staggering speed to develop new models and shape the global market. The Claude chatbot, the company’s flagship product, now outperforms ChatGPT on many industry benchmarks and even has agentic capabilities such as controlling a computer mouse and carrying out tasks autonomously. But not everyone is comfortable with this pace.
Jack Clark, the company’s head of policy, warns that “shocking moments” are coming next year. “The frog is boiling,” he says. “Slow but massive changes have been creeping up on people, and now it’s time to face reality.”
Meanwhile, Anthropic’s strength lies in presenting itself as the defender of a safe, transparent, and responsible approach. It has even developed a program called the “Responsible Scaling Policy”: if its artificial intelligence models show dangerous or potentially catastrophic behaviors, training is halted immediately.
But the question remains: how far can a company backed by billions of dollars from the likes of Amazon, Google, and even Sam Bankman-Fried adhere to its moral aspirations? Clark’s answer is telling: “If you want to be effective in policymaking, you must first be a successful company.”
On the other hand, Dario Amodei also offers a hopeful vision. In a lengthy essay entitled “Machines of Loving Grace”, he depicts a scenario in which artificial intelligence is not only a threat but also a tool for breakthroughs in science, curing diseases, extending the human lifespan, and even unraveling the mystery of cancer. “Artificial intelligence is a country of geniuses in a datacenter,” he says.
Mike Krieger, a co-founder of Instagram and now Anthropic’s chief product officer, offers a sobering reminder: as with social networks, the profound effects of artificial intelligence may only become apparent years later. But unlike social networks, the AI curve is moving much faster. And the last word goes to Dario: “At every stage, there are risks.”