TechCrunch says it has seen correspondence showing that contractors working to improve Google’s Gemini AI capabilities are comparing the tool’s test results with those of Claude.
This has led to the suspicion that Google is using Anthropic’s Claude AI models to improve its own AI chatbot. TechCrunch asked Google whether the company is licensed to use Anthropic’s models, but the tech giant has not responded.
According to Anthropic’s policies, the company’s customers are prohibited from accessing Claude “to build a competing product or service” or to “train competing AI models” without Anthropic’s approval.
Claude’s AI responses are apparently safer than Gemini’s

As tech companies compete to build more advanced AI models, it’s natural to want to compare a model’s results against those of competitors. This comparison, however, is usually done through industry benchmarks rather than by hiring contractors to evaluate outputs against a competing AI’s responses.
The contractors hired to improve Gemini are tasked with evaluating the accuracy of its outputs and must score each answer based on criteria such as honesty and verbosity. According to the leaked correspondence, each contractor has 30 minutes per prompt to determine whether Gemini’s or Claude’s answer is better.
The correspondence also shows that contractors have seen references to Claude’s AI model in an internal platform Google built to compare these models.
Additionally, an internal conversation revealed that contractors believe Claude’s answers emphasize safety more than Gemini’s. One contractor reportedly said that Anthropic’s safety settings are the strictest among AI models, and that Claude has declined to respond to unsafe requests in certain cases.