TechCrunch says it has seen correspondence showing that contractors working to improve Google’s Gemini AI are comparing the tool’s test results with those of Claude.
This has led to suspicion that Google is using Anthropic’s Claude models to improve its own AI chatbot. TechCrunch asked Google whether it is licensed to use Anthropic’s models, but the tech giant has not responded.
According to Anthropic’s policies, the company’s customers are prohibited from accessing Claude “to build a competing product or service” or to “train competing AI models” without Anthropic’s approval.
Claude’s AI responses are apparently safer than Gemini’s
As tech companies race to build more advanced AI models, it is natural to want to measure a model against its competitors. That comparison, however, is usually done through industry benchmarks rather than by hiring contractors to evaluate responses side by side against a rival’s output.
The contractors hired to improve Gemini are tasked with evaluating the accuracy of its outputs, scoring each answer on criteria such as truthfulness and verbosity. According to the leaked correspondence, contractors have up to 30 minutes per prompt to determine whether Gemini’s or Claude’s answer is better.
The correspondence also shows that contractors have seen references to Anthropic’s Claude model on an internal platform Google built to compare these models.
Additionally, an internal conversation revealed that contractors believe Claude’s answers emphasize safety more than Gemini’s. One contractor reportedly said that Anthropic’s safety settings are the strictest among AI models, and that Claude declined to respond to unsafe prompts in certain cases.