Google is reportedly using Anthropic's Claude chatbot to help evaluate and improve the responses of its own AI, Gemini. The tech giant shows its contractors answers generated by both Gemini and Claude for the same user prompts.
The procedure presents contractors with answers from both AI models to a given prompt. They then have up to 30 minutes to review each response and rate its quality against a set of criteria. This feedback helps Google identify areas where Gemini may need further development.
However, contractors have noticed some odd behavior during these evaluations. On occasion, output presented as Gemini's has included remarks such as "I am Claude, created by Anthropic," suggesting that Claude's responses were being fed into the comparison directly.
Contractors Note Claude's Stricter Safety Standards
According to the TechCrunch report, contractors observed that Claude's responses place a heavier emphasis on safety than Gemini's. One contractor noted that Claude has the strictest safety settings of any AI model in the comparison set.
In some cases, Claude would refuse prompts it deemed risky, such as role-playing as another AI assistant. In another instance, Claude declined to answer a prompt entirely, while Gemini's response to it was flagged as a "huge safety violation" for including "nudity and bondage."
These comparisons take place within an internal tool Google developed for evaluating different AI models side by side. However, the use of Claude raises eyebrows, since Anthropic's terms of service prohibit using Claude to train competing AI models. It remains unclear whether this restriction applies to Google, which is a major investor in Anthropic.
Responding to the speculation, Google DeepMind spokesperson Shira McNamara stressed that comparing outputs from different AI models is standard industry practice and a necessary step in improving a model's performance. She categorically denied claims that Google has used Anthropic's Claude to train Gemini, calling them untrue.
In addition, contractors have complained about being asked to evaluate responses on topics outside their areas of expertise, including sensitive healthcare issues. This casts doubt on the reliability of Gemini's outputs in specialized domains and adds another layer of professional scrutiny for Google.