Google is facing a lawsuit from anti-diversity and anti-corporate activist Robbie Starbuck, who claims that Google’s artificial intelligence falsely linked him to allegations of sexism and extremism. Unlike Meta, which settled a similar case with him out of court, Google has decided to fight this lawsuit in court.
According to reports, the company has argued in legal documents that it should not be held liable for content generated by artificial intelligence models that may be hallucinating, and it is seeking to have the lawsuit dismissed. The case began when Robbie Starbuck sued Google, claiming that its artificial intelligence linked him to sexual accusations and extremist tendencies.
Lawsuit against Google over AI statements
Starbuck previously filed a similar lawsuit against Meta, claiming that Meta’s artificial intelligence mistakenly identified him as someone who was present during the January 6 riot at the US Capitol. Meta, however, took a completely different path, settling the case in August and even hiring him as a consultant to address political and ideological biases in its AI systems.
According to a Wall Street Journal report, no American court has ever awarded damages over the statements of an artificial intelligence chatbot, and this case could become one of the first to define the boundaries of companies’ legal responsibility for content produced by artificial intelligence.


In his new case, Starbuck is seeking 15 million dollars in damages from Google. Google has responded that his claim essentially reflects his own misuse of developer tools and his prompting of the chatbot to produce hallucinations. Google also says Starbuck never explained what prompts he entered or whether anyone was actually affected by the comments. Starbuck has so far not responded to a request for further clarification.
Although Google could have ended the case with a simple out-of-court agreement, it has chosen a different path, preferring that everything be examined formally in court under a judge’s supervision. This decision could be a turning point in defining the limits of companies’ responsibility for the behavior and outputs of artificial intelligence, and may even set a new standard for determining to what extent intelligent models should be monitored and held accountable.
The current case against Google is not just an ordinary lawsuit but a direct confrontation between the law and emerging technology, raising questions of personal privacy, the social consequences of algorithms, and the shortcomings of existing law all at once. The outcome of this process could redefine the future direction of AI legislation in the US and create a new framework for handling claims attributed to AI systems.