Google is facing a lawsuit from anti-DEI activist Robbie Starbuck, who claims that Google’s artificial intelligence falsely linked him to allegations of sexual misconduct and extremism. Unlike Meta, which settled a similar case with him out of court, Google has decided to fight the lawsuit in court.
According to reports, the company has argued in legal filings that it should not be held liable for content generated by AI models that may hallucinate, and it is seeking to have the lawsuit dismissed.
Lawsuit against Google due to AI statements
Starbuck previously filed a similar lawsuit against Meta, claiming that Meta’s artificial intelligence mistakenly identified him as having been present at the January 6 riot at the US Capitol. Meta took a very different path, settling the case in August and even hiring him as a consultant to address political and ideological bias in its AI systems.
According to a Wall Street Journal report, no American court has ever awarded damages over the statements of an AI chatbot, so this case could become one of the first to define the boundaries of companies’ legal responsibility for AI-generated content.

In the new case, Starbuck is demanding $15 million in damages from Google. Google has responded that his claims essentially reflect his own misuse of developer tools to prompt the chatbot into producing hallucinations. Google also says Starbuck has not explained what prompts he entered or whether anyone was actually influenced by the statements. Starbuck has not responded to a request for further comment.
Although Google could have ended the matter with a simple out-of-court settlement, it has chosen a different path, preferring to have everything examined formally in court under a judge’s supervision. The decision could prove a turning point in defining the limits of companies’ responsibility for the behavior and outputs of artificial intelligence, and it may even set a new standard for how closely AI models must be monitored and held accountable.
For Google, this is not just an ordinary lawsuit but a direct confrontation between the law and an emerging technology, raising questions about personal reputation, the social consequences of algorithms, and the gaps in existing law all at once. Its outcome could shape the future direction of AI legislation in the US and establish a new framework for handling claims arising from AI systems.