As Google continues to refine its AI chatbot Bard, parent company Alphabet has issued new guidelines warning its employees to be cautious with AI chatbots, including Bard itself.
According to Gizmodo, the internet search giant has advised its employees not to enter confidential information into chatbots such as ChatGPT or Bard. Alphabet is reportedly concerned because human reviewers on the other side may read those inputs, and the chatbots may also use past conversations as training data, meaning sensitive information could be retained and resurface later. Samsung confirmed last month that internal company information was exposed after its employees used ChatGPT.
In January, an Amazon lawyer asked the company's employees not to share their code with ChatGPT. According to screenshots of Slack messages, the lawyer specifically instructed employees not to provide any of Amazon's confidential information to the chatbot.
Apple issued a similar order to its employees last month. Documents obtained by The Wall Street Journal show that Apple has banned its employees from using ChatGPT and GitHub's Copilot. Some sources say that Apple, like other technology giants, wants to develop its own large language model, and to that end it acquired two artificial intelligence startups in 2020 for $200 million and $50 million respectively.
In March, Google introduced Bard, a competitor to ChatGPT. The chatbot is powered by LaMDA, Google's in-house artificial intelligence engine. It was previously revealed that Sundar Pichai, the CEO of the internet search giant, had asked employees across all departments of the company to use Bard for two to four hours a day. Google delayed Bard's release in the European Union after Irish regulators raised privacy concerns; the Irish Data Protection Commission claims that Google and Bard do not comply with the country's data protection regulations.
RCO NEWS