Amazon has formed a core team dedicated to developing its most advanced large language models, the internet retail giant's biggest and most ambitious artificial intelligence project to date, according to Hoshio.
According to a source familiar with the technology's development, the new group, which will work on large language models (LLMs), is led by Rohit Prasad, the chief scientist of Amazon's Alexa digital assistant. The news was first reported by Insider.
Amazon thus joins the ranks of other US tech giants actively pursuing AI research, applications and platforms, a group that already includes Google, Microsoft and Meta (formerly Facebook).
Amazon CEO Andy Jassy announced in an email that Prasad, Alexa's chief scientist, and his team will be responsible for developing Amazon's most extensive large language model, acting as the company's "core team," and that Prasad will report directly to Jassy on the project's progress.
"Although we have already built several large language models in-house and have several more LLMs in development, we want to focus on developing our most ambitious LLMs by bringing together the resources and expertise at Amazon," Jassy wrote.
In early April, Amazon Web Services (AWS) unveiled Bedrock, which provides API access to foundation AI models, including Amazon's Titan models and models from other companies. A month later, Jassy revealed that Amazon is building a "much bigger, more general and more capable" language model for Alexa to enhance the customer experience across its businesses.
It is worth noting that in 2022, Amazon unveiled a 20-billion-parameter language model called Alexa Teacher Model 20B, which supported several different languages. The company's researchers said that this model outperforms OpenAI's GPT-3 on linguistic tasks despite GPT-3's far larger size of 175 billion parameters. (OpenAI's latest model is GPT-4, which is said to have 1.7 trillion parameters.)
RCO NEWS