Amazon has formed a core team to focus on the development of its most challenging and advanced large language models, according to Hoshio. The internet retail giant plans to pursue its biggest and most ambitious AI project to date with the help of this team.
According to an industry source with knowledge of the technology's development, the new group, which will work on developing large language models (LLMs), is led by Rohit Prasad, the chief scientist of Amazon's Alexa digital assistant. The news was first reported by Insider.
Amazon now joins the ranks of other US tech giants actively developing AI research, applications and platforms, a group that already includes companies such as Google, Microsoft and Meta (formerly Facebook).
Amazon CEO Andy Jassy announced in an email that Prasad, Alexa's chief scientist, and his team will act as Amazon's "core team" responsible for developing the company's most extensive large language model, and that Prasad will report directly to Jassy on the project's progress.
"Although we have already built several large language models in-house and have several more LLMs in development, we want to focus on developing our most ambitious LLMs by bringing together the resources and expertise at Amazon," Jassy wrote.
In early April, Amazon Web Services (AWS) unveiled Bedrock, which provides API access to foundation AI models, including Amazon's Titan models and models from other companies. A month later, Jassy revealed that Amazon is building a "much bigger, more general and more capable" language model for Alexa to enhance the customer experience across its businesses.
It is worth noting that in 2022, Amazon unveiled a 20-billion-parameter language model called Alexa Teacher Model 20B, which supported several different languages. The company's researchers said that this model outperformed OpenAI's GPT-3 on linguistic benchmarks despite GPT-3's much larger size of 175 billion parameters. (OpenAI's latest model is GPT-4, which is rumored to have 1.7 trillion parameters.)