The Open Source Initiative (OSI) has published its official definition of open source artificial intelligence. Under this new definition, most AI models from tech giants such as Meta do not qualify as open source. Meta has already responded to the definition.
According to the OSI announcement, for an AI model to be considered truly open source, it must meet the following requirements:
- Provide access to details of the data used to train the AI so that others can recreate it.
- Make available the complete code used to build and run the AI.
- Publish the training settings and weights that the AI uses to produce its results.
The new open source AI definition and its challenge to Meta
OSI’s definition of open source artificial intelligence directly challenges Meta’s Llama models. Meta places special emphasis on the open source nature of its models, calling them the largest open source AI models available. Llama is indeed publicly available for download and use, but it carries restrictions on commercial use (for apps with more than 700 million users) and does not provide access to its training data.
In other words, Meta’s Llama models do not comply with the new OSI standards and cannot be considered open source.
In response to the Open Source Initiative’s definition, a Meta spokesperson told The Verge: “We agree with OSI on many of these issues, but we disagree with this definition.” He argued that there is no single definition of open source artificial intelligence, and that settling on one at all is problematic, because such definitions do not capture the complexities of today’s rapidly evolving AI models.
“We will continue to work with OSI and other industry groups to make artificial intelligence more accessible and accountable, regardless of technical definitions,” the spokesperson added.
For 25 years, many developers have accepted the OSI’s definition of open source software, which lets them build on each other’s work without fear of lawsuits or licensing hassles. Now that artificial intelligence has changed the technology landscape, tech giants face a fundamental choice: embrace these established principles or reject them.
Meta says it restricts access to training data because of safety concerns, but critics posit a simpler motive: minimizing legal liability and protecting the company’s competitive advantage. Many AI models are almost certainly trained on copyrighted material, which may be why Meta opposes the new OSI definition.