The Open Source Initiative (OSI) has published its official definition of open source artificial intelligence. According to this new definition, most of the AI models from tech giants, such as Meta, are not open source. Meta has already responded to the definition.
According to the OSI declaration, for an AI model to be considered truly open source, it must meet the following requirements:
- Provide access to details about the data used to train the AI, so that others can recreate it.
- Make available the complete code used to build and run the artificial intelligence.
- Publish the training settings and weights that the AI uses to produce its results.
The new definition of open source artificial intelligence and its clash with Meta
OSI’s definition of open source artificial intelligence directly challenges Meta’s Llama models. Meta places special emphasis on the open source nature of its models, calling them the largest open source AI models. Llama is indeed publicly available for download and use, but it carries restrictions on commercial use (for apps with more than 700 million users) and does not provide access to its training data.

In other words, Meta’s Llama models do not comply with the new OSI standard and cannot be considered open source.
A Meta spokesperson told The Verge in response to OSI’s definition: “We agree with OSI on many of these issues, but we disagree with this definition.” The spokesperson argued that there is no single definition of open source artificial intelligence, and that defining such a thing at all is problematic, because these definitions fail to capture the complexities of today’s rapidly evolving AI models.
“We will continue to work with OSI and other industry groups to make artificial intelligence more accessible and accountable, regardless of technical definitions,” the Meta spokesperson added.
For 25 years, many developers have accepted OSI’s definition of open source software — developers who want to build on each other’s work without fear of lawsuits or licensing hassles. Now that artificial intelligence has changed the technology landscape, tech giants face a fundamental dilemma: either embrace these established principles or reject them.
Meta says it restricts access to training data because of safety concerns, but critics posit a simpler motive: minimizing legal liability and maintaining the company’s competitive advantage. It is widely understood that many AI models are almost certainly trained on copyrighted material, and this may be why Meta opposes the new OSI definition.