The field of AI is full of words and terms, many of which are very close in meaning; for this reason, it can be hard to tell them apart. In this article, we define the most common words and terms used in artificial intelligence materials, in English alphabetical order, simply and briefly. By reading it, you will understand the basic concepts of artificial intelligence more easily.
A
AGI (Artificial General Intelligence)
Companies are very interested in the idea of artificial general intelligence (AGI), but none can agree on its definition. The term usually refers to hypothetical artificial intelligence systems that can perform a wide range of complex tasks with little human intervention. OpenAI, the developer of the ChatGPT chatbot, goes a step further and defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”, but it is not clear what counts as a “highly autonomous system” or, more importantly, as “economically valuable work”. Some experts in the artificial intelligence industry think we will reach AGI in the next decade, but others believe that AGI systems lie much further in the future, and may never be created at all.
Agent
If generative artificial intelligence was defined by chatbots in roughly its first year of existence, the next stage may be defined by the concept of the “agent”. Perhaps that framing will not prove correct, but at the very least we can say that technology companies have bet on it.
Chatbots like ChatGPT can quickly provide recipes or a list of restaurants, but the hope is that AI agents will buy groceries or make restaurant reservations on your behalf. Such artificial intelligence may be attractive for personal and professional use, but when agents operate completely autonomously, the possibility of error also increases.
Algorithm
An algorithm is a step-by-step process used to solve a problem: you enter data, and you get output according to the logic of that algorithm. Humans have been using algorithms to solve problems for centuries. Some financial analysts spend their entire careers building algorithms that can predict future events and help them make money. Our world runs on these “conventional algorithms”, but recently there has been a movement toward “machine learning”, which builds on these ideas.
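To make the idea concrete, here is a classic step-by-step algorithm, Euclid's method for finding the greatest common divisor of two numbers, written as a short Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat a fixed rule until the answer emerges."""
    while b != 0:          # step 1: check whether we are done
        a, b = b, a % b    # step 2: replace the pair with (b, a mod b)
    return a               # step 3: when b reaches 0, a holds the answer

print(gcd(48, 18))  # -> 6
```

The same structure (defined inputs, fixed rules, a guaranteed output) underlies every algorithm, whether it sorts a list or ranks a search result.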
Alignment
Some AI companies have focused on solving the alignment problem to prevent AI from spiraling out of control. Some of these companies also want to make sure that AI is built to act in accordance with core human values. The problem is that there is no agreement on the nature of these values, nor on the powers artificial intelligence systems should have.
Artificial intelligence
Artificial intelligence is a broad term that has been used so much that it has lost some of its meaning. Strictly speaking, however, it refers to technology that models human intelligence and can perform a set of tasks that would otherwise require human intervention. Computer scientist John McCarthy coined the term in the 1950s, but AI technology didn’t really take off until this century, when tech giants like Google, Facebook’s parent company Meta, and Microsoft combined massive computing power with deep troves of user data. Although AI can demonstrate human capabilities in data processing or conversation, the machines equipped with it still do not “understand” what they are doing or saying; they still rely primarily on algorithms.
B
Benchmarks
With the market for AI services growing, technology companies usually point to a set of benchmarks to show that their software is better than the competition, but there is still no independent, standardized test that AI companies use to compare the performance of their software. Some AI experts are trying to solve this problem. For now, companies typically design their own benchmarks to show how well their service answers questions about algebra, reading comprehension, and coding.
C
Chatbots
Before the advent of generative artificial intelligence, chatbots were mainly used to provide scripted online customer service. In the new era, AI chatbots can hold dynamic conversations with humans on a wide range of topics, from historical facts to recipes. In the future, chatbots will likely become even more useful and conversational as companies like OpenAI and Google invest in more advanced models, and perhaps their designers will come closer to a long-standing goal of the AI field: an all-in-one virtual personal assistant.
Claude
Claude is one of the few services that can genuinely compete with OpenAI’s most advanced technology. The chatbot was designed and built by Anthropic, a startup founded by a group of former OpenAI employees whose main goal is to promote the safe development of artificial intelligence. Like ChatGPT, Claude can quickly answer a wide range of user questions, but unlike OpenAI, Anthropic has so far avoided building some AI capabilities such as image generation. According to company officials, Anthropic’s goal is to make products designed primarily for commercial use.
Computer vision
Computer vision is a branch of artificial intelligence that allows computers to scan visual information such as images and videos and to identify and classify objects and people. These systems can react to what they see and take, or recommend, specific actions. The technology is used to track wildlife for conservation and to guide self-driving vehicles, but there are concerns about its use in military and police operations, because such systems have been shown to exhibit racial bias and to lack the accuracy needed for reliable identification.
E
Emergent behaviors
When large language models reach a certain scale, they sometimes begin to exhibit abilities that seem to have no apparent source: abilities that were neither expected nor identified by their trainers. Examples include generating executable computer code, telling strange stories, and identifying movies from strings of emoji rather than explicit clues.
F
Fine Tuning
Think of fine-tuning as a technical term for customization. With fine-tuning, the user takes an existing AI model and trains it further with additional information about a specific task or domain. This can help the model work to the user’s needs; for example, a company that sells exercise equipment might fine-tune an AI model to better answer questions about the proper maintenance of a stationary bike.
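As a loose illustration of the idea (a toy sketch, not any company's actual training code), the snippet below starts from "pretrained" weights for a model as simple as y = w*x + b and continues training on a handful of task-specific examples; real fine-tuning applies the same principle to models with billions of parameters. All names here are invented for the example.

```python
# Toy fine-tuning: continue gradient descent from existing weights
# on a small, domain-specific dataset.

def fine_tune(w, b, data, lr=0.01, epochs=500):
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            w -= lr * err * x   # gradient step on the weight
            b -= lr * err       # gradient step on the bias
    return w, b

# Assumed "pretrained" weights, then a few new domain examples
# that happen to follow y = 2x + 1.
w0, b0 = 1.0, 0.0
domain_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = fine_tune(w0, b0, domain_data)
print(w, b)  # close to 2 and 1 after adaptation
```

The pretrained starting point is what makes this fine-tuning rather than training from scratch: far less new data is needed to adapt the model.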
Frontier models (advanced models)
Frontier models are the latest and most advanced AI models available on the market. Currently, the companies behind these models include OpenAI, Anthropic, Google, and Meta. All of them are members of a group called the Frontier Model Forum, which collaborates with academics and policymakers to develop advanced artificial intelligence systems responsibly. The cost of developing these advanced models is expected to increase significantly, making it harder for startups to compete against large tech companies.
G
Gemini
Google, once the leader of the competition in the field of artificial intelligence, is now trying to keep up with OpenAI. Gemini is Google’s flagship chatbot, and its family of artificial intelligence models is known by the same name; it is the main focus of Google’s AI efforts. The most advanced version, Gemini Ultra, is designed for complex programming and mathematical reasoning tasks, much like the most advanced version of OpenAI’s technology. Google has built multimodal capabilities into Gemini, so, for example, the model can analyze a picture of a dish and produce its recipe.
Generative AI
The term generative artificial intelligence refers to AI that produces content (images, articles, songs, and more) in response to simple questions or commands. The field includes tools like OpenAI’s DALL-E, which can create complex and detailed images in seconds, and Suno, which generates music from text descriptions. Generative artificial intelligence creates new work after training on large amounts of existing data; in some cases, this has led to legal claims by copyright owners who say their works were used without permission.
GPT
GPT stands for generative pre-trained transformer, a kind of large language model. “Transformer” refers to a system that can take input sequences and process them so that the context and the order of the words are understood together, not separately. This matters greatly in language translation; for example, if the correct order, syntax, and meaning are not handled, the sentence “His dog, Poppy, ate in the kitchen” may be translated into the French equivalent of “Poppy ate his dog in the kitchen”.
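As a rough illustration of that idea (a bare-bones sketch, not a production transformer), the following snippet implements the attention step at the heart of the architecture, in which every position weighs and mixes information from the whole input at once rather than reading it left to right:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Mix all values, weighted by how well each key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy 2-d "embeddings" for a three-word sequence. The output for the
# first word blends information from every word in the sequence.
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(vecs[0], vecs, vecs)
print(out)
```

Because every output position sees the entire sequence, a transformer can keep track of who ate what in "His dog, Poppy, ate in the kitchen" instead of processing the words in isolation.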
Grok
At first glance, it would be easy to dismiss Grok as a frivolous effort. The chatbot, created by Elon Musk’s artificial intelligence startup xAI and available to subscribers on the X social network, has made headlines for its reckless responses and for producing controversial images with minimal restrictions. But xAI has raised billions of dollars in funding for this chatbot, assembled a talented team, and gained access to a large amount of X users’ data that it can use to build its artificial intelligence products. For these reasons, Grok has established itself in a very short time as a real competitor to the bigger chatbots.
H
Hallucination
When an artificial intelligence service like ChatGPT produces something seemingly convincing but completely fabricated, we are seeing the phenomenon known as hallucination. It arises when the system lacks a correct answer to the question: it knows what a good answer should look like and provides one in place of the truth. Experts worry about AI’s inability to say “I don’t know” when answering; the problem can lead to costly mistakes, dangerous misunderstandings, and the spread of misinformation. Some AI companies claim to have improved the accuracy of their services with newer models; for example, some chatbots have been redesigned to spend more time reasoning before answering requests. Even so, the hallucination problem remains.
L
Large language models
Large language models, or LLMs, are very large neural networks trained on vast amounts of text and data, including e-books, news articles, and Wikipedia pages. With billions of parameters to learn, language models are the backbone of natural language processing technology that can recognize, summarize, translate, predict, and generate text.
Llama
Meta has invested heavily in Llama, a family of advanced artificial intelligence models that are freely available for developers to use. With this approach, Meta hopes Llama will not only power its own chatbot, Meta AI, but also become the foundation for a long list of AI products from other companies; that outcome could place Meta and Llama at the core of the artificial intelligence ecosystem.
M
Machine learning
Machine learning is the process of gradually improving algorithms (sets of instructions for achieving a specific result) by exposing them to large amounts of data. By examining many inputs and outputs, a computer can “learn” without necessarily receiving explicit programming; for example, the iPhone photo application does not know what you look like at first, but after you tag your face in different environments for a while, it acquires the ability to recognize you.
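A minimal illustration of learning from examples (a toy nearest-neighbour classifier, not Apple's actual software): the program is never told what distinguishes a "cat" point from a "dog" point; it generalizes from labelled examples alone.

```python
# 1-nearest-neighbour classification: label a new point the same way
# as the closest example we have already seen.

def nearest_label(examples, point):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(examples, key=lambda ex: dist2(ex[0], point))[1]

# Labelled examples: two clusters of 2-d feature points.
examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((5.2, 4.9), "dog")]

print(nearest_label(examples, (1.1, 1.0)))  # near the "cat" cluster
```

Feed it more tagged examples and its answers improve, which is exactly the "learning" in machine learning: no rule for cat-ness was ever written down.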
Model collapse
Researchers have found that AI models ultimately underperform when trained on data containing AI-generated content, an increasingly likely scenario given how much such content now circulates online.
According to some experts, if AI models are over-trained on AI-generated content, they may even collapse entirely, a prospect that deeply concerns them. A 2023 study on model collapse showed that AI-generated images of humans became increasingly distorted after the model was retrained on data it had produced itself, even with small amounts of that data.
Multimodal
AI companies are increasingly focusing on “multimodal” systems that can process and respond to a variety of inputs, including text, images, and audio. For example, you might talk to a chatbot and hear its answers, or show it a picture of a math problem and ask for a solution. Multimodal systems not only broaden the range of artificial intelligence products, but also create a more realistic feeling of conversing with a digital assistant.
N
Natural language processing (NLP)
This branch of AI helps computers understand, process, and produce speech and text much as humans do. Natural language processing relies on machine learning algorithms to extract data from text, translate languages, recognize handwritten words, and understand content and meaning. It is the underlying technology of virtual assistants such as Siri and Alexa, enabling them to understand requests and respond in natural, human-like language.
Natural language processing can also recognize emotions in text; that’s why, if you tell Siri “I’m sad”, it might suggest calling a friend. Other everyday uses of this technology include spam email filtering, web searching, spell checking, and text prediction.
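As a deliberately simplified sketch (real systems use learned models rather than hand-written word lists), the idea of reading emotion from text can be illustrated like this:

```python
# Toy rule-based sentiment detection: count emotionally charged words.
# Production NLP uses machine-learned models; this only shows the idea
# of mapping words in text to an emotional signal.

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am sad today"))      # -> negative
print(sentiment("What a wonderful day"))  # -> positive
```

A learned model replaces the fixed word lists with patterns inferred from millions of labelled sentences, which is what lets assistants handle sarcasm, negation, and context far better than this sketch can.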
Neural networks
These networks are a type of artificial intelligence in which a computer learns through trial and error, in a way loosely similar to the human brain: the success or failure of each attempt shapes its subsequent attempts and adaptations, much as a child’s brain forms neural connections based on what it is taught. The process can require millions of attempts to reach mastery, which is why AI platforms need so much computer processing power.
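The trial-and-error loop can be sketched with the smallest possible network: a single artificial neuron that learns the logical AND function by nudging its weights after every mistake.

```python
# A single neuron learns AND: after each attempt, the error signal
# (success or failure) adjusts the weights for the next attempt.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # how wrong was this try?
            w[0] += lr * err * x1        # adapt based on the outcome
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])  # -> [0, 0, 0, 1]
```

Modern networks stack millions of such neurons and repeat this adjust-on-error loop at enormous scale, which is where the demand for processing power comes from.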
O
Open Source
One of the key disagreements in the AI industry (and among those who seek to regulate it) is whether to build open or closed models. Although some use the term loosely, “open” refers to the idea of open-source models: models whose developers make the source code freely available to the public so that anyone can use or modify it. The definition of open source comes from the non-profit Open Source Initiative, which holds that truly open-source software must meet certain conditions for distribution and access.
P
Parameters
When an AI company releases a new model, one of the key figures it points to in order to differentiate its product is the number of parameters. The term refers to the total number of variables the model acquires during the training process, and it represents the actual size of a large language model. The figures are staggering; for example, Meta’s Llama model comes in three sizes, the largest of which has approximately 400 billion parameters.
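The bookkeeping behind such figures is simple. For a toy fully connected network (an illustrative sketch, not how any particular company reports its numbers), each layer contributes inputs x outputs weights plus one bias per output, and the same arithmetic, scaled up, yields the billions quoted for LLMs:

```python
# Count the learnable variables (parameters) of a small dense network.

def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out   # weights + biases per layer
    return total

# A small image classifier: 784 inputs -> 128 hidden units -> 10 outputs.
print(count_parameters([784, 128, 10]))  # -> 101770
```

Every one of those variables is tuned during training, which is why parameter count serves as a rough proxy for both a model's capacity and its training cost.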
Prompt
The experience of using today’s AI tools usually starts with a prompt. Basically, any question or request a user types is a prompt. Prompts can include asking an AI chatbot to summarize a document, offer suggestions for home renovations, or write a poem about falling in love with blueberry muffins.
Prompt Engineering
The accuracy and usefulness of an AI platform’s responses depend largely on the quality of the instructions it is given. Prompt engineers optimize natural-language instructions to produce high-quality output with minimal computational cost.
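As an illustration (the template and function names here are invented, not tied to any particular AI service), a prompt can be engineered by adding a role, constraints, and worked examples, a pattern often called few-shot prompting:

```python
# Build a structured few-shot prompt: role, format constraint,
# worked examples, then the actual query.

def build_prompt(task, examples, query):
    lines = [f"You are an assistant that {task}.",
             "Follow the format of the examples exactly.", ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    task="converts city names to their countries",
    examples=[("Paris", "France"), ("Tokyo", "Japan")],
    query="Nairobi",
)
print(prompt)
```

The same question asked bare often yields rambling or mis-formatted answers; the engineered version constrains the model toward the output the user actually wants.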
R
Reasoning
In September 2024, OpenAI introduced a new model that can perform some reasoning tasks much as humans do, such as answering more complex math problems and coding. Essentially, the updated AI system spends more time computing before responding to the user, which lets it work through multi-step problems more effectively. Google and Anthropic are also developing reasoning skills in their advanced AI models.
S
Small Models
After years of racing to build bigger models, some AI experts have concluded that bigger isn’t always better. OpenAI, Google, Meta, and other companies have released smaller models: more compact, faster software than their flagship large language models. Such models may not be as capable as their larger counterparts, but they can be a more efficient and cost-effective option for customers.
Sentient AI
Most researchers believe that we are years away from conscious artificial intelligence (intelligence capable of understanding and reflecting on the world around it). Although AI can exhibit some human-like abilities, machines still do not “understand” what they do or say; they simply find patterns in the vast amount of information generated by humans and derive formulas for how to respond to commands. It may also be difficult to tell when AI sentience becomes reality, because there is still no widespread agreement on what consciousness is.
Synthetic Data
Some tech companies, searching for more data to develop the large language models that power AI chatbots, are experimenting with synthetic data: they use their AI systems to generate text and other content, then use that data to train new models. The advantage of this method is that it sidesteps some legal and ethical concerns about the source of training data, but there may be a catch: some experts worry that it could degrade the performance of AI systems, a phenomenon known as “model collapse”.
T
Training data
AI companies collect or license large amounts of data to develop or train AI models that can generate text, images, music, and other content in response to user questions. These companies typically provide little information about the exact training data they rely on, but the data used to train an AI chatbot may include articles, books, online reviews, and social media posts. Officials at Suno, an AI music-creation company, said their software was trained on “tens of millions of recordings” and that some of these works may be copyrighted.