February 13, 1402 at 09:41
According to the advice of experts, it is better to be careful about the information we give when using GPT chat, which is one of the most used tools of Internet users.
GPT Chat is one of the artificial intelligence chatbots designed and developed by OpenAI. This chatbot has many fans among Internet users due to its ease of use and provision of various tools such as translation and text summarization. The company that created this chatbot says that in the design of this platform, all privacy laws have been observed and users can use it with confidence. But experts have another recommendation and inform about the risk of misuse of our information when using this chatbot. In the following, we will examine this claim of experts and other important recommendations.
GPT chat puts you at risk
It's been a while since OpenAI unveiled its GPT store. The use of this store for internet users is that they can design their favorite chatbot using JPT chat. The built chatbot will also be based on the same GPT chat; But the difference is that these chatbots can be equipped with various other professional tools that will be efficient for professional users.
Research shows that GPTs created under certain circumstances can reveal their programming algorithms and how they were built. These customizable chatbots do not have the ability to fully protect the information of their users and, according to the cases seen, by asking some questions, they have revealed data about how they were made, including the source used.
According to Alex Polyakov, CEO of Adversa AI (a network security company), many of the people who are currently building chatbots are ordinary Internet users who are not really aware of their low security due to their trust in OpenAI. Sam Altman, the CEO of OpenAI, disagrees with these statements and in a recent interview, he mentioned his company's vision for the future of technology that revolves around GPTs and asked all Internet users to design their own chatbot. But recently, after hearing the news about the security vulnerabilities of these chatbots, internet users refuse to build and design these GPTs.
Security vulnerability
The security vulnerabilities that are raised are related to a disorder called prompt leaking, in which other users of a chatbot can reveal how the chatbot is made by asking basic and specific questions. Leaking how to build a chatbot is one of the main reasons for preventing users from building a personal GPT, and if this process continues, it will definitely be a heavy financial loss for OpenAI.
In the following, we will introduce reasons that show that creating a personal GPT chat is not worth the risk.
Ability to copy GPT chat; Preventing making money from artificial intelligence chatbot
If you think that with a little programming knowledge, you can create a very efficient and useful chatbot for various industries and you are thinking of making money, we must say that you are very wrong. Hackers are lurking like this and after you finish building your chatbot, you will find your GPT hacked and its entire instructions copied. This copying feature is due to the inherent low security of these chatbots and there is nothing you can do about it. This is the first vulnerability that Alex Polyakov's team has discovered, and they found that hackers could completely copy the instructions of a custom GPT.
Any sensitive data you upload to your chatbot is at risk of being exposed
The second vulnerability Poliakov mentions is another quick leak where by asking some specific questions, important information and data loaded into this GPT can be revealed. For example, imagine a company that has launched a chatbot to improve the knowledge level of its private sector employees and uploaded sensitive data about its business. Asking tricky and strategic questions is the way to get at this loaded data.
This vulnerability shows that people who try to build a personal chatbot should not upload any sensitive and personal data to this chatbot. The result of this vulnerability, if not fixed, will be that no software developer will use these chatbots for their activities.
OpenAI seeks to fix security vulnerabilities in chatbots
Artificial intelligence chatbots have had security problems since the first days of design, and social networks have also had a full media reflection of these shortcomings. Users have asked the GPT chat with tricky and indirect questions, which normally, this chatbot is not allowed to answer such questions.
OpenAI's security department is always finding security vulnerabilities in its chatbots and fixing them. But the thing that exists is related to hackers who use even the narrowest ways to find their way into the security system and create flaws. Because of this, it may take several years for OpenAI to fix all the flaws and security vulnerabilities of its platforms.
The vulnerabilities that the Adversa AI team found in the company's GPT chatbots could cause major problems for Altman and the OpenAI design team. Users of these chatbots expect quick response and performance instead of this company has found problems. The advice we can have for all users is to avoid uploading and publishing your sensitive information as much as possible, especially in personal chatbots.
You can also tell us in the comments section what information you have shared with GPT Chat and if you have ever noticed a security flaw in it.
RCO NEWS