OpenAI has announced that artificial intelligence browsers are always vulnerable to prompt injection attacks Prompt Injection and this threat, even with extensive efforts to increase browser security ChatGPT AtlasIt probably won’t go away completely.
This type of attack involves manipulating AI agents to execute malicious instructions, usually hidden in web pages or emails, and shows that the safe use of AI agents in the open environment of the Internet is still fraught with serious problems. A new OpenAI blog post emphasizes that “prompt injection, like scams and social engineering on the web, will likely never be fully resolved” and that enabling Agent Mode in ChatGPT Atlas raises the level of security threats.
OpenAI says the threat of prompt injection is ever-present
The ChatGPT Atlas browser was released in October, and security researchers quickly demonstrated that even typing a few words into Google Docs could change the browser’s behavior. That same day, Brave warned that indirect prompt injection was a systemic problem for AI browsers like Perplexity Comet. The UK Cyber Security Center has also stated that these attacks may never be fully mitigated, leaving websites exposed to data leaks, so it is recommended to focus on reducing the effects and risks, rather than stopping the attacks altogether.

OpenAI considers this to be an ongoing security challenge and has emphasized that continuous strengthening of defense systems is essential. The company’s solution includes a rapid and proactive response cycle that seeks to identify new cyber attack strategies before they occur within the company. One of OpenAI’s main tools is the “LLM-based Auto-Attacker”, which is trained with reinforcement learning and plays the role of a hacker who tries to pass malicious instructions to the AI agent. The tool is able to test the attack first in a test simulation, examine the target agent’s reaction, modify the attack, and re-execute it to identify weaknesses faster than a real attacker.
Additionally, OpenAI advises users to give AI agents clear instructions and limit full access to email or sensitive data, as too much leeway can expose the agent to malicious content even with security mechanisms in place. Although it is difficult to fully protect against prompt injection attacks, OpenAI tries to strengthen the security of systems before real attacks occur with extensive testing and rapid update cycles. Analysts warn that agent-based browsers still don’t offer enough value to justify the risk of accessing sensitive data, and balancing the pros and cons remains a real challenge.
RCO NEWS


