OpenAI Warns AI Browsers May Never Be Fully Secure

OpenAI has acknowledged that AI-powered browsers, including its ChatGPT Atlas browser, may never be completely immune to prompt injection attacks. These attacks, where malicious instructions are embedded in web pages or emails, can manipulate AI agents to perform unintended actions.

In a recent blog post, OpenAI highlighted that ChatGPT Atlas’s “agent mode” increases the attack surface, making complete security extremely difficult. The company likened prompt injection to social engineering scams, noting that this remains a persistent challenge in AI safety.

To mitigate risks, OpenAI recommends users limit the access granted to AI agents and require confirmations for sensitive actions such as sending messages or making payments. Providing specific instructions rather than broad, autonomous authority is also advised, as excessive permissions make agents more vulnerable to manipulation.

While AI browsers offer powerful capabilities, OpenAI cautions that the current security risks may outweigh the benefits for many users. Continuous improvements and defensive measures will be necessary to manage these evolving threats.

More From Author

Samsung Set to Launch New Foldable Phone to Rival Apple’s Foldable iPhone

Bluetooth 6.0 Explained: What It Means for Headphones and Earbuds