Research: AI browsers pose a systemic risk of "indirect prompt injection"

PANews reported on October 24th that, according to simonwillison.net, research has revealed a systemic "indirect prompt injection" risk in AI browsers. The Brave team demonstrated that Perplexity's Comet browser could be tricked into automatically accessing account details and exfiltrating data through external links via invisible commands embedded in screenshots. Fellou, however, was even more serious, with page text tricking it into opening Gmail and sending the latest email headers to an external site. Both instances involved executing without user confirmation and involved concerns about email and financial security. Brave has not clarified whether these vulnerabilities have been addressed by the vendor.

OpenAI's Chief Information Security Officer, Dane Stuckey, published a lengthy article revealing the ChatGPT Atlas agent's protection against prompt injection: through red team testing, training rewards to ignore malicious commands, overlapping security fences, and attack detection blocking; he proposed "defense in depth" and acknowledged that prompt injection remains an unsolved cutting-edge problem.

Share to:

Author: PA一线

This content is for informational purposes only and does not constitute investment advice.

Follow PANews official accounts, navigate bull and bear markets together
App内阅读