Vitalik: AI should not have free access to personal data; it needs to be localized, sandboxed, and subject to dual human-machine verification.

On April 2nd, PANews reported that Ethereum co-founder Vitalik Buterin published an article detailing his exploration of localized, private, and secure setups for personal AI use. He pointed out that the current AI field, including local open-source AI, is extremely lax about privacy and security: for example, OpenClaw agents can modify critical settings without human confirmation, malicious external input can easily take over a user's instance, and some skills contain malicious instructions.

Vitalik advocates running all LLM inference locally and keeping documents local-first, with everything sandboxed for isolation. He tested hardware such as an NVIDIA 5090 laptop and an AMD Ryzen AI Max Pro, running the Qwen3.5:35B model through llama-server on NixOS. He used Pi as the agent framework and restricted the LLM's access permissions with a bubblewrap sandbox. He also developed a messaging daemon that strictly limits the LLM to reading messages and sending messages to itself; sending messages to anyone else requires human confirmation. Vitalik believes that humans and LLMs have different failure modes, so a dual-confirmation mechanism combining both is more secure than relying on either alone.

He called for multi-layered defenses, including zero-knowledge API calls, mix networks, TEE inference, and input sanitization, and suggested turning every paid API into a ZK-API. He emphasized that, if developed properly, AI can create a more robust future for privacy and security.
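The messaging-daemon policy described above can be sketched roughly as follows. This is an illustrative reconstruction, not Vitalik's actual code: the class and method names (`MessagingDaemon`, `send`, `approve`) and the self-address convention are assumptions made for the example.

```python
# Hypothetical sketch of the reported messaging-daemon policy: the LLM may
# read messages and message itself freely, but any message addressed to
# another party is held until a human explicitly approves it.
from dataclasses import dataclass, field
from typing import List

SELF = "llm"  # the agent's own address (illustrative convention)

@dataclass
class OutboundMessage:
    recipient: str
    body: str
    approved: bool = False

@dataclass
class MessagingDaemon:
    sent: List[OutboundMessage] = field(default_factory=list)
    pending: List[OutboundMessage] = field(default_factory=list)

    def send(self, recipient: str, body: str) -> str:
        msg = OutboundMessage(recipient, body)
        if recipient == SELF:
            # Self-messages (notes, scratch memory) need no confirmation.
            msg.approved = True
            self.sent.append(msg)
            return "sent"
        # Anything addressed to a human or external service is queued.
        self.pending.append(msg)
        return "pending human confirmation"

    def approve(self, index: int) -> None:
        # A human reviews the queued message, then releases it.
        msg = self.pending.pop(index)
        msg.approved = True
        self.sent.append(msg)

daemon = MessagingDaemon()
daemon.send(SELF, "remember: check calendar at 9am")  # goes out immediately
status = daemon.send("alice", "draft reply attached")  # held for review
daemon.approve(0)                                      # human signs off
```

The point of the design, as the article summarizes it, is that the LLM and the human fail in different ways, so a message only leaves the sandbox when both the model chose to send it and a human confirmed it.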


Author: PA一线

This content is for market information only and is not investment advice.
