Author: 0xResearcher

Manus achieved SOTA (state-of-the-art) results on the GAIA benchmark, outperforming OpenAI's models of the same tier. In other words, it can independently complete complex tasks such as cross-border business negotiations, which involve decomposing contract clauses, predicting strategy, generating solutions, and even coordinating with legal and financial teams. Compared with traditional systems, Manus's advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve its decision-making efficiency and reduce its error rate.

Manus brings the dawn of AGI, but AI safety deserves equal reflection

Even as we marvel at the pace of technological progress, Manus has reignited an industry debate over AI's evolutionary path: will the future be dominated by a single AGI, or coordinated by multi-agent systems (MAS)?

This starts with Manus' design concept, which implies two possibilities:

One is the AGI path: keep raising the intelligence of a single system until it approaches the comprehensive decision-making ability of a human.

The other is the MAS path: act as a super-coordinator that directs thousands of vertical-domain agents working in concert.

On the surface this is a debate about architecture; underneath it is the core tension of AI development: how should efficiency and safety be balanced? The closer a single intelligence gets to AGI, the higher the risk of opaque, black-box decision-making; multi-agent collaboration disperses that risk, but communication latency can cause it to miss critical decision windows.

The evolution of Manus has quietly magnified the inherent risks of AI development:

  • The data-privacy black hole: in medical scenarios, Manus needs real-time access to patients' genomic data; in financial negotiations, it may touch a company's undisclosed financials.
  • The algorithmic-bias trap: in salary negotiations, Manus has given below-market offers to candidates of certain ethnicities; in legal contract review, its error rate on emerging-industry clauses approaches fifty percent.
  • The adversarial-attack vulnerability: hackers can implant specific audio frequencies that cause Manus to misjudge a counterparty's offer range during negotiation.

We have to face an uncomfortable truth about AI systems: the smarter the system, the broader its attack surface.

Security, however, is a word repeated endlessly in Web3. Under Vitalik's impossible-triangle framework (a blockchain network cannot simultaneously achieve security, decentralization, and scalability), a range of security approaches has emerged:

  • Zero Trust Security Model: its core concept is "never trust, always verify" — no device should be trusted by default, whether it sits inside or outside the network perimeter. Every access request undergoes strict authentication and authorization to keep the system secure.
  • Decentralized Identity (DID): a set of identifier standards that lets entities be identified in a verifiable, persistent way without a centralized registry. It enables a new decentralized digital-identity model, often described as self-sovereign identity, and is an important building block of Web3.
  • Fully Homomorphic Encryption (FHE): an advanced encryption technique that allows arbitrary computation on encrypted data without ever decrypting it. A third party can operate on ciphertext, and the decrypted result matches what the same operation would produce on the plaintext. This matters greatly wherever computation is needed without exposing the raw data, such as cloud computing and data outsourcing.
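A toy example makes the "compute on ciphertext" property concrete. The sketch below implements the Paillier cryptosystem, a simpler, *additively* homomorphic relative of FHE (full FHE supports arbitrary computation, and production systems use hardened libraries such as ZAMA's, never hand-rolled code). The tiny primes are for illustration only:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic.
# Multiplying two ciphertexts yields a ciphertext of the SUM of the plaintexts,
# so a third party can aggregate encrypted values without ever seeing them.
p, q = 1789, 1847            # toy primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
g = n + 1                    # standard simple choice of generator
lam = (p - 1) * (q - 1)      # phi(n) variant of the private key
mu = pow(lam, -1, n)         # modular inverse of lam mod n (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42   # Enc(20) * Enc(22) decrypts to 20 + 22
```

Note the key property: whoever multiplies the ciphertexts learns nothing about 20 or 22; only the holder of the private key (lam, mu) can decrypt the sum.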

Across multiple bull markets, a number of projects have tackled the zero-trust security model and DID. Some succeeded; others drowned in the crypto wave. FHE, the youngest of these techniques, is also a powerful weapon against the security problems of the AI era.

How would FHE solve these problems?

First, at the data level: everything the user enters (including biometrics and voice intonation) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In a medical-diagnosis scenario, for example, a patient's genomic data would be analyzed as ciphertext throughout, so no biological information is ever exposed.

Second, at the algorithm level: with model training performed under FHE, even the developers cannot peer into the AI's decision path.

Third, at the collaboration level: agents communicate using threshold encryption, so compromising a single node does not leak global data. Even in supply-chain attack-and-defense drills, an attacker who infiltrates several agents still cannot assemble a complete view of the business.
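The claim that a single compromised node leaks nothing can be illustrated with Shamir secret sharing, the building block beneath most threshold-encryption schemes. This is a minimal sketch of the idea, not the protocol Manus or any particular project uses:

```python
import random

# Toy (t, n) Shamir secret sharing over a prime field: a secret is split
# across n agents so that any t shares reconstruct it, while t-1 or fewer
# (e.g. a single compromised node) reveal nothing about it.
PRIME = 2**61 - 1  # a Mersenne prime, comfortably larger than the demo secret

def split(secret: int, t: int, n: int) -> list:
    # Random polynomial of degree t-1 whose constant term is the secret;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation evaluated at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, t=3, n=5)       # 5 agents, any 3 can recover
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789
```

Any two shares alone are statistically independent of the secret, which is exactly why breaching one or two agents yields no usable key material.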

Because of the technical barrier, Web3 security may feel remote to most users, yet their interests are inextricably tied to it. In this dark forest, those who do not do their utmost to arm themselves will never shed the role of "leek" (retail exit liquidity).

  • uPort was launched on the Ethereum mainnet in 2017 and is probably the earliest decentralized identity (DID) project to be launched on the mainnet.
  • In terms of the zero-trust security model, NKN launched its mainnet in 2019.
  • Mind Network is the first FHE project to launch on a mainnet, and has taken the lead in partnering with ZAMA, Google, DeepSeek, and others.

I had never even heard of uPort or NKN; it seems security projects really do escape speculators' attention. Let's wait and see whether Mind Network can break this curse and become the leader of the security sector.

The future is already here: the closer AI gets to human intelligence, the more it needs non-human defense systems. The value of FHE lies not only in solving today's problems but in paving the way for the era of strong AI. On the treacherous road to AGI, FHE is not an option; it is a necessity for survival.