Manus brings the dawn of AGI, and AI security is changing again
Author: 0xResearcher
Manus achieved SOTA (state-of-the-art) results on the GAIA benchmark, outperforming OpenAI's models of the same tier. In practical terms, it can independently complete complex tasks such as cross-border business negotiations, which involve decomposing contract terms, anticipating the counterparty's strategy, generating proposals, and even coordinating legal and financial teams. Compared with traditional systems, Manus stands out for its dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a large task into hundreds of executable subtasks, handle many kinds of data, and use reinforcement learning to continuously improve decision-making efficiency and reduce error rates.
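To make "dynamic goal decomposition" more concrete, here is a minimal, hypothetical sketch in Python. It is not Manus's actual architecture; the names (Subtask, plan, execute) and the fixed sub-steps are invented purely for illustration of how a goal can be expanded into executable subtasks.

```python
# Hypothetical sketch of goal decomposition; NOT Manus's real implementation.
from dataclasses import dataclass, field


@dataclass
class Subtask:
    description: str
    done: bool = False
    children: list = field(default_factory=list)


def plan(goal: str) -> Subtask:
    """Toy planner: expand a negotiation goal into fixed sub-steps."""
    root = Subtask(goal)
    for step in ("decompose contract terms",
                 "anticipate counterparty strategy",
                 "generate candidate proposals",
                 "route to legal/finance review"):
        root.children.append(Subtask(step))
    return root


def execute(task: Subtask, depth: int = 0) -> None:
    """Depth-first execution; a real agent would re-plan after each result."""
    print("  " * depth + task.description)
    task.done = True
    for child in task.children:
        execute(child, depth + 1)


execute(plan("negotiate cross-border supply contract"))
```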
Amid this breathtaking pace of technological development, Manus has reignited an industry debate about the evolutionary path of AI: will the future be dominated by AGI (artificial general intelligence), or will MAS (multi-agent systems) take the lead through collaboration?
This goes back to Manus's design concept, which implies two possibilities:
One is the AGI path: keep raising the intelligence of a single agent until it approaches the comprehensive decision-making ability of a human.
The other is the MAS path: act as a super-coordinator that directs thousands of vertical-domain agents to work together on a task.
On the surface we are debating divergent technical paths, but underneath we are debating the fundamental tension in AI development: how should efficiency and security be balanced? As a single agent's intelligence approaches AGI, the risk of black-box decision-making grows; multi-agent collaboration can spread that risk, but communication latency may cause it to miss critical decision windows.
The evolution of Manus inadvertently amplifies the inherent risks of AI development. Consider the data-privacy black hole: in medical scenarios Manus needs real-time access to patients' genomic data, and in financial negotiations it may touch undisclosed corporate financials. Or the algorithmic-bias trap: in hiring negotiations Manus may suggest below-market salaries for candidates of certain ethnicities, and in legal contract review its misjudgment rate on clauses in emerging industries approaches fifty percent. Then there are adversarial-attack vulnerabilities: hackers can implant specific audio frequencies that cause Manus to misjudge the counterparty's price range during a negotiation.
This forces us to confront an uncomfortable truth about AI systems: the smarter the system, the broader its attack surface.
Security, however, is a word the web3 world never stops repeating. Under Vitalik Buterin's "impossible triangle" framework (a blockchain network cannot simultaneously achieve security, decentralization, and scalability), a variety of cryptographic approaches have emerged:
Across several bull markets, zero-trust security models and DID (decentralized identity) attracted a fair number of projects; some succeeded, while others were swallowed by the crypto wave. Fully Homomorphic Encryption (FHE), the youngest of these cryptographic techniques, is also a powerful weapon for the security problems of the AI era: it allows computation to be performed directly on encrypted data.
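As a minimal sketch of the idea of computing on ciphertexts, the example below uses the Paillier scheme via the python-paillier library (phe). Paillier is only partially homomorphic (ciphertext addition and multiplication by a plaintext), whereas real FHE schemes such as CKKS or TFHE support arbitrary computation, but the flow is the same: encrypt, compute without decrypting, and decrypt only the final result. The concrete values are made up.

```python
# Illustrative only: Paillier (partially homomorphic) via python-paillier,
# standing in for a full FHE scheme such as CKKS or TFHE.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Sensitive values are encrypted before they ever reach the AI system.
enc_a = public_key.encrypt(1200.50)   # e.g. a confidential price floor
enc_b = public_key.encrypt(300.25)    # e.g. a confidential margin

# Arithmetic happens directly on ciphertexts; the pipeline never sees plaintext.
enc_sum = enc_a + enc_b          # ciphertext + ciphertext
enc_scaled = enc_a * 1.1         # ciphertext * plaintext scalar

# Only the data owner, holding the private key, can decrypt the results.
print(private_key.decrypt(enc_sum))     # 1500.75
print(private_key.decrypt(enc_scaled))  # ~1320.55
```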
How does FHE solve these problems?
First, at the data level: all user input (including biometric features and voice tone) is processed in encrypted form, and even Manus itself cannot decrypt the original data. In a medical-diagnosis scenario, for example, a patient's genomic data participates in the analysis entirely as ciphertext, so no biological information can leak.
At the algorithm level: with "encrypted model training" enabled by FHE, even the developers cannot peek into the AI's decision-making process.
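A sketch of what "the developer cannot peek" could look like in practice: the client encrypts its features, the model host evaluates a linear model over the ciphertexts without ever seeing the inputs, and only the client can decrypt the score. This again uses Paillier via phe as a stand-in for FHE, and the weights and feature values are invented; genuinely encrypted training or non-linear models would require full FHE (or MPC) machinery.

```python
# Hypothetical two-party flow: the client's data stays encrypted end to end.
from phe import paillier

# --- Client side: generate keys and encrypt private features ---
public_key, private_key = paillier.generate_paillier_keypair()
features = [0.62, 1.35, 0.08]                    # e.g. private risk indicators
enc_features = [public_key.encrypt(x) for x in features]

# --- Model host side: evaluate a plaintext linear model on ciphertexts ---
weights, bias = [2.0, -0.5, 4.0], 0.1            # illustrative model parameters
enc_score = public_key.encrypt(bias)
for w, enc_x in zip(weights, enc_features):
    enc_score = enc_score + enc_x * w            # host never sees the features

# --- Client side: only the key holder can read the result ---
print(private_key.decrypt(enc_score))            # 0.985
```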
At the collaboration level: multiple agents communicate using threshold encryption, so compromising a single node does not leak the global data. Even in supply-chain attack-and-defense drills, an attacker who infiltrates several agents still cannot assemble a complete view of the business.
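To illustrate the threshold idea, here is a self-contained sketch of Shamir's (t, n) secret sharing in Python. The article does not specify the actual multi-agent protocol, so this only demonstrates the core property: any t shares reconstruct the secret, while fewer than t reveal nothing about it.

```python
# Self-contained (t, n) threshold secret sharing sketch (Shamir's scheme).
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret


def split_secret(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]

    def f(x: int) -> int:
        return sum(c * pow(x, power, PRIME) for power, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * -xj) % PRIME
            den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


secret = 20240604
shares = split_secret(secret, n=5, t=3)
# Any 3 of the 5 shares recover the secret; 2 shares alone reveal nothing.
print(reconstruct(shares[:3]) == secret)   # True
print(reconstruct(shares[1:4]) == secret)  # True
```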
Because of these technical barriers, web3 security may not touch most users directly, yet it is tied to their interests in intricate, indirect ways. In this dark forest, if you do not arm yourself to the teeth, you will never shed the role of the "leek", the retail investor who gets harvested.
uPort and NKN are projects even I had never heard of; it seems security-focused projects simply do not excite speculators. Can Mind Network escape this curse and become a leader in the security field? Let's wait and see.
The future is already here. As AI approaches human-level intelligence, it increasingly needs a defense system that is not human. The value of FHE lies not only in solving today's problems but in paving the way for the era of strong AI. On the treacherous road to AGI, FHE is not optional; it is a necessity for survival.