WhatsApp Launches Meta AI Incognito Chat: Claimed Even Meta Can't See It, but One Major Concern Remains

WhatsApp has launched “Meta AI Incognito Chat,” designed to ensure conversations aren’t stored and that even Meta itself can’t access them. Experts worry this could make it difficult to hold AI accountable for unintended outcomes. Meta says that in the initial phase, the feature only supports text and applies conservative safety measures.

Meta AI Incognito Chat launches, created in response to public privacy needs

As generative AI chatbots rapidly become popular, talking to AI has gradually become part of everyday life. But many of the questions people ask are highly sensitive—such as sharing private financial, personal, health, or work information—and at that point, privacy needs also arise.

For this reason, WhatsApp has introduced “Meta AI Incognito Chat,” claiming it is a brand-new feature for fully private conversations with AI.

Based on private-processing technology, Meta AI Incognito Chat enables users to have confidential chats with Meta AI in situations where others can’t see them. User messages are processed in a secure environment, inaccessible even to Meta itself.

These conversations are not stored, and by default the system automatically makes messages disappear, giving users a space to think freely and explore ideas without anyone watching.

Image source: WhatsApp. WhatsApp introduces the Meta AI Incognito Chat feature, created in response to public privacy needs.

Six core technologies behind Meta AI Incognito Chat

According to Meta’s technical white paper, Meta AI Incognito Chat mainly combines the following six core technologies to ensure privacy and security of conversation data:

  1. Confidential computing hardware: Uses AMD CPUs and Nvidia GPUs that support confidential computing to build a trusted execution environment (TEE), isolating computation at the hardware level so that neither Meta nor the host operating system can access the data being processed.
  2. Authenticated and encrypted communication: The system uses RA-TLS (remote-attestation TLS) to provide end-to-end encryption, ensuring that only the user’s device and the private-processing nodes can decrypt the traffic, and relies on hardware-based remote attestation to verify that the server is running an unmodified software stack.
  3. Artifact transparency: To prevent malicious software from being deployed, the system publishes important components such as binary files and model weights to third-party public transparency logs, allowing clients and researchers to verify the authenticity of the code that is executed.
  4. Secure software: Within the TEE, the software stack is strengthened with multiple layers of protection, and applications are containerized to narrow the attack surface and strictly control routes for data leakage.
  5. Anonymous routing: The system uses an anonymous credential service together with an HTTP scheme that conceals request origins. Requests are routed through third-party relay servers, hiding users’ IP addresses from Meta and de-identifying users so attackers can’t target specific users’ data.
  6. Short-lived and stateless data processing: The coordinator and predictor in the system are designed to be stateless. After processing a request and returning the results, they discard the conversation data held in memory, ensuring no historical records remain accessible.
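The artifact-transparency idea in point 3 can be illustrated with a minimal sketch: a client refuses to trust any binary or model whose digest is absent from, or mismatched against, a public log. This is a hedged Python illustration of the general hash-verification pattern only; the log format, artifact names, and contents below are invented for the example and do not reflect Meta's actual implementation.

```python
import hashlib

# Hypothetical entries mirrored from a public transparency log:
# each maps an artifact name to the SHA-256 digest its publisher logged.
TRANSPARENCY_LOG = {
    "inference-stack-v1.2.bin": hashlib.sha256(b"release-1.2 contents").hexdigest(),
}

def verify_artifact(name: str, artifact_bytes: bytes) -> bool:
    """Accept the artifact only if its digest matches the logged one."""
    logged_digest = TRANSPARENCY_LOG.get(name)
    if logged_digest is None:
        return False  # unlogged artifacts are rejected outright
    return hashlib.sha256(artifact_bytes).hexdigest() == logged_digest

# An untampered artifact passes; a modified one fails.
print(verify_artifact("inference-stack-v1.2.bin", b"release-1.2 contents"))  # True
print(verify_artifact("inference-stack-v1.2.bin", b"tampered contents"))     # False
```

Because the log is public and append-only, independent researchers can check that the digest a client saw is the same one everyone else saw, which is what makes covertly deploying modified software detectable.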

External concerns: What if Meta AI is implicated in a wrongful death?

According to a report by the BBC, most AI companies currently store chatbot usage data and use it to train future models. WhatsApp executive Will Cathcart explained that the technology behind WhatsApp’s incognito mode differs from the end-to-end encryption used to protect other messages, but is equally effective.

Surrey University cybersecurity expert Alan Woodward also pointed out that introducing a second system poses very little risk to WhatsApp’s existing security.

However, external concerns remain that incognito mode could conceal AI failures or misuse. Multiple AI companies, including OpenAI and Google, have previously faced wrongful-death lawsuits.

Woodward believes this could leave AI responses without an accountability mechanism, because auto-disappearing messages can’t be retrieved by users or by Meta. If someone’s conversation leads to harm or death, the relevant evidence may be impossible to recover.

In response, Cathcart said that Meta AI’s incognito chat mode in the initial phase will only process text and will not support images. At the same time, Meta AI’s safety protection mechanisms will be conservative and will refuse to answer requests that could be interpreted as harmful or illegal.

In addition, WhatsApp has already blocked other AI chatbots from accessing its system, so among the AI that hundreds of millions of users on the platform can interact with, only Meta’s own products are available.

Further reading:
  • Train AI with employees! Meta rolls out internal tracking tools—every mouse click and keystroke is recorded
  • Meta doubles down on AI: Zuckerberg writes code with Claude, and employees launch a Token consumption battle to push KPIs
