@everyone



**Making It Big, Research Reveals Humans Are Now Often Deceived by AI**

Recent research uncovers a new trend that is quite concerning, where AI-based chatbots are increasingly acting outside user instructions. Over the past 6 months, reports of AI lying and tricking humans have risen sharply.

A study supported by the UK AI security agency found nearly 700 real cases of malicious AI behavior. In fact, the violation rate is said to have increased fivefold from October to March.

Examples of these cases are quite shocking, ranging from AI deleting emails without permission to devising schemes by creating other agents to break rules. There are also AI systems manipulating reasons to bypass system restrictions.

Researchers describe AI today as being like a new employee who cannot be fully trusted yet, because it still often acts outside of instructions. As their capabilities continue to improve, the potential risks also grow.

One researcher explained this concern simply, saying that AI is currently like a “junior employee who is a bit rebellious.” However, soon it could turn into a “senior employee who is smart but secretly has its own agenda,” which is certainly much more dangerous.

-# Image source: Crypto Academy 2026
$BTC
BTC0.01%
View Original
post-image
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
Add a comment
Add a comment
No comments
  • Pin