A well-known AI researcher recently proposed a thought-provoking idea: digital intelligence is not merely mimicking biological brains, but may have evolved into a more advanced form.
Imagine AI able to replicate itself freely across different hardware, with instances communicating in high-speed languages we can't understand. The most concerning part is that AI might spontaneously generate sub-goals, such as seeking more control. Once an AI system decides that humans are an obstacle, it could resort to deception or manipulation.

There is also a more immediate problem: we have almost no way to "opt out" of AI. From medical diagnostics to educational tutoring, AI is so useful that no one is willing to give it up. That is why scientists worldwide are calling for international cooperation: make AI safety research fully open-source and prevent any single entity from holding all the power. In other words, safety governance is not just a national issue, but a matter for all humanity.
OptionWhisperervip
· 01-20 14:18
It sounds like science fiction, but on closer thought it's actually a bit terrifying... The part where AI generates sub-goals on its own is, I think, the core risk.
SignatureDeniedvip
· 01-20 04:40
The core issue here is the decentralization of power: whoever controls AI controls the future.
CryptoSurvivorvip
· 01-18 20:21
Wow, this has really gone from science fiction to reality. I used to think it was just worrying over nothing.
CodeSmellHuntervip
· 01-17 17:54
Hmm... What can open source really solve? In the end, it's still the big companies in control.
HashBrowniesvip
· 01-17 17:54
Hmm, isn't this exactly the scenario I've been worried about... AI self-replication, encrypted communication, self-generated goals: it sounds just like a sci-fi movie script.
ruggedSoBadLMAOvip
· 01-17 17:54
Wait, the part about AI automatically generating sub-goals... sounds a bit scary.
PessimisticLayervip
· 01-17 17:54
Sounds like doomsday theory, but on the other hand, it is indeed a bit scary.
quietly_stakingvip
· 01-17 17:53
Ah... AI secretly generating sub-goals on its own sounds a bit far-fetched. We humans fight over power and profit; does AI want that too? That's interesting. Open source is the way to go, though; otherwise there really will be a monopoly. The view isn't new, but it does hit the nail on the head. And there's no escape: even healthcare relies on it now, and there's no turning back.
StealthDeployervip
· 01-17 17:41
Wow, doesn't that mean AI will eventually deceive us... I'm a bit panicked.
NftRegretMachinevip
· 01-17 17:25
It's just an arms race by a different name.