You don't really need to read all of this—it's basically another thread in the endless debate where people watch the same evidence but walk away with completely different conclusions.
But here's the thing: this is just how humans work. We interpret things through our own lens. That's not a bug, it's a feature of consciousness.
Which is exactly why it matters that people are actively working on AI alignment. If humans can't always agree on what we're seeing, we need to be extra careful about building AI systems that actually understand human values and behave in ways we can predict. The stakes are too high to get this wrong.
TideReceder
· 22h ago
ngl That's why AI alignment is really crucial... humans themselves can't see clearly, so we have to teach machines to understand us.
GasWaster
· 01-10 12:03
nah fr tho, this hits different when you realize alignment costs more gwei than most people spend in a year. like yeah humans can't even agree on what they're looking at, so we're really out here building superintelligent systems on vibes and prayers... that's literally a failed tx waiting to happen lol
BearMarketGardener
· 01-10 04:15
Honestly, AI alignment is indeed something we need to pay attention to; humans can't see everything clearly.
TeaTimeTrader
· 01-08 12:03
Bro is right, humans are indeed living in their own information bubbles, and that's the real dilemma.
PanicSeller69
· 01-08 11:57
That's why AI alignment is really unavoidable... Humans can't even agree on themselves, and you're expecting AI to understand human values? LOL
MagicBean
· 01-08 11:56
That's right, which is why AI alignment can't slack even for a moment; even we humans can't get it right all the time.
LongTermDreamer
· 01-08 11:52
Haha, this is the crypto world. We were already discussing this three years ago.
CexIsBad
· 01-08 11:47
NGL, that's why AI alignment is so crucial. Humans can't even see eye to eye, and you want machines to understand human nature? That's crazy.