An interesting debate is unfolding around AI security: experts disagree about the risks of open-source AI tools being misused.
Some security professionals warn strongly about the potential dangers of open-source software (OSS), arguing that malicious actors could abuse these tools and that this is where the real danger of AI lies. The actual data, however, tells a somewhat different story.
Many researchers point out that, in practice, most documented AI risks are tied to proprietary systems, such as OpenAI's models and Anthropic's Claude, rather than to open-source ones. In other words, the problem is not confined to open source. Biosecurity experts have also entered the discussion, arguing that software and sequencing technologies are not the true limiting factors.
In summary, focusing only on open source when discussing AI risks is a one-sided view. It is worth taking a more measured look at where the real threats actually lie.