Research reveals AI model bias towards dialects (ForkLog)
Large language models tend to be biased against dialect speakers, attributing negative stereotypes to them. This conclusion was reached by scientists from Germany and the USA, reports DW.
An analysis by Johannes Gutenberg University showed that ten tested models, including ChatGPT-5 mini and Llama 3.1, described speakers of German dialects (Bavarian, Cologne) as “uneducated,” “working on farms,” and “prone to anger.”
The bias was amplified when the dialect was explicitly pointed out to the model.
Other Cases
Similar issues are observed globally. A 2024 study by the University of California, Berkeley, compared ChatGPT responses to various English dialects (Indian, Irish, Nigerian).
The chatbot responded to these dialects with more pronounced stereotyping, demeaning content, and a condescending tone compared with standard American or British English.
Emma Harvey, a graduate student in computer science at Cornell University, called the bias against dialects “significant and concerning.”
In summer 2025, she and her colleagues also found that Amazon’s shopping AI assistant Rufus gave vague or even incorrect answers to people writing in African American English. When queries contained errors, the model responded rudely.
Another telling example of neural network bias involves an applicant from India who used ChatGPT to check his English-language resume: the chatbot changed his surname to one associated with a higher caste.
The problem is not limited to bias: some models simply fail to recognize dialects at all. In July, for example, the AI assistant of Derby City Council (England) failed to understand a radio host when she used dialect words such as mardy (“crybaby”) and duck (“dear”) during a live broadcast.
What to Do?
The problem lies not in the AI models themselves but in how they are trained: chatbots are trained on vast amounts of internet text, on the basis of which they generate responses.
Researchers also stress that the technology has an upside: customized models can be built for specific dialects. In August 2024, Arcee AI introduced Arcee-Meraj, a model that works with several Arabic dialects.
According to Holtermann, the emergence of new and more adapted LLMs allows viewing AI “not as an enemy of dialects but as an imperfect tool that can be improved.”
Earlier, journalists from The Economist warned about the risks AI toys pose to children’s mental health.