MIT Professor: Artificial intelligence may replace financial advisors, but a major obstacle remains
Industry insiders say that the investment-management capabilities of AI financial platforms continue to improve, and that in the future they will most likely be able to replace human financial advisors.
However, compared with human advisors, AI has a major shortcoming: it bears no fiduciary duty, and this legal gray area will be difficult to resolve in the short term.
Fiduciary duty is a legally mandated responsibility that professional practitioners such as financial advisors, lawyers, and doctors owe their clients. Its core principle: put the client's interests above your own.
Andrew Lo, a finance professor at the MIT Sloan School of Management and director of the Laboratory for Financial Engineering, said:
“The fundamental problem we need to solve isn't whether AI is professional enough; its financial expertise today is beyond doubt. What's truly missing is fiduciary duty. When AI gives wrong advice, it does not bear consequences equivalent to those required of human advisors.”
Andrew Lo explains that once a human advisor violates fiduciary duty, they face severe accountability, including regulatory penalties, civil damages, and even criminal charges. Without legal responsibility and accountability mechanisms, “prioritizing the client's interests” is an empty phrase.
Unresolved legal challenges
Today, large numbers of people have begun using large language models (ChatGPT, Claude, Gemini, etc.) for investment advice.
A survey by Intuit Credit Karma last September found that among Americans who have used generative AI, 66% have used it to ask investment-related questions; among Millennials and Gen Z, the share is as high as 82%. The survey covered 1,019 adults, about 85% of whom said they would act directly on AI-provided investment advice.
Sebastian Bensol, a senior researcher at the Information Law Institute of New York University’s law school, said:
“People rely on AI to obtain all kinds of investment guidance, but regulators have yet to reach a conclusion. If the advice turns out to be wrong, who is ultimately responsible? Would a company that owes no fiduciary duty truly be worth entrusting with users' life savings? This question remains unanswered to this day.”
A rational view: no blind reliance on AI, and no blind faith in humans
Andrew Lo believes that AI is not without value in financial planning: it can explain obscure financial concepts in plain language and popularize basic financial knowledge such as health insurance, and this kind of general educational advice is relatively reliable.
However, when it comes to personalized, fine-grained financial projections (such as household budgets or tax filings), caution is essential. Large language models are good at producing answers that appear authoritative; even when the content is wrong, they can package it in a way that sounds reasonable and well-justified, easily misleading users.
In addition, AI is actually not good at precise financial calculations. For matters like taxes and customized investment projections, you should not rely on AI alone.
James Burnham, head of legal and public policy at Musk's company xAI, also reminded users on a social platform that content from its AI product Grok cannot be treated as professional tax advice; users must verify it themselves.
Andrew Lo added that human advisors are not absolutely reliable either: not all of them will necessarily adhere to fiduciary principles.
Not all human advisors are bound by fiduciary duty
Legal relationships in the wealth-management industry are very complex. Different practitioners, such as stockbrokers, registered investment advisors, and insurance agents, carry entirely different legal responsibilities.
The Biden administration issued new rules requiring financial intermediaries to fulfill fiduciary duty when recommending that clients move 401(k) retirement funds into an individual retirement account, a recommendation that involves transferring large assets. The Trump administration, however, abandoned defending the rule in court, and it was revoked. Today, most intermediaries making such rollover recommendations need not strictly follow the principle of putting clients first, which invites conflicts of interest; consumers should be especially vigilant.
A researcher at New York University further noted that most leading AI companies are rooted in the United States; if an AI blindly recommends investing in U.S. stocks, this could be deemed self-interested conduct and create conflicts of interest.
Jia-Ying Jiang, an associate professor at the University of Florida's law school who researches AI and fiduciary duty, said that at present AI companies do not charge ordinary users investment-advice fees, so legally they are not recognized as fiduciaries. However, if a licensed financial advisor uses AI to provide improper advice, accountability still falls on the advisor, not on the AI developer.
In the end, Andrew Lo called for improved policy and regulation to establish fiduciary protections for ordinary consumers who rely on AI for investing.
He said that before such laws take effect, people must never hand financial decisions over entirely to AI. In the long run, however, compliant AI and licensed providers offering professional investment services will ultimately become a reality.
Editor in charge: Li Tong