
AI has a big problem when it comes to financial advice: MIT professor


Artificial intelligence platforms are becoming capable enough at financial tasks that they could eventually replace human financial advisors, according to financial experts.

But AI has one major disadvantage compared with human advisors: a lack of fiduciary duty. And a solution to this legal gray area doesn’t seem imminent, they said.

Fiduciary duty is a legal obligation that many financial advisors, along with professionals in other fields such as lawyers and doctors, owe to their clients. It essentially means they must put their clients’ interests ahead of their own.

“The problem we need to solve is not whether AI has enough expertise,” said Andrew Lo, professor of finance at the MIT Sloan School of Management and director of the MIT Laboratory for Financial Engineering. “The answer right now is clearly that AI has the [financial] expertise.”

“What they don’t have is fiduciary duty,” Lo said. “They don’t have the ability to suffer the consequences when they make mistakes the way a human advisor does.”

An advisor who breaches his or her fiduciary duty could face serious consequences, Lo said, including regulatory penalties, civil liability and criminal charges.

The idea of putting a client’s interests ahead of your own, with no legal liability attached, “has no teeth,” he said.

An ‘unresolved’ legal question


According to one survey of 1,019 adults, approximately 85% of respondents who used generative AI for financial advice acted on the advice they received.

“People look to these services for all kinds of advice and they get it, and that seems like a big open question from a regulatory perspective,” said Sebastian Benthall, a senior research fellow at the Information Law Institute at New York University School of Law.

“Who is really responsible and can people really trust a product to do this if it is not backed by a company that has a fiduciary duty?” Benthall said. “It’s really unresolved.”

Why you shouldn’t blindly trust AI, or humans

However, there are some good use cases for AI in financial planning, Lo said.

AI is “really good” at providing resources on a variety of financial concepts that typical consumers may not understand, Lo said. For example, if someone has basic questions about Medicare, AI can generally provide a reliable overview, he said.

But while AI’s outputs are impressive in many areas of finance, Lo said, consumers generally should not blindly trust its answers to questions about their own household finances.

“When it comes to very, very specific calculations regarding your personal situation, that’s where you need to be very, very careful,” he said. “One of the things I find particularly worrying about LLMs is that no matter what you ask it, it always comes back with an answer that sounds authoritative, even if it isn’t.”

In that sense, he said, double- and triple-checking the AI’s answers is “really necessary.”

Perhaps surprisingly, Lo said, AI is not strong at performing financial calculations, so any number-based financial planning question, one involving your taxes, for example, is generally best avoided.

They don’t have the ability to suffer the consequences when they make mistakes the way a human advisor does.

Andrew Lo

professor of finance at the MIT Sloan School of Management and director of the MIT Laboratory for Financial Engineering

James Burnham, legal and government affairs officer at Elon Musk’s xAI, said in a March social media post that Grok, the company’s AI platform, “does not give tax advice, so always confirm it yourself.”

Of course, human financial advisors also give advice to clients, and it is up to the client to decide whether to act on it.

“I guess that’s how I look at LLMs: They can be very, very helpful in presenting different options and explaining how those options might work, but you always have to remember that the advice they give you could be wrong,” Lo said.

“But I think this also applies to human financial advisors,” he said.

Not all human advisors are fiduciaries


Not all human financial advisors are fiduciaries either.

The financial advice landscape is a minefield of different legal relationships. These legal duties may differ depending on factors such as whether the person the consumer is talking to is a stockbroker, registered investment advisor, insurance agent, or other intermediary.

For example, a U.S. Department of Labor rule issued during the Biden administration sought to impose a fiduciary duty on brokers who recommend rolling money over from a 401(k) plan to an individual retirement account, a move that can involve hundreds of thousands of dollars.

However, the rule recently died after the Trump administration stopped defending it in court, meaning many financial intermediaries are not bound by a fiduciary duty when recommending such rollovers. As a result, legal experts recommend that consumers approach rollover recommendations with caution, given the potential for conflicts of interest.


New York University’s Benthall posited a similar legal quandary regarding AI recommendations: Because the AI giants are largely U.S.-based, if an AI recommends that investors put their retirement savings in U.S. stocks, that recommendation could be seen as self-serving, a potential financial conflict of interest.

However, companies providing AI services do not receive compensation for the advice they give to retail investors and are therefore not fiduciaries, said Jiaying Jiang, an associate law professor at the University of Florida Levin College of Law who researches artificial intelligence and fiduciary duty.

Who’s really responsible, and can people really trust a product to do this if it’s not backed by a company with a fiduciary duty? It’s not really resolved.

Sebastian Benthall

Senior research fellow at the Information Law Institute at New York University School of Law

But financial advisors who owe clients a fiduciary duty could violate that duty through their use of artificial intelligence, Jiang said.

For example, if an advisor uses AI to make a specific recommendation to a client, and that recommendation is not in the client’s interest, the advisor, not the company behind the AI platform, would be responsible, Jiang said.

As a result, Lo said he thinks government policy needs to change to extend fiduciary protections to consumers who receive financial advice from AI.

Until then, “we won’t get to the point where we can completely hand over [financial] decisions” to these platforms, Lo said.

“But I believe it will happen eventually,” he said.

