You and AI: How Artificial Intelligence can impact personal finance decisions

Big data has always had a place in financial systems; now, new technology allows machines to explain what the numbers mean better than ever before. But how much trust would you place in a machine making recommendations about your money?

NBCUniversal Media, LLC

A new revolution is underway in the financial space, powered by artificial intelligence. Over the last few decades, computers have changed how people save, invest and manage money. Today, smarter and more powerful machines are being fed those decades of financial data, arming humans to make better predictions about the future.

NBC 5 spoke with experts in Chicago who are building this new world, hoping to learn more about the help and harm AI can bring.

“The world of financial services is like the perfect world for AI,” said Kris Hammond, director of the Center for Advancing Safety of Machine Intelligence at Northwestern University. "On one hand, we have a whole bunch of data that is numbers and symbols that we absolutely know what they mean. On the other hand, we have people who need to know information, but can't deal with those numbers and symbols."

AI advocates and optimists hope dealing with those numbers and symbols becomes easier with chatbots powered by large language models, or LLMs.

One attempt to marry complex data to polished language can be found at Morningstar, the Chicago-based investment rating and research giant. A group at Morningstar built an AI-powered chatbot, named Mo. Released earlier this year to clients, Mo’s specialty is general financial queries.

Any user can “talk” to Mo by typing in questions and receive an answer in a chat format. But in the lobby of Morningstar’s downtown headquarters, Mo is paired with an avatar on a flat-screen TV, giving it the appearance of an animated human. Even Mo’s creators are surprised at how far this work has come.

"If you'd asked me 10 years ago, would we have had a virtual avatar on our lobby answering investor questions, I would have said 'no,'” said Lee Davidson, chief analytics officer at Morningstar.

Mo’s responses to many financial questions can be verbose, spitting out long, coherent answers to questions like, “What are the benefits of stocks versus bonds?” or “What’s the pessimistic case for Microsoft?”

Mo won’t touch other topics, though, like “what stock should I buy today?”

To this question, Mo replied, “I’m sorry, but I can’t provide personalized investment advice.”

Davidson said Mo has been tested for five years. In that time, it’s been given numerous guardrails to ensure it sticks to topics it knows well and won’t sway users into making ill-informed choices. To that end, Mo has been fed tens of thousands of editorial investing articles, along with 30 years of investment data.

Experts like Hammond say a crucial part of AI being able to help, and not harm, humanity is ensuring the data fed into these machines is correct and ethically sound. In other words, a smooth explanation of flawed or biased data is a problem – no matter how slick or coherent the presentation.

“These systems are great with language, but that doesn't mean they understand what they're saying,” said Hammond.

Hammond wants everyone to cast a critical eye on information produced with the help of AI.

“We now have systems that we can interact with fairly and freely,” said Hammond. “As long as we focus on not just that the language is pretty, but these things are right, then we are building a set of partners for us, partners that can stay with us and help advise us based upon an understanding of what's happening in the world.”

Hammond said with the right processes in place, we can learn to trust machines the same ways we learn to trust humans: through time, experience or logical explanation.

“If somebody gives me advice, and I don't know them, I will take that advice with a grain of salt,” Hammond said. “If they give me advice every day and every single day they're right for a year, I'll start believing them. If they also give me advice and they say here's why I think this and it always makes sense, then I'll also be more inclined to follow their advice.”

Davidson said he understands why people may not be inclined to take financial advice from a machine, but he also knows there are people more comfortable with taking the risk.

“What's your risk appetite to accept these recommendations?” said Davidson. “If you're watching a movie and Netflix recommends you a new movie, you could probably be fine. Maybe [you] get a bad recommendation now and again, [but it is] low stakes. … What's your risk appetite when it comes to investing? Everyone’s got a different risk appetite."

For those with a larger risk appetite, Spanish stock-picking company Danelfin gives users a score from one to 10 on the likelihood that a stock or exchange-traded fund will beat the market in three months’ time.

Danelfin, named after Isaac Asimov’s literary robot R. Daneel Olivaw, tracks thousands of stocks on American and European exchanges, updating every day from its nine-person office in Barcelona.

“In terms of analysis, nothing can beat a machine,” said Founder and CEO Tomas Diago. “Our solution is built to merge humans and machines.”

Diago said his company leverages one of machine learning’s strengths – performing repetitive tasks – to unlock insight humans would not have time to notice.

“We transform 900 indicators per day into 10,000 stock features per day,” said Diago. “All of these features, these analyses, is formed into a probability to beat the market in three months.”
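Danelfin’s actual pipeline is proprietary, but the general shape Diago describes (raw daily indicators expanded into engineered features, then combined into a single probability-style score) can be sketched in a few lines of Python. Everything below, including the feature expansion, the weights and the logistic combination, is a hypothetical illustration of that shape, not Danelfin’s method:

```python
# Illustrative sketch only: toy pipeline turning raw daily indicators
# into engineered features, then into a probability-like score in [0, 1].
import math

def engineer_features(indicators):
    """Expand raw indicator values into derived features
    (here: the raw level plus a simple day-over-day change)."""
    features = []
    for i, value in enumerate(indicators):
        features.append(value)                      # raw level
        prev = indicators[i - 1] if i > 0 else value
        features.append(value - prev)               # day-over-day change
    return features

def probability_of_beating_market(features, weights, bias=0.0):
    """Combine features with (hypothetical) learned weights via a
    logistic function, yielding a score between 0 and 1."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: three daily indicator readings for one stock
indicators = [0.2, 0.5, 0.1]
features = engineer_features(indicators)
weights = [0.4, 0.3, -0.2, 0.1, 0.5, -0.3]   # stand-in for trained weights
score = probability_of_beating_market(features, weights)
print(round(score, 3))
```

The point of the sketch is the separation of stages: a repetitive feature-engineering step a machine can run daily across thousands of tickers, followed by a step that collapses the features into one number a human can act on.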

Davidson, Diago and Hammond all described a scenario where human understanding is enhanced, not replaced, by AI. In the short term, an optimist’s view could paint AI as a complementary tool or partner for achieving human goals, as long as humans remain in control and keep a critical eye on the data.

“I think right now the danger that consumers have is interacting with a system like ChatGPT and everything that can be built on top of it,” said Hammond. “The language is so clear and clean and fluid and marvelous that we get confused. And we think it's doing something that it's not.”

Davidson said because financial decisions are high-stakes tasks, humans will always want an enhanced level of control while working with AI.

“I don't see a time where it's completely automated,” Davidson said. “That doesn't mean it's not possible. I definitely think it's possible, but I just don't know if that's desirable.”
