
Investors are experimenting with generative AI tools to inform their investment decisions, but experts caution that relying on them too heavily can be risky.
More and more Canadian investors are turning to AI tools like ChatGPT, Gemini, and Claude to support their financial research. Experts warn, however, that while these tools can help collect and summarize information, they shouldn’t be trusted to make the final call.
Bob Lai, a personal finance blogger who invests in dividend-paying stocks and index ETFs, regularly turns to AI to help analyze data. He uses it to spot research gaps, understand a company’s performance, and get a second opinion on dividend stability. Still, Lai never lets AI make decisions for him. “I want to ensure a human — me — is behind every decision,” he says firmly.
A 2025 survey by Broadridge Financial Solutions found that 88% of investors are open to acting on AI-generated information. Millennials (21%) and Gen Z (18%) are leading the way, followed by Gen X (8%) and baby boomers (3%). But as interest grows, so does the risk of relying too heavily on these tools.
Why Prompts Matter
AI isn’t magical. It relies on user prompts to deliver results. Jason Pereira, a financial planner and senior partner at Woodgate Financial, warns that many users don’t know how to ask the right questions. “They oversimplify what they think is important without knowing what actually matters,” he explains.
For instance, asking, “What should I do with $5,000 to lower my taxes?” isn’t enough. Without knowing the investor’s income, tax rate, and risk appetite, the answer won’t be useful.
Lai agrees. He’s learned that asking detailed follow-up questions yields more meaningful results. “You need to lead AI like a researcher, not treat it like a fortune teller,” he says.
A Smart But Lazy Intern
Pereira compares AI to a smart but lazy intern. It can process a lot of information quickly, but it often cuts corners or leaves tasks incomplete without warning. “The more complex the question, the more likely it will tell you it’s done even if it isn’t,” he says.
For newcomers to investing, this is especially dangerous. Without financial expertise, it’s hard to tell whether AI’s answers are even accurate. And that’s the problem: many users trust the output without verifying the source or the logic behind it.
Lai, for one, is skeptical about where AI gets its data. That’s why he often directs it to review specific company reports over several years for detailed financial insights.
Watch Out for Bias
Another issue is bias — not just from the AI, but from the users themselves. Pereira notes that when AI gives answers users don’t like, they often reframe questions until they get a more favourable response. This can create a false sense of confidence, even when the advice is flawed.
To counter this, he recommends asking whether an investment strategy has a strong historical record and logical foundation — instead of just asking if it fits personal preferences.
Because of privacy concerns, Lai has recently cut back on how often he uses AI for financial tasks. “I always start with my own research,” he says. “I don’t want to form opinions based only on AI-generated advice.”