
Someone talks on their cellphone in Ottawa on July 18, 2022. (Photo: Sean Kilpatrick / The Canadian Press)
A seemingly lighthearted TikTok video shows Bigfoot in a cowboy hat, draped in an American flag vest, cheerfully announcing plans to visit an LGBT parade. Within moments the tone shifts: Bigfoot drives into a crowd waving rainbow flags, leaving a scene of violence. In June, the clip attracted over 360,000 views and a flood of supportive comments.
Such disturbing AI-generated videos are becoming common, targeting LGBTQ2S+, Jewish, Muslim, South Asian, and other minority groups. The wave of content has sparked alarm among Canadian advocates, who say current regulations fail to address the speed and reach of these digital attacks.
Rising Hate and Weak Digital Defences
Helen Kennedy, head of LGBTQ2S+ advocacy group Egale Canada, warns that generative AI is being used to undermine and dehumanize gender-diverse people. She calls the existing safety laws outdated and inadequate. From deepfake clips to algorithm-driven promotion of hate, the harm is both immediate and real, she says.
Evan Balgord of the Canadian Anti-Hate Network adds that the trend extends beyond LGBTQ2S+ groups, fuelling Islamophobia, antisemitism, and anti-South Asian rhetoric. He fears that the normalization of such violent content online increases the likelihood of real-world attacks.
No Rules for Social Media Giants
Experts say Canada’s digital safety laws were already behind the times — and AI has widened the gap. Legal scholar Andrea Slane notes there are no enforceable safety standards for social media platforms, nor any way to hold them accountable.
Parliament’s previous attempt to regulate harmful online content failed when the session was prorogued in January. Justice Minister Sean Fraser has promised a fresh review of the Online Harms Act, but no decision has been made on whether to rewrite or revive the bill.
Government Weighs Regulation
The newly formed Ministry of Artificial Intelligence and Digital Innovation says it is studying the misuse of AI tools, focusing on content that targets vulnerable groups. Spokesperson Sofia Ouslis acknowledges existing protections were not designed for generative AI and emphasizes the need for stronger safeguards.
Ottawa is also considering making the distribution of non-consensual sexual deepfakes a criminal offence, drawing on lessons from the European Union and UK, both of which are ahead in AI regulation.
Accessible Tools, Growing Risk
Peter Lewis, Canada Research Chair in trustworthy AI, says high-quality videos can now be made easily with low-cost or free tools. While text-based AI systems have added filters, video platforms lack similar safeguards. Lewis argues for better detection, swift removal systems, and cooperation between governments, platforms, and developers.
Yet, he cautions, AI detection tools won’t catch everything — and without rapid flagging and removal, hateful videos will continue to spread.