
An example of the AI caricature trend (Source: CTV News)
A viral social media trend that turns personal details into AI-generated caricatures is raising red flags among cybersecurity experts, who say the seemingly playful activity could come with hidden costs to privacy, consent and even the environment.
The trend encourages users to give chatbots, including platforms such as ChatGPT, details about their hobbies, personality and appearance so the tools can generate stylized digital portraits. While millions are embracing it as light entertainment, experts caution that the data shared in the process may have long-term consequences.
‘An Attention Trap’ Disguised as Fun
Claudiu Popa, CEO of Data Risk Canada and a certified cybersecurity and privacy specialist, says the trend is a classic example of how gamified technology nudges users into revealing more about themselves than they realize.
According to Popa, AI systems are designed to keep asking for additional inputs — photos, email addresses and personal preferences — to improve results. That repeated prompting, he argues, turns a harmless activity into what he calls an “attention trap,” particularly for younger users eager to participate in viral moments.
At the heart of the concern is consent. Many users, he says, expose their own data without realizing it, voluntarily handing over personal information with little understanding of how it may be stored, processed or reused.
The Data Economy Behind the Trend
Privacy advocates warn that the information shared through such tools can feed the larger data economy. Personal details collected during these interactions may help build predictive profiles used for targeted advertising and behavioural analysis.
Popa notes that these systems benefit for-profit ecosystems, including data brokers, while users receive little clarity about how their information is ultimately handled.
He adds that the trend offers a real-world teaching moment for parents and educators trying to explain to children how digital platforms condition people to share increasingly personal details online.
Risks of ‘Agentic’ AI Access
The concerns extend beyond caricature generators to the rapid rise of so-called agentic AI — tools that can be granted access to emails, financial platforms or other sensitive systems.
Popa warns that allowing such access could shift responsibility away from service providers and onto users. If fraud occurs after an AI tool is given banking credentials, he says, consumers may find themselves outside the protection of standard financial terms and conditions because they authorized the access themselves.
His advice is blunt: never grant invasive permissions to AI systems that do not require them.
Environmental Cost Often Overlooked
Beyond privacy, Popa points to the growing environmental footprint of artificial intelligence. The large data centres that power AI models consume vast amounts of water and energy, a demand that in some regions competes with local communities for the same resources.
Viral trends that drive mass usage, he argues, indirectly add to that burden.
A Call for Digital Awareness
Popa stresses that the issue is not about avoiding AI altogether but about using it with awareness and restraint. Users should question why a tool needs certain information, limit what they share and understand the trade-off between convenience and control.
What looks like a harmless digital craze, he says, is also a reminder of how quickly entertainment can blur into data collection — and how important it is for users to recognize the boundary.

