The Hidden Privacy Cost of Viral AI Trends

Published by The Star and New Straits Times on 07 Feb 2026

As a society, we are currently grappling with a profound sense of violation. Recent global reports that certain generative AI platforms can be used to generate non-consensual, sexually explicit deepfakes of women and children have rightly sparked widespread outrage. They force us to confront a reality many find difficult to process: the troubling potential for automated exploitation.

The strong global reaction to these non-consensual deepfakes—a clear violation of human dignity and online safety—stems from a collective understanding that our image, our body, and our identity are intrinsically our own.

Yet, almost simultaneously, we witness a jarring paradox. While we recoil from the potential theft and misuse of our digital identity, we often voluntarily surrender intimate details for the sake of a viral trend.

This is evident in phenomena like recent AI caricature trends, where users upload selfies and provide detailed personal prompts—or simply instruct the AI to generate portraits based on ‘everything it knows.’ Whether actively describing their jobs and home environments or passively granting permission to scour their cumulative chat history, the result is the same. Users are allowing the AI to aggregate scattered data points into a cohesive, high-resolution psychographic profile linked to their biometric data.

This cognitive dissonance is alarming. On one hand, there is a global call for stricter measures against AI misuse. On the other, we treat our sensitive personal data as currency to purchase a fleeting moment of social media engagement.

From a legal and data privacy perspective, this normalisation of “data surrender” carries inherent risks. When individuals participate in these trends, they are not merely “playing” with AI; they are actively training it. Algorithms learn to recognise faces, understand contexts, and map lives with increasing precision. Every piece of data fed into these models contributes to a digital profile that renders individuals increasingly identifiable and vulnerable to targeting.

The risks for the vulnerable—particularly children—are profound. Children cannot legally provide consent, yet well-meaning adults routinely upload their images for AI-generated content, establishing digital footprints with significant long-term privacy consequences. Such actions contribute to an ever-expanding digital dossier for a child, compiled without the child’s agency or understanding of what it may mean for their future.

This is not to suggest that technology is inherently malicious, nor that progress should be halted. Innovation offers immense benefits and is crucial for societal advancement. However, it is imperative to critically assess the terms of our engagement with these powerful tools.

We cannot effectively advocate for robust protections against the non-consensual weaponisation of AI if we simultaneously cultivate a culture of uncritical over-sharing. Responsible digital citizenship requires a clear understanding that privacy is not merely a passive right to be enforced, but an active discipline that individuals must exercise.

To foster a digital ecosystem that genuinely respects human dignity and drives responsible innovation, we must shift our collective mindset. We must recognise that in the age of AI, our identity—our face, our history, our context—is our most valuable asset. Protecting it demands not just robust legal frameworks against exploitation, but also a conscious cultivation of data hygiene and digital discernment.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.
