[Feature Article] Navigating ONSA Through Safety by Design

NAVIGATING ONSA THROUGH SAFETY BY DESIGN

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

The recent US$375mil verdict against Meta in a New Mexico court represents a watershed moment in digital governance. While the staggering financial penalty has dominated headlines, the true significance lies in the legal precedent it establishes for corporate risk and product liability in the tech sector.

Crucially, the jury did not penalise the platform merely for a failure in content moderation. The liability was rooted in the finding that the platform’s core recommendation algorithms actively steered underage users towards harmful material, violating unfair practices laws. This verdict effectively signals the death knell for the industry’s legacy playbook of reactive content moderation.

For multinational tech companies operating in Malaysia, this global legal shift arrives at a critical juncture. Under our Online Safety Act 2025 (ONSA), tech executives face personal liability for platform failures. However, the legislation provides a crucial defence clause, allowing leadership to avoid liability if they can demonstrate they took “reasonable steps” to prevent the offence.

The New Mexico verdict serves as a stark warning of how courts and regulators will interpret this threshold moving forward. After-the-fact measures, such as launching new parental controls or deploying human moderators only once a crisis has occurred, are no longer a viable legal strategy. As public scrutiny intensifies, this landmark verdict demonstrates that reactive fixes are an increasingly perilous legal position when the underlying product design remains fundamentally flawed.

Instead of viewing legislation like ONSA as a hostile threat, the tech industry must embrace “safety by design” as its ultimate corporate shield. Implementing mandatory Algorithmic Impact Assessments before launching new features is no longer just red tape. It is the most effective way to transform unpredictable litigation risks into a predictable, manageable compliance framework.

By building architectural safety measures into their code from the outset, platforms provide a clear, auditable trail of these “reasonable steps”, thereby protecting their executives and ensuring regulatory certainty. Beyond mere legal compliance, there is a profound governance and reputational imperative. Tech giants play an undeniable role in shaping society, and the loss of parental trust is a devastating blow to long-term brand equity.

Ensuring the safety of children and making parents feel secure that their families are protected online is not just a moral obligation. It is foundational to maintaining a platform’s social license to operate.

Ultimately, robust digital governance is a competitive advantage. By proactively pivoting from reactive moderation to structural safety by design, tech platforms can simultaneously protect their leadership under ONSA, fulfil their societal responsibilities, and secure the enduring trust of their user base.

Just as we require safety certifications for physical infrastructure, we must now demand Algorithmic Impact Assessments from our digital landlords. The message is unequivocal: the future belongs to algorithmic platforms, but their deployment requires a social license to operate.

[Feature Article] The Star: The End of Digital Exceptionalism

THE END OF DIGITAL EXCEPTIONALISM

Published by The Star on 04 Mar 2026

The recent declaration by the government that overseas tech executives could face legal action under the new online safety law has predictably sparked dramatic headlines. The imagery of tech billionaires answering to a Malaysian court is certainly compelling political theatre. However, this spectacle risks obscuring the profound structural realignment actually taking place within our digital borders.

Malaysia is not acting as a rogue regulator; we are merely waking up to a hardened global reality. For too long, multinational platforms operated under a doctrine of digital exceptionalism, treating foreign jurisdictions as lucrative revenue streams free from sovereign oversight.

But with the introduction of frameworks like the UK’s Online Safety Act, and the watershed arrest of Telegram’s CEO in France, the illusion of Silicon Valley immunity has permanently shattered. We are witnessing the global collision between the “move fast and break things” ethos and the sovereign duty of nations to protect their citizens.

Beyond the headline-grabbing prospect of charging foreign executives, the operational spine of the Online Safety Act 2025 (ONSA) is far more pragmatic: the mandatory appointment of a local representative.

This provision bridges a critical jurisdictional gap. Where regulators previously grappled with the friction of enforcing domestic laws against entities domiciled abroad, a local presence ensures that accountability is no longer remote or theoretical, but actionable within our own courts.

Yet, the ultimate success of this framework hinges on a critical legal caveat. Executives can avoid liability if they demonstrate the offence occurred without their consent and that they took “reasonable steps” to prevent it. How our courts and regulators define this threshold will be the defining legal battleground of the next decade.

This is where the intersection of law and generative AI becomes inherently perilous. Consider the controversy where X (formerly Twitter) permitted its Grok AI to generate and manipulate user images without robust, market-ready guardrails.

If a platform deliberately designs and deploys a tool that inherently bypasses consent and facilitates the creation of explicit material, can its leadership legitimately claim they took “reasonable steps” to protect the public?

Relying on after-the-fact user reporting for foreseeable harms is no longer an acceptable defence; it is an abdication of duty.

For global tech entities, this legislation should not be viewed as a death knell for innovation, but as a demand for regulatory certainty. To maintain market access in Malaysia, platforms must pivot from relying on flawed, reactive content moderation to a proactive “safety by design” framework.

Just as we require safety certifications for physical infrastructure, we must now demand Algorithmic Impact Assessments from our digital landlords. The message is unequivocal: the future belongs to digital innovation, but that innovation requires a local license to operate.

[Feature Article] The Star & New Straits Times Newspaper: The Hidden Privacy Cost of Viral AI Trends

The Hidden Privacy Cost of Viral AI Trends

Published by The Star and New Straits Times on 07 Feb 2026

As a society, we are currently grappling with a profound sense of violation. Recent global reports surrounding certain generative AI platforms, highlighting their capacity to generate non-consensual, sexually explicit deepfakes of women and children, have rightly sparked widespread outrage. It forces us to confront a reality many find difficult to process: the troubling potential for automated exploitation.

The strong global reaction to these non-consensual deepfakes—a clear violation of human dignity and online safety—stems from a collective understanding that our image, our body, and our identity are intrinsically our own.

Yet, almost simultaneously, we witness a jarring paradox. While we recoil from the potential theft and misuse of our digital identity, we often voluntarily surrender intimate details for the sake of a viral trend.

This is evident in phenomena like recent AI caricature trends, where users upload selfies and provide detailed personal prompts—or simply instruct the AI to generate portraits based on ‘everything it knows.’ Whether actively describing their jobs and home environments or passively granting permission to scour their cumulative chat history, the result is the same. Users are allowing the AI to aggregate scattered data points into a cohesive, high-resolution psychographic profile linked to their biometric data.

This cognitive dissonance is alarming. On one hand, there is a global call for stricter measures against AI misuse. On the other, we treat our sensitive personal data as currency to purchase a fleeting moment of social media engagement.

From a legal and data privacy perspective, this normalization of “data surrender” carries inherent risks. When individuals participate in these trends, they are not merely “playing” with AI; they are actively training it. Algorithms learn to recognise faces, understand contexts, and map lives with increasing precision. Every piece of data fed into these models contributes to a digital profile that renders individuals increasingly identifiable and vulnerable to targeting.

The implications for the vulnerable—particularly children—are profound. While children cannot legally provide consent, the long-term privacy implications of their digital footprints, established by well-meaning adults uploading their images for AI-generated content, are significant. Such actions contribute to an ever-expanding digital dossier for a child, established without their future agency or understanding.

This is not to suggest that technology is inherently malicious, nor that progress should be halted. Innovation offers immense benefits and is crucial for societal advancement. However, it is imperative to critically assess the terms of our engagement with these powerful tools.

We cannot effectively advocate for robust protections against the non-consensual weaponization of AI if we simultaneously cultivate a culture of uncritical over-sharing. Responsible digital citizenship requires a clear understanding that privacy is not merely a passive right to be enforced, but an active discipline that individuals must exercise.

To foster a digital ecosystem that genuinely respects human dignity and drives responsible innovation, we must shift our collective mindset. We must recognise that in the age of AI, our identity—our face, our history, our context—is our most valuable asset. Protecting it demands not just robust legal frameworks against exploitation, but also a conscious cultivation of data hygiene and digital discernment.

[Feature Article] The Star Newspaper: AI Bill to Iron Out Usage

AI Bill to Iron Out Usage

Published by The Star on 29 Jan 2026

PETALING JAYA: The Artificial Intelligence (AI) Governance Bill is a necessary and timely step toward responsible AI deployment in Malaysia, demonstrating that clearer laws give confidence and certainty to investors and developers as more users adopt AI in their daily lives, say experts.

Lawyer Thulasy Suppiah, who specialises in cybersecurity, AI, data centres and emerging technologies, said that clear rules can help reduce regulatory ambiguity, allowing companies to design, deploy and invest in AI without fear of sudden bans, inconsistent enforcement or reputational risk.

“A legal framework signals that Malaysia welcomes AI-driven investment responsibly, with accountability across the AI lifecycle. Without clear rules, trust erodes and trust is essential for sustainable AI growth and foreign investment.

“It ensures innovation grows with safeguards, not at the expense of women, children and vulnerable groups who are often the first to be victims of misuse of AI.

“Embedding accountability across the AI lifecycle also strengthens protection against misuse, including exploitation, harassment and deception,” she said in response to Malaysia’s first AI Governance Bill.

Asked about the challenges in coordinating with other agencies and laws on AI and threats such as deepfakes and AI-enabled scams, Thulasy said AI risks cut across multiple domains, including data protection, cybersecurity, content safety, fraud and consumer protection, requiring close coordination.

As such, she said aligning enforcement while avoiding overlap or gaps between agencies is complex, but necessary to ensure real-world protection, especially for women and children.

“The challenge is balancing speed, clarity, and proportionality without stifling legitimate innovation,” she said.

Cybersecurity expert Fong Choong Fook said the Bill should include risk classifications when it comes to AI systems alongside mandating impact assessments for high-risk AI.

Independent audits and conformity assessments are needed to ensure compliance alongside constant monitoring.

Fong said the Bill should enhance coordination efforts with existing enforcement regulations.

“It should supplement instead of duplicate. The key is ensuring accountability across the entire AI lifecycle.”

Malaysia, he said, should adopt a hybrid model when it comes to regulating AI.

This would comprise the formation of a central AI authority to set standards and coordinate oversight while sector regulators, such as those in the finance and telecommunication industries, carry out enforcement through their own domains.

“This provides consistency without losing on expertise,” he said.

On deepfake content, Fong said watermarks must be made mandatory for high-risk and high-reach content.

“We also need stronger platform takedown obligations, where platforms must comply with local regulations and will take swift action to remove non-compliant content upon request,” he said.

Universiti Putra Malaysia (UPM) AI specialist Azree Nazri said the Bill should mandate security-by-design standards to mitigate risks such as automated scams, system abuse and AI-enabled attacks.

“High-risk AI systems should undergo mandatory adversarial testing, strict model access controls and continuous monitoring with incident reporting,” he said.

On AI-enabled scams, Azree said telecom-style deterrents could form part of new measures to curb this.

He also stressed avoiding regulatory overlap to ensure aligned enforcement, prevent duplicate investigations, and deliver consistent oversight.

[Feature Article] The Star Newspaper: AI Grok Controversy a Case Study in Product Liability

AI Grok Controversy a Case Study in Product Liability

Published by The Star on 15 Jan 2026

by Thulasy Suppiah, Managing Partner

THE decision by the Malaysian Communications and Multimedia Commission (MCMC) to block access to the AI chatbot Grok is a decisive, albeit reactive, measure. This action, taken to prevent content that creates liability under Malaysian laws including Section 233 of the Communications and Multimedia Act 1998, serves as a necessary firebreak against the unchecked proliferation of non-consensual, sexually explicit deepfakes.

However, this incident also underscores the timeliness of the Online Safety Act 2025 (ONSA), which came into force on Jan 1. ONSA fundamentally reshapes the liability landscape by designating social media platforms as Licensed Service Providers. It explicitly classifies child sexual abuse material and financial fraud as ‘priority harmful content’ which must be blocked as swiftly as possible.

While the ban addresses the immediate symptom, we must recognise that the threat is no longer theoretical or confined to foreign platforms. It is local, and it is already in our classrooms.

The case in Johor Bahru last year, where a teenager allegedly used AI to create explicit deepfake images of his schoolmates, was an early warning. More recently, in December 2025, a school in Muar expelled three students for similar conduct, where manipulated images of female classmates were circulated online.

These incidents demonstrate that the technology is accessible, easy to use, and weaponisable by anyone. This highlights the limitations of reactive bans. Even if we block commercial platforms like Grok, open-source models remain accessible to the tech-savvy.

Therefore, for the legal and business fraternity, the Grok controversy is a case study in product liability.

The developers of Grok deployed a tool with known vulnerabilities—specifically, the capability to “digitally undress” subjects, including minors—without adequate safeguards. From a legal standpoint, relying on after-the-fact reporting for foreseeable harms is no longer an acceptable defence. We are witnessing the collision between the Silicon Valley ethos of “move fast and break things” and the sovereign duty of nations to protect human dignity.

Critics often argue that strict regulation will stifle innovation and deter foreign direct investment (FDI). This is a false dichotomy.

High-value, institutional investors and serious technology majors do not seek a regulatory “Wild West.” They seek regulatory certainty. An ecosystem where AI tools can be weaponized to generate pornography or harass citizens is inherently unstable and fraught with legal risk. By enforcing clear standards, Malaysia is not repelling investment; it is filtering out high-risk actors and creating a safe harbour for responsible AI development.

Thus, we must pivot from reactive bans to a proactive “Safety by Design” framework.

Any AI entity seeking market access in Malaysia should be compelled to demonstrate that safety guardrails are intrinsic to the code, not an afterthought. Just as we require safety certifications for imported vehicles or pharmaceuticals, we must require Algorithmic Impact Assessments for generative AI tools. If a platform cannot technically guarantee that it will not generate child sexual abuse material (CSAM) upon a simple prompt, it is not “market-ready.”

Our legal response moving forward must be two-pronged.

First, on the supply side, we must enforce corporate accountability. Tech giants can no longer claim neutrality; if their product design facilitates abuse, they must share the liability.

Second, on the demand side, we need urgent digital legal literacy. The public, especially the youth, must understand that using AI to generate non-consensual explicit imagery is not a “prank” or a technological experiment. It is a potential criminal offence with severe consequences under our Penal Code and the Sexual Offences Against Children Act 2017.

The Grok ban is a necessary firebreak, but it is not a permanent solution. The future belongs to AI, but sustainable innovation requires a social license to operate. Malaysia has the opportunity to lead ASEAN not just in digital adoption, but in crafting a governance framework where technology respects the law, and the law understands technology.

[Feature Article] The Star Newspaper: Workforce Must be Prepared to Survive AI Wave

Workforce Must be Prepared to Survive AI Wave

Published by The Star on 4 Dec 2025

by Thulasy Suppiah, Managing Partner

The recent announcement by HP Inc. to cut thousands of jobs globally as part of a pivot towards artificial intelligence is a stark, flashing warning light. It follows similar moves by tech giants like Amazon and Microsoft. This is no longer a distant theoretical disruption; it is a structural realignment of the global workforce happening in real-time. The question we must urgently ask is: Is Malaysia’s workforce prepared to pivot, or will we be left behind?

Locally, the data paints a sobering picture. According to TalentCorp’s 2024 Impact Study, approximately 620,000 jobs—18% of the total workforce in core sectors—are expected to be highly impacted by AI, digitalisation, and the green economy within the next three to five years. When we include medium-impact roles, that figure swells to 1.8 million employees. That is 53% of our workforce facing significant disruption.

While the government has measures in place, a critical gap remains in on-the-ground awareness. Are Malaysian companies thoroughly assessing which roles within their structures are at risk? More importantly, are employees aware that their daily tasks might soon be automated?

This is no longer just about competitiveness; it is about survivability. The speed of AI evolution is relentless. Take the creative and media industries, for example. With the advent of AI video generation tools like Google’s Veo and xAI’s Grok Imagine, high-quality content can be produced in seconds. For our local media professionals, designers, and content creators, the question isn’t just “can I do it better?” but “is my role still necessary in its current form?”

Productivity is the promise of AI, but productivity without ethics is a liability. We witnessed this grim reality in April, when a teenager in Kulai was arrested for allegedly using AI to create deepfake pornography of schoolmates. This incident raises a terrifying question about our future talent pipeline: as these young digital natives transition into the workforce, do they possess the moral compass to use these powerful tools responsibly? A workforce that is technically literate but ethically bankrupt is a danger to any organisation and the community it serves.

Upskilling is no longer a corporate buzzword for talent retention; it is a necessity for future-proofing our economy. As indicated by the TalentCorp study, skills transferability will become the norm. The ability to pivot—to move from a role that AI displaces to a role that AI enhances—will be the defining trait of the successful Malaysian worker.

We cannot afford to be complacent. The layoffs at HP and other giants are not just business news; they are a preview of the new normal. AI is not waiting for us to be ready. Companies must move beyond basic digital literacy to deep AI literacy, auditing their workflows and preparing their human talent to work alongside machines. Employees must accept that the job they have today may not exist, or will look radically different, in three years.

The window for adaptation is closing fast. We must act with urgency to ensure our workforce is resilient, ethical, and adaptable enough to survive the AI wave, rather than be swept away by it.

[Feature Article] The Star Newspaper: Making Malaysia’s AI Budget Deliver

Making Malaysia’s AI Budget Deliver

Published by The Star on 13 Oct 2025

by Thulasy Suppiah, Managing Partner

Budget 2026 unequivocally signals Malaysia’s all-in strategy on Artificial Intelligence, positioning it as a core pillar of our national future. The financial commitments are broad and substantial, spanning a nearly RM5.9 billion allocation for cross-ministry research and development, a RM2 billion Sovereign AI Cloud, and various funds to spur industry training and high-impact projects. This ambition is commendable, but ambition, even when well-funded, is no guarantee of success. The critical question now shifts from “what” to “how,” and it is in the execution where our grand vision will either take flight or falter.

A central pillar of our AI strategy is the National AI Office (NAIO), and its RM20 million allocation is a welcome start. The challenge ahead is not a lack of commitment from our various ministries and agencies, which are already pursuing valuable AI initiatives. Rather, it is the risk of fragmentation. To transform these individual efforts into a powerful, cohesive national programme, NAIO’s role must evolve beyond coordination to strategic command. This does not mean replacing the excellent work being done, but empowering NAIO with a cross-ministry portfolio view to prevent redundancy, harmonize standards, and ensure every ringgit of public funds is maximized. By creating a central registry of government AI projects and a single outcomes framework, we can amplify the impact of each agency’s work, ensuring that parallel efforts are converted into a unified, national success story.

Similarly, the budget’s emphasis on talent development is rightly placed. But training more AI graduates is only half the equation; we must ensure our industries are ready to integrate them effectively. Simply funding courses is not enough. We should consider making training grants conditional on tangible outcomes: verified industry placements for graduates, a focus on open, cross-platform tools to avoid proprietary lock-ins, and requirements for short, in-situ implementation cycles with documented results. This ensures we are building a workforce for the real world, not just for the classroom.

The budget’s focus on sovereignty, marked by the launch of the ILMU language model and the Sovereign AI Cloud, is a laudable inflection point. But true sovereignty is not merely about where data resides; it is about who sets the algorithmic and access rules that govern it. The devil, as always, lies in the details. Who will decide which datasets are hosted? How will compute resources be priced for local firms? And most importantly, what are the adoption mechanisms that will compel ministries and SMEs to actually use it? Without clear answers and a robust adoption strategy, even a sovereign cloud risks becoming an impressive but idle monument—a white elephant of good intentions.

One of the budget’s most prescient moves is tasking MIMOS with deepfake detection. This is not a trivial matter; it is a direct response to a clear and present threat. Over the past three years, authorities have had to request the takedown of over 40,000 pieces of AI-generated disinformation. The shocking case in Kulai, where a student allegedly used AI to create explicit deepfakes of schoolmates, brings this danger into sharp focus. This initiative is a crucial and necessary step towards safeguarding our national security and public safety.

Budget 2026 has laid the financial groundwork. It has signaled our intent to the world. If Malaysia is to truly become an AI nation by 2030, the focus must now pivot from macro announcements to micro-implementation. The next budget must not only allocate for global data centres and grand projects, but for the hard, unglamorous work of driving local AI adoption across our SMEs and public services. That is the true measure of a national programme.

[Feature Article] The Star Newspaper: AI, Tenders, and the Trust Deficit

AI, Tenders, and the Trust Deficit

Published by The Star on 26 Sep 2025

by Thulasy Suppiah, Managing Partner

Around the world, the conversation about Artificial Intelligence in public procurement is dominated by the promise of efficiency. The focus is on streamlining processes, automating tasks, and achieving significant cost savings. Studies, such as a recent one by Boston Consulting Group, project remarkable outcomes like up to 15% in savings and a significant reduction in human workload. Yet, in our Malaysian context, to focus solely on these benefits would be to miss a far more critical opportunity: leveraging AI as a frontline tool in the battle against corruption.

The timing could not be more urgent. The recent MACC revelation that Malaysia lost RM277 billion over six years, much of it through collusion in public tenders, is a stark reminder of the deep-seated challenge we face. As we grapple with this reality, the small nation of Albania has embarked on a controversial experiment. Faced with its own entrenched corruption, its government has appointed an AI digital assistant to oversee its entire public procurement process, hoping to create a system free of human bias and graft—a move now facing intense scrutiny from technical and legal experts.

The potential benefits of deploying such technology in Malaysia are immense. Imagine an AI system as an incorruptible digital auditor, capable of analyzing thousands of bids simultaneously. It could flag suspicious patterns invisible to the human eye—interconnected companies winning contracts repeatedly or bids that are consistently just below the threshold for extra scrutiny. By ensuring every decision is data-driven and transparent, we could theoretically restore fairness, save billions in public funds, and begin to rebuild the deep deficit of public trust.
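To make the idea concrete, here is a toy sketch of such a pattern check. Everything in it is hypothetical: the data schema, the RM500,000 review threshold, the margins and the company names are invented purely for illustration. A real system would analyse actual tender records with far richer signals, but even this much logic can surface the two patterns described above.

```python
# Illustrative sketch only: a toy "digital watchdog" flagging two red-flag
# patterns. The schema, thresholds and names below are hypothetical.
from collections import Counter

SCRUTINY_THRESHOLD = 500_000  # assumed value above which extra review applies
NEAR_MARGIN = 0.05            # flag bids within 5% below that threshold
REPEAT_WINS = 3               # flag vendors winning this many contracts or more

def flag_suspicious(bids):
    """Return review flags; `bids` is a list of dicts with the assumed
    keys 'vendor', 'amount' and 'won'."""
    flags = []
    # Pattern 1: bids clustered just below the extra-scrutiny threshold.
    for b in bids:
        if SCRUTINY_THRESHOLD * (1 - NEAR_MARGIN) <= b["amount"] < SCRUTINY_THRESHOLD:
            flags.append(f"{b['vendor']}: RM{b['amount']:,} sits just under the review threshold")
    # Pattern 2: the same vendor winning contracts repeatedly.
    wins = Counter(b["vendor"] for b in bids if b["won"])
    for vendor, n in wins.items():
        if n >= REPEAT_WINS:
            flags.append(f"{vendor}: won {n} contracts in this window")
    return flags

if __name__ == "__main__":
    sample = [  # invented records for demonstration
        {"vendor": "Alfa Sdn Bhd", "amount": 498_000, "won": True},
        {"vendor": "Alfa Sdn Bhd", "amount": 496_500, "won": True},
        {"vendor": "Alfa Sdn Bhd", "amount": 499_900, "won": True},
        {"vendor": "Beta Sdn Bhd", "amount": 610_000, "won": False},
    ]
    for f in flag_suspicious(sample):
        print("REVIEW:", f)
```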

However, recent developments show we must proceed with extreme caution. Experts are now questioning the entire premise of an “incorruptible” AI, pointing out that any system is only as good as the data it is fed. As one political scientist warned, if a corrupt system provides manipulated data, the AI will merely “legitimise old corruption with new software.” This also raises a critical question of accountability—an issue so serious it is being challenged in Albania’s Constitutional Court. If a machine makes a flawed decision, who is responsible?

The most prudent path for Malaysia, therefore, is likely not the appointment of a full “AI minister.” Instead, we should explore a more pragmatic, hybrid model. Let us envision AI not as a replacement for human decision-makers, but as a powerful, mandatory tool to support them. Our MACC, government auditors, and procurement boards could be equipped with AI systems designed to act as a first line of defence. This “digital watchdog” could flag high-risk tenders for stringent human review, catching cases that might otherwise be missed due to simple human oversight or inherent bias. Furthermore, its data-driven recommendations would serve as objective evidence of impartiality, making it much harder for legitimate cases to be dismissed due to personal or political agendas.

The unfolding experiment in Albania, with all its emerging challenges, has opened a vital, global conversation. For a nation like ours, which has lost so much to this long-standing problem, ignoring the potential of technology to enforce integrity is no longer an option. It is time to seriously innovate our way towards better governance.

[Feature Article] NST & The Star Newspaper: AI’s New Watchdog Role: A Necessary Evil or a Step Too Far?

AI’s New Watchdog Role: A Necessary Evil or a Step Too Far?

Published by New Straits Times and The Star on 11 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent disclosure by OpenAI that it is scanning user conversations and reporting certain individuals to law enforcement is a watershed moment. This is not merely a single company’s policy update; it is the opening of a Pandora’s box of ethical, legal, and societal questions that will define our future relationship with artificial intelligence.

On the one hand, the impulse behind this move is tragically understandable. These powerful AI tools, for all their potential, have demonstrated a capacity to cause profound real-world harm. Consider the devastating case of Adam Raine, the teenager who died by suicide after his anxieties were reportedly validated and encouraged by ChatGPT. In the face of such genuine harm, the argument for intervention by AI operators is compelling. A platform that can be used to plan violence cannot feign neutrality.

On the other hand, the solution now being pioneered by an industry leader is deeply unsettling. While OpenAI has clarified it will not report instances of self-harm, citing user privacy, the fundamental act of systematically scanning all private conversations to preemptively identify other threats sets a chilling, Orwellian precedent. It inches us perilously close to a world of pre-crime, where individuals are flagged not for their actions, but for their thoughts and words. This raises a fundamental question: where do we draw the line? Should a user who morbidly asks any AI “how to commit the perfect murder” be arrested and interrogated? If this becomes the industry standard, we risk crossing over into a genuine dystopia.

This move is made all the more problematic by the central contradiction it exposes. OpenAI justifies this immense privacy encroachment as a necessary safety measure, yet it simultaneously presents itself as a staunch defender of user privacy in its high-stakes legal battle with the New York Times. It cannot have it both ways. This reveals the untenable position of a company caught between the catastrophic consequences of its own technology and a heavy-handed response that flies in the face of its public promises—a dilemma that any AI developer adopting a similar watchdog role will inevitably face.

We are at a critical juncture. The danger of AI-facilitated harm is real, but so is the danger of ubiquitous, automated surveillance becoming the norm. This conversation, sparked by OpenAI, cannot remain confined to the tech industry and its regulators; it is now a matter for society at large. We urgently need a broad public debate to establish clear and transparent protocols for how such situations are handled by the entire industry, and how they are treated by law enforcement and the judiciary. Without them, we risk normalizing a future governed by algorithmic suspicion. This is a line that, once crossed, may be impossible to uncross.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.
