NAVIGATING ONSA THROUGH SAFETY BY DESIGN
By Thulasy Suppiah, Managing Partner of Suppiah & Partners
The recent US$375 million verdict against Meta in a New Mexico court represents a watershed moment in digital governance. While the staggering financial penalty has dominated headlines, the true significance lies in the legal precedent it establishes for corporate risk and product liability in the tech sector.
Crucially, the jury did not penalise the platform merely for a failure in content moderation. The liability was rooted in the finding that the platform’s core recommendation algorithms actively steered underage users towards harmful material, violating unfair practices laws. This verdict effectively signals the death knell for the industry’s legacy playbook of reactive content moderation.
For multinational tech companies operating in Malaysia, this global legal shift arrives at a critical juncture. Under our Online Safety Act 2025 (ONSA), tech executives face personal liability for platform failures. However, the legislation provides a crucial defence clause, allowing leadership to avoid liability if they can demonstrate they took “reasonable steps” to prevent the offence.
The New Mexico verdict serves as a stark warning on how courts and regulators will interpret this threshold moving forward. After-the-fact measures, such as launching new parental controls or deploying human moderators only once a crisis has occurred, are no longer a viable legal strategy. As public scrutiny intensifies, this landmark verdict demonstrates that reactive fixes are an increasingly perilous legal position when the underlying product design remains fundamentally flawed.
Instead of viewing legislation like ONSA as a hostile threat, the tech industry must embrace “safety by design” as its ultimate corporate shield. Implementing mandatory Algorithmic Impact Assessments before launching new features is no longer just red tape. It is the most effective way to transform unpredictable litigation risks into a predictable, manageable compliance framework.
By building architectural safety measures into their code from the outset, platforms provide a clear, auditable trail of these “reasonable steps”, thereby protecting their executives and ensuring regulatory certainty. Beyond mere legal compliance, there is a profound governance and reputational imperative. Tech giants play an undeniable role in shaping society, and the loss of parental trust is a devastating blow to long-term brand equity.
Ensuring the safety of children and making parents feel secure that their families are protected online is not just a moral obligation. It is foundational to maintaining a platform’s social licence to operate.
Ultimately, robust digital governance is a competitive advantage. By proactively pivoting from reactive moderation to structural safety by design, tech platforms can simultaneously protect their leadership under ONSA, fulfil their societal responsibilities, and secure the enduring trust of their user base.
Just as we require safety certifications for physical infrastructure, we must now demand Algorithmic Impact Assessments from our digital landlords. The message is unequivocal: the future belongs to algorithmic platforms, but their deployment requires a social licence to operate.
© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.