Workforce Must Be Prepared to Survive AI Wave

Published by The Star on 4 Dec 2025

by Thulasy Suppiah, Managing Partner

The recent announcement by HP Inc. to cut thousands of jobs globally as part of a pivot towards artificial intelligence is a stark, flashing warning light. It follows similar moves by tech giants like Amazon and Microsoft. This is no longer a distant theoretical disruption; it is a structural realignment of the global workforce happening in real time. The question we must urgently ask is: Is Malaysia’s workforce prepared to pivot, or will we be left behind?

Locally, the data paints a sobering picture. According to TalentCorp’s 2024 Impact Study, approximately 620,000 jobs—18% of the total workforce in core sectors—are expected to be highly impacted by AI, digitalisation, and the green economy within the next three to five years. When we include medium-impact roles, that figure swells to 1.8 million employees. That is 53% of the workforce in those core sectors facing significant disruption.

While the government has measures in place, a critical gap remains in on-the-ground awareness. Are Malaysian companies thoroughly assessing which roles within their structures are at risk? More importantly, are employees aware that their daily tasks might soon be automated?

This is no longer just about competitiveness; it is about survivability. The speed of AI evolution is relentless. Take the creative and media industries, for example. With the advent of AI video generation tools like Google’s Veo and xAI’s Grok Imagine, high-quality content can be produced in seconds. For our local media professionals, designers, and content creators, the question isn’t just “can I do it better?” but “is my role still necessary in its current form?”

Productivity is the promise of AI, but productivity without ethics is a liability. We witnessed this grim reality in April, when a teenager in Kulai was arrested for allegedly using AI to create deepfake pornography of schoolmates. This incident raises a terrifying question about our future talent pipeline: as these young digital natives transition into the workforce, do they possess the moral compass to use these powerful tools responsibly? A workforce that is technically literate but ethically bankrupt is a danger to any organisation and the community it serves.

Upskilling is no longer a corporate buzzword for talent retention; it is a necessity for future-proofing our economy. As indicated by the TalentCorp study, skills transferability will become the norm. The ability to pivot—to move from a role that AI displaces to a role that AI enhances—will be the defining trait of the successful Malaysian worker.

We cannot afford to be complacent. The layoffs at HP and other giants are not just business news; they are a preview of the new normal. AI is not waiting for us to be ready. Companies must move beyond basic digital literacy to deep AI literacy, auditing their workflows and preparing their human talent to work alongside machines. Employees must accept that the job they have today may not exist, or may look radically different, within three years.

The window for adaptation is closing fast. We must act with urgency to ensure our workforce is resilient, ethical, and adaptable enough to survive the AI wave, rather than be swept away by it.


Making Malaysia's AI Budget Deliver

Published by The Star on 13 Oct 2025

by Thulasy Suppiah, Managing Partner

Budget 2026 unequivocally signals Malaysia’s all-in strategy on Artificial Intelligence, positioning it as a core pillar of our national future. The financial commitments are broad and substantial, spanning a nearly RM5.9 billion allocation for cross-ministry research and development, a RM2 billion Sovereign AI Cloud, and various funds to spur industry training and high-impact projects. This ambition is commendable, but ambition, even when well-funded, is no guarantee of success. The critical question now shifts from “what” to “how,” and it is in the execution where our grand vision will either take flight or falter.

A central pillar of our AI strategy is the National AI Office (NAIO), and its RM20 million allocation is a welcome start. The challenge ahead is not a lack of commitment from our various ministries and agencies, which are already pursuing valuable AI initiatives. Rather, it is the risk of fragmentation. To transform these individual efforts into a powerful, cohesive national programme, NAIO’s role must evolve beyond coordination to strategic command. This does not mean replacing the excellent work being done, but empowering NAIO with a cross-ministry portfolio view to prevent redundancy, harmonise standards, and ensure every ringgit of public funds is put to maximum use. By creating a central registry of government AI projects and a single outcomes framework, we can amplify the impact of each agency’s work, ensuring that parallel efforts are converted into a unified, national success story.

Similarly, the budget’s emphasis on talent development is rightly placed. But training more AI graduates is only half the equation; we must ensure our industries are ready to integrate them effectively. Simply funding courses is not enough. We should consider making training grants conditional on tangible outcomes: verified industry placements for graduates, a focus on open, cross-platform tools to avoid proprietary lock-ins, and requirements for short, in-situ implementation cycles with documented results. This ensures we are building a workforce for the real world, not just for the classroom.

The budget’s focus on sovereignty, marked by the launch of the ILMU language model and the Sovereign AI Cloud, is a laudable inflection point. But true sovereignty is not merely about where data resides; it is about who sets the algorithmic and access rules that govern it. The devil, as always, lies in the details. Who will decide which datasets are hosted? How will compute resources be priced for local firms? And most importantly, what are the adoption mechanisms that will compel ministries and SMEs to actually use it? Without clear answers and a robust adoption strategy, even a sovereign cloud risks becoming an impressive but idle monument—a white elephant of good intentions.

One of the budget’s most prescient moves is tasking MIMOS with deepfake detection. This is not a trivial matter; it is a direct response to a clear and present threat. Over the past three years, authorities have had to request the takedown of over 40,000 pieces of AI-generated disinformation. The shocking case in Kulai, where a student allegedly used AI to create explicit deepfakes of schoolmates, brings this danger into sharp focus. This initiative is a crucial and necessary step towards safeguarding our national security and public safety.

Budget 2026 has laid the financial groundwork. It has signaled our intent to the world. If Malaysia is to truly become an AI nation by 2030, the focus must now pivot from macro announcements to micro-implementation. The next budget must not only allocate for global data centres and grand projects, but for the hard, unglamorous work of driving local AI adoption across our SMEs and public services. That is the true measure of a national programme.


AI, Tenders, and the Trust Deficit

Published by The Star on 26 Sep 2025

by Thulasy Suppiah, Managing Partner

Around the world, the conversation about Artificial Intelligence in public procurement is dominated by the promise of efficiency. The focus is on streamlining processes, automating tasks, and achieving significant cost savings. Studies, such as a recent one by Boston Consulting Group, project remarkable outcomes, including savings of up to 15% and a significant reduction in human workload. Yet, in our Malaysian context, to focus solely on these benefits would be to miss a far more critical opportunity: leveraging AI as a frontline tool in the battle against corruption.

The timing could not be more urgent. The recent MACC revelation that Malaysia lost RM277 billion over six years, much of it through collusion in public tenders, is a stark reminder of the deep-seated challenge we face. As we grapple with this reality, the small nation of Albania has embarked on a controversial experiment. Faced with its own entrenched corruption, its government has appointed an AI digital assistant to oversee its entire public procurement process, hoping to create a system free of human bias and graft—a move now facing intense scrutiny from technical and legal experts.

The potential benefits of deploying such technology in Malaysia are immense. Imagine an AI system as an incorruptible digital auditor, capable of analyzing thousands of bids simultaneously. It could flag suspicious patterns invisible to the human eye—interconnected companies winning contracts repeatedly or bids that are consistently just below the threshold for extra scrutiny. By ensuring every decision is data-driven and transparent, we could theoretically restore fairness, save billions in public funds, and begin to rebuild the deep deficit of public trust.
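
To make that concrete, the sketch below shows, in deliberately simplified form, the kind of rule-based screening such a digital auditor could run before any human review: flagging vendors that win repeatedly and bids that cluster just below a scrutiny threshold. The records, field names, threshold, and cut-offs are invented purely for illustration and are not drawn from any actual procurement system.

```python
# Illustrative sketch only: toy screening of hypothetical tender records.
# The RM500,000 review threshold, the 2% "near-threshold" margin, and the
# repeat-win limit are assumptions for the example, not real rules.
from collections import Counter

tenders = [
    {"tender_id": "T-001", "vendor": "Alpha Sdn Bhd", "bid_rm": 498_000, "won": True},
    {"tender_id": "T-002", "vendor": "Alpha Sdn Bhd", "bid_rm": 1_200_000, "won": True},
    {"tender_id": "T-003", "vendor": "Beta Sdn Bhd", "bid_rm": 499_500, "won": True},
    {"tender_id": "T-004", "vendor": "Alpha Sdn Bhd", "bid_rm": 750_000, "won": True},
]

REVIEW_THRESHOLD_RM = 500_000  # assumed value above which extra scrutiny applies
NEAR_MARGIN = 0.02             # flag winning bids within 2% below the threshold
REPEAT_WIN_LIMIT = 2           # flag vendors winning more than this many tenders

def flag_suspicious(records):
    """Return plain-language flags for a human procurement review board."""
    flags = []
    wins = Counter(r["vendor"] for r in records if r["won"])
    for vendor, count in wins.items():
        if count > REPEAT_WIN_LIMIT:
            flags.append(f"{vendor}: won {count} tenders - review for concentration")
    for r in records:
        if REVIEW_THRESHOLD_RM * (1 - NEAR_MARGIN) <= r["bid_rm"] < REVIEW_THRESHOLD_RM:
            flags.append(f"{r['tender_id']}: bid RM{r['bid_rm']:,} sits just below the review threshold")
    return flags

for flag in flag_suspicious(tenders):
    print(flag)
```

A real system would of course draw on full tender histories, company ownership data, and statistical models rather than hard-coded rules, with every flag routed to a human officer for decision.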

However, recent developments show we must proceed with extreme caution. Experts are now questioning the entire premise of an “incorruptible” AI, pointing out that any system is only as good as the data it is fed. As one political scientist warned, if a corrupt system provides manipulated data, the AI will merely “legitimise old corruption with new software.” This also raises a critical question of accountability—an issue so serious it is being challenged in Albania’s Constitutional Court. If a machine makes a flawed decision, who is responsible?

The most prudent path for Malaysia, therefore, is likely not the appointment of a full “AI minister.” Instead, we should explore a more pragmatic, hybrid model. Let us envision AI not as a replacement for human decision-makers, but as a powerful, mandatory tool to support them. Our MACC, government auditors, and procurement boards could be equipped with AI systems designed to act as a first line of defense. This “digital watchdog” could flag high-risk tenders for stringent human review, catching cases that might otherwise be missed due to simple human oversight or inherent bias. Furthermore, its data-driven recommendations would serve as objective evidence of impartiality, making it much harder for legitimate cases to be dismissed due to personal or political agendas.

The unfolding experiment in Albania, with all its emerging challenges, has opened a vital, global conversation. For a nation like ours, which has lost so much to this long-standing problem, ignoring the potential of technology to enforce integrity is no longer an option. It is time to seriously innovate our way towards better governance.


AI's New Watchdog Role: A Necessary Evil or a Step Too Far?

Published by the New Straits Times and The Star on 11 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent disclosure by OpenAI that it is scanning user conversations and reporting certain individuals to law enforcement is a watershed moment. This is not merely a single company’s policy update; it is the opening of a Pandora’s box of ethical, legal, and societal questions that will define our future relationship with artificial intelligence.

On the one hand, the impulse behind this move is tragically understandable. These powerful AI tools, for all their potential, have demonstrated a capacity to cause profound real-world harm. Consider the devastating case of Adam Raine, the teenager who died by suicide after his anxieties were reportedly validated and encouraged by ChatGPT. In the face of such genuine harm, the argument for intervention by AI operators is compelling. A platform that can be used to plan violence cannot feign neutrality.

On the other hand, the solution now being pioneered by an industry leader is deeply unsettling. While OpenAI has clarified it will not report instances of self-harm, citing user privacy, the fundamental act of systematically scanning all private conversations to preemptively identify other threats sets a chilling, Orwellian precedent. It inches us perilously close to a world of pre-crime, where individuals are flagged not for their actions, but for their thoughts and words. This raises a fundamental question: where do we draw the line? Should a user who morbidly asks any AI “how to commit the perfect murder” be arrested and interrogated? If this becomes the industry standard, we risk crossing over into a genuine dystopia.

This move is made all the more problematic by the central contradiction it exposes. OpenAI justifies this immense privacy encroachment as a necessary safety measure, yet it simultaneously presents itself as a staunch defender of user privacy in its high-stakes legal battle with the New York Times. It cannot have it both ways. This reveals the untenable position of a company caught between the catastrophic consequences of its own technology and a heavy-handed response that flies in the face of its public promises—a dilemma that any AI developer adopting a similar watchdog role will inevitably face.

We are at a critical juncture. The danger of AI-facilitated harm is real, but so is the danger of ubiquitous, automated surveillance becoming the norm. This conversation, sparked by OpenAI, cannot remain confined to the tech industry and its regulators; it is now a matter for society at large. We urgently need a broad public debate to establish clear and transparent protocols for how such situations are handled by the entire industry, and how they are treated by law enforcement and the judiciary. Without them, we risk normalizing a future governed by algorithmic suspicion. This is a line that, once crossed, may be impossible to uncross.


Charting a Sustainable Course for Johor's Data Centre Boom

Published by The Star on 9 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent stop-work order issued to a data centre project in Iskandar Puteri marks an important inflection point for Johor. Rather than viewing it as a setback, we should see it as a natural consequence of success—a sign that Johor’s ambition to become a regional digital powerhouse is rapidly becoming a reality, and a prompt for us to thoughtfully consider the path ahead.

The state government’s efforts in attracting these high-value investments are commendable, and the scale of development is truly significant. With 13 data centres already operational and another 15 currently under construction in Johor, it is clear these facilities are a cornerstone of the Digital Johor agenda and the Johor-Singapore Special Economic Zone. They promise to create thousands of skilled jobs, spur technological innovation, and solidify Malaysia’s position on the global stage. This economic momentum is vital and should be nurtured.

However, this commendable success naturally brings with it new responsibilities. The concerns raised by the local community in Iskandar Puteri—from environmental disruption to late-night construction—highlight the critical need to create a symbiotic relationship between these large-scale developments and the communities that host them. The challenge, therefore, is not one of ambition, but of integration and balance.

In navigating this, we can learn from the diverse experiences of other nations. Ireland, for example, demonstrates the potential pitfalls when infrastructure development and energy planning do not keep pace with the industry’s rapid growth. Its data centres now place significant strain on the national power grid, raising public concerns about energy security and climate goals. On the other end of the spectrum, Amsterdam faced hard physical limits on its land and power grid, forcing a difficult choice to pause new development to prioritize other urban needs.

A more strategic benchmark might be Singapore. After its own moratorium, Singapore re-engaged the data centre market with a clear focus on quality over quantity. By implementing stringent energy efficiency standards, it has strategically positioned itself as a premium destination for best-in-class operators who are aligned with sustainability goals. This approach proves that strong environmental governance can be a powerful competitive advantage, attracting responsible, long-term investment.

For Johor and Malaysia, this moment presents an opportunity to architect a sustainable roadmap for our digital future. The goal should not be to slow down growth, but to steer it in a direction that is both economically prosperous and socially responsible. The government can lead the way by proactively engaging with the developers of all current and future projects, ensuring that clear guidelines for sustainable and community-centric development are understood and implemented from the outset.

By doing so, we can build confidence among both investors and the public. Let us use this opportunity to pioneer a balanced model for data centre development—one that harnesses their immense economic potential while safeguarding our environmental heritage and enhancing the well-being of our communities. This is how we can secure our position not just as a digital hub, but as a model for sustainable digital transformation.


AI, Deepfakes, and the Right to Your Digital Selves

by Thulasy Suppiah, Managing Partner

As societies globally grapple with the disturbing rise of AI-generated deepfakes, a challenge highlighted by recent incidents abroad and here in Malaysia, Denmark has just proposed a groundbreaking solution that demands our attention. The Danish government plans to amend its copyright law to give every individual the right to their own body, facial features, and voice. This is a profound and necessary step in protecting human identity in the digital age.

For too long, the debate around deepfakes has been framed primarily as an issue of privacy or harassment, often placing a heavy burden on victims to prove harm after their likeness has been violated and spread across the internet. This new approach fundamentally shifts the paradigm. By treating a person’s identity—their face, their voice—as a form of personal intellectual property, it grants them a clear right of ownership.

This is not merely a subtle legal change; it is a game-changer. It means a victim would no longer need to prove reputational damage or malicious intent, which can be difficult and retraumatising. Instead, the case becomes a simpler one of unauthorised use of their “property.” This empowers the individual with a powerful legal shield and a direct path to demand removal of content and seek compensation.

Crucially, such a framework also establishes clear accountability for the tech platforms where this content proliferates. By outlining significant consequences for non-compliance, it sets clear legal and financial expectations for social media and messaging companies. This effectively shifts responsibility from reactive content moderation to a proactive legal obligation, creating a clear imperative for platforms to prioritise the swift handling of non-consensual deepfakes.

While our authorities are rightly using existing laws like the Communications and Multimedia Act to prosecute perpetrators, these are often reactive measures. The kind of proactive governance being proposed in Denmark anticipates the inevitable misuse of rapidly advancing AI and creates a robust defence before the next wave of more realistic and accessible deepfake tools becomes available. It’s an attempt to legislate for the world we are entering, not the one we are leaving behind.

Of course, any such law must include exceptions for satire and parody to protect free expression. But the core principle remains: your digital likeness belongs to you.

As Malaysia continues its journey into the digital economy, we must consider if our own legal frameworks are truly fit for the AI era. The Danish model offers a compelling vision for how to restore digital autonomy and protect the dignity of our citizens. It sends an unequivocal message that a person cannot simply be run through a digital copy machine for any purpose, malicious or otherwise, without their consent. It is a thought-provoking and essential conversation we need to have now.


The World's Rebalancing Act: Malaysia's Moment to Shine

Published by The Star on 6 Mar 2025

by Thulasy Suppiah, Managing Partner

The global economic landscape is undergoing a profound transformation, driven by geopolitical realignments, most notably the US-China tech rivalry, and a widespread corporate imperative to ‘de-risk’ and ‘decouple’ supply chains. In this shifting terrain, Malaysia has admirably positioned itself as a stable and attractive hub for foreign direct investment (FDI). Microsoft’s recent reaffirmation of its substantial RM10.5 billion investment in cloud and AI infrastructure here, despite global pullbacks elsewhere, is a powerful testament to this trend and a vote of confidence in our nation’s potential.

This ‘flight to safety’ or search for strategic alternatives by multinational corporations (MNCs) presents a golden opportunity for Malaysia. We are currently benefiting as companies seek to diversify their operations and mitigate risks associated with over-concentration in any single market, particularly in light of ongoing trade disputes, semiconductor export controls, and vulnerabilities exposed by past global disruptions.

But this favourable tide is not self-sustaining. The very forces that benefit us today – trade tensions, potential tariffs, and shifting alliances – create an inherently volatile environment. To ensure Malaysia not only attracts but also retains high-quality FDI and solidifies its position as a key player in the global economy for years to come, we must adopt proactive and far-sighted strategies, rather than merely reacting to external pressures.

Firstly, strengthening our domestic fundamentals is non-negotiable. This means aggressive investment in a future-ready workforce through upskilling and reskilling initiatives, particularly in high-tech sectors like AI and advanced manufacturing. We need to cultivate a generation that is not just a consumer of technology but a creator and innovator. Continuous upgrades to our digital and physical infrastructure, including sustainable energy solutions for power-hungry data centres, are also paramount.

Secondly, our policy and regulatory environment must be a hallmark of stability, clarity, and adaptive agility. Predictable long-term policies, a streamlined bureaucracy that champions ease of doing business, and transparent enforcement are critical. Our regulatory frameworks must be robust enough to ensure good governance yet flexible enough to accommodate and encourage innovation, remaining responsive to the needs of a rapidly evolving global economy.

Thirdly, a concerted effort to move Malaysia up the global value chain is essential. This involves strategically fostering indigenous innovation and attracting investments that bring not just capital, but also cutting-edge technology, R&D activities, and opportunities for local SMEs to integrate into sophisticated global supply chains. Focusing on niche specialisations where Malaysia can build a distinct competitive advantage will be key.

Finally, our international engagement and trade diplomacy must be astute and proactive. We need to continuously champion Malaysia as a reliable, neutral, and pro-business partner on the global stage, strengthening beneficial trade agreements and maintaining open dialogues with MNCs to understand their long-term strategies and concerns.

Malaysia currently finds itself in an enviable position, benefiting from global economic restructuring. However, this is not a moment for complacency but for concerted, strategic action. By building on our current strengths and proactively addressing future challenges, we can ensure Malaysia is not merely a beneficiary of transient global shifts, but a resilient and proactive architect of its own enduring economic prosperity.


Malaysia's New Data Act: High Hopes, High Stakes

Published by The Star on 8 May 2025

by Thulasy Suppiah, Managing Partner

The recent enactment of the Data Sharing Act 2025 marks a significant step in Malaysia’s digital journey. The potential benefits are clear: enhanced public services through better agency coordination, data-driven decision-making, and a vital boost to our burgeoning AI ecosystem, aligning with the MADANI government’s aspirations. Creating a legal framework for inter-agency data sharing is indeed necessary.

However, as this Act takes its first steps, its success hinges critically on more than just legislative intent. For the public, the promise of efficiency must be balanced with robust assurance of security. We cannot overlook the context of past incidents involving significant leaks of Malaysians’ personal data allegedly linked to government systems. This history naturally fuels public apprehension.

It’s crucial to remember that the Personal Data Protection Act (PDPA) 2010 does not apply to federal or state governments. Therefore, the safeguards, evaluation criteria, and oversight mechanisms embedded within this new Data Sharing Act carry immense weight – they are the primary line of defence governing how citizen data is handled between government bodies.

While the establishment of the National Data Sharing Committee is welcome, its effectiveness will depend entirely on rigorous implementation and strict adherence to protocols. Simply having an Act is insufficient; the underlying cybersecurity infrastructure across all participating agencies must be demonstrably strong and resilient against breaches. Public confidence needs to be earned, not assumed.

Therefore, alongside implementing this Act, there must be a transparent commitment to significantly upgrading government digital infrastructure and cybersecurity capabilities. Assurances must be backed by visible action.

The Data Sharing Act 2025 provides a foundation. Now, the hard work begins: building a secure, trustworthy system that delivers the promised benefits without compromising the personal data Malaysians entrust to the government. Its success will ultimately be measured not just by shared data points, but by the public’s confidence in its protection.


AI in the Lawmaker's Seat: Progress or Peril?

Published on 3 May 2025

by Thulasy Suppiah, Managing Partner

The recent announcement that the United Arab Emirates intends to use artificial intelligence (AI) to help draft, review, and even suggest updates to its laws is a truly groundbreaking development. Presented as a world first, this move goes far beyond the global discussion about regulating AI; it steps into the territory of governing with AI, promising huge gains in legislative speed and efficiency.

While the allure of faster, more precise lawmaking is understandable, particularly given the UAE’s projections of boosting GDP and reducing costs, this pioneering approach warrants careful consideration and raises profound questions. The core concern isn’t just about technical accuracy – though experts rightly warn that current AI systems still suffer from reliability issues and can “hallucinate.” It cuts deeper, touching upon the very nature of lawmaking itself.

Firstly, the essential human element risks being sidelined. Lawmaking isn’t merely an exercise in processing data; it involves intricate negotiation, societal debate, compromise, and the embedding of cultural values. Can an algorithm truly replicate the nuances of human deliberation? Will laws significantly shaped by AI command the same legitimacy in the eyes of the public if the human process of debate and drafting is diminished?

Secondly, the risk of manipulation cannot be ignored. AI systems learn from the data they are fed and operate based on the parameters they are given. Whoever controls these inputs – the training datasets, the prioritised principles – could potentially steer legislative outcomes in subtle, perhaps undetectable ways, embedding hidden agendas into the legal fabric.

Furthermore, AI might strive for a level of logical consistency that clashes with the necessary flexibility of human society. Our laws often contain deliberate ambiguities, allowing for interpretation by courts based on evolving norms and specific circumstances. An AI optimising purely for consistency might produce rigid frameworks ill-suited to real-world complexities.

The security implications are also immense. A centralised AI system involved in drafting national laws would inevitably become a prime target for sophisticated cyberattacks. A successful breach could allow malicious actors to influence or corrupt foundational legal structures, potentially causing widespread disruption before being detected.

Finally, there are potential ethical framework conflicts. An AI trained on supposedly “global best practices” or diverse international datasets might inadvertently propose legal concepts or norms that conflict with a nation’s specific cultural identity, religious principles, or local traditions.

For nations like Malaysia, observing this bold Emirati experiment, the path forward requires careful thought. We should certainly embrace AI’s potential to assist governance and make processes more efficient. However, the UAE’s initiative underscores the urgent need for us to develop robust national frameworks before venturing down a similar path. Any integration of AI into critical functions like lawmaking must be governed by stringent ethical guidelines, transparency, and crucially, ensure that the human touch – deliberation, ethical judgment, and final approval – remains central and paramount. Balancing the power of AI with the wisdom of human oversight is key to ensuring technology serves society, not the other way around.


Are Our Children Ready for the AI Revolution?

Published by The Star on 25 Apr 2025

by Thulasy Suppiah, Managing Partner

The disturbing news from a Malaysian school, where a student allegedly used artificial intelligence to create and distribute explicit deepfakes of schoolmates, is a stark wake-up call. While shocking, this incident is sadly not isolated. Reports from South Korea show deepfake-related digital sex crimes more than tripled last year, overwhelmingly targeting young people – a chilling indicator of a rapidly escalating global problem fueled by increasingly powerful and accessible AI.

We cannot simply ban these technologies; AI is becoming deeply integrated into our world, and its capabilities are expanding daily. The critical issue is not access, but understanding. Are our young people, who are readily adopting these tools, truly aware of the profound harm they can inflict? Do they grasp the ethical implications and potential legal consequences of manipulating someone’s image, particularly for creating non-consensual explicit content?

This situation demands a societal response as serious and sustained as our long-standing campaigns against smoking, drug abuse, or bullying. It’s not enough to simply react after harm is done. We urgently need comprehensive educational initiatives within schools to teach the responsible and ethical use of AI. Young people must understand how easily these tools can be misused and the devastating impact such actions have on the lives and well-being of their peers.

Furthermore, the responsibility extends beyond the classroom. Parents need to be more vigilant and engaged in monitoring their children’s online activities and AI usage. Perhaps this incident also forces us all to reconsider the images we share so freely on social media, now that they can be easily downloaded and weaponised through AI with malicious intent.

Finally, our legal and regulatory frameworks must evolve rapidly. While existing laws are being applied, we need clearer, specific measures to address the unique challenges posed by AI misuse, offering stronger protections, especially for minors who are disproportionately targeted.

Such incidents are painful reminders that powerful tools can be used irresponsibly. As AI continues its advance, proactive education, increased parental awareness, and updated regulations are not just options – they are essential to safeguarding our communities, particularly our children, from this emerging digital threat.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.
