[Feature Article] The Star Newspaper: Workforce Must be Prepared to Survive AI Wave


Published by The Star on 4 Dec 2025

by Thulasy Suppiah, Managing Partner

The recent announcement by HP Inc. to cut thousands of jobs globally as part of a pivot towards artificial intelligence is a stark, flashing warning light. It follows similar moves by tech giants like Amazon and Microsoft. This is no longer a distant theoretical disruption; it is a structural realignment of the global workforce happening in real time. The question we must urgently ask is: Is Malaysia’s workforce prepared to pivot, or will we be left behind?

Locally, the data paints a sobering picture. According to TalentCorp’s 2024 Impact Study, approximately 620,000 jobs—18% of the total workforce in core sectors—are expected to be highly impacted by AI, digitalisation, and the green economy within the next three to five years. When we include medium-impact roles, that figure swells to 1.8 million employees. That is 53% of our workforce facing significant disruption.

While the government has measures in place, a critical gap remains in on-the-ground awareness. Are Malaysian companies thoroughly assessing which roles within their structures are at risk? More importantly, are employees aware that their daily tasks might soon be automated?

This is no longer just about competitiveness; it is about survivability. The speed of AI evolution is relentless. Take the creative and media industries, for example. With the advent of AI video generation tools like Google’s Veo and xAI’s Grok Imagine, high-quality content can be produced in seconds. For our local media professionals, designers, and content creators, the question isn’t just “can I do it better?” but “is my role still necessary in its current form?”

Productivity is the promise of AI, but productivity without ethics is a liability. We witnessed this grim reality in April, when a teenager in Kulai was arrested for allegedly using AI to create deepfake pornography of schoolmates. This incident raises a terrifying question about our future talent pipeline: as these young digital natives transition into the workforce, do they possess the moral compass to use these powerful tools responsibly? A workforce that is technically literate but ethically bankrupt is a danger to any organisation and the community it serves.

Upskilling is no longer a corporate buzzword for talent retention; it is a necessity for future-proofing our economy. As indicated by the TalentCorp study, skills transferability will become the norm. The ability to pivot—to move from a role that AI displaces to a role that AI enhances—will be the defining trait of the successful Malaysian worker.

We cannot afford to be complacent. The layoffs at HP and other giants are not just business news; they are a preview of the new normal. AI is not waiting for us to be ready. Companies must move beyond basic digital literacy to deep AI literacy, auditing their workflows and preparing their human talent to work alongside machines. Employees must accept that the job they have today may not exist, or will look radically different, in three years.

The window for adaptation is closing fast. We must act with urgency to ensure our workforce is resilient, ethical, and adaptable enough to survive the AI wave, rather than be swept away by it.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


Evolving Regulatory Landscape for Digital & Tech and the Latest Cybersecurity Act in Malaysia


By Thulasy Suppiah, Managing Partner of Suppiah & Partners &
Adjunct Professor Murugason R. Thangarathnam, Chief Executive Officer of Novem CS

Introduction

Malaysia has been resolutely updating its digital and technology regulations with forward-looking policies. They signify the nation’s aspirations to strengthen areas such as online safety, cybersecurity and data protection and governance, and to address the complex and global nature of the digital environment. Given the severity of potential harms, self-regulation by tech companies is insufficient to protect individuals and maintain trust. By strengthening data governance and establishing frameworks like the National Guidelines on AI Governance & Ethics, Malaysia is actively working to build a trusted and secure digital ecosystem for both consumers and businesses.

Several important developments have transpired in Malaysia’s digital regulatory landscape, especially in the last two years, indicative of the government’s strong commitment to cultivating a safe digital ecosystem. For businesses operating or looking to operate in Malaysia, especially those in the telecommunications, technology, information security, or other infrastructure sectors, let us walk you through these important developments.

First, the Ministry of Communications and Digital was separated into two ministries – the Ministry of Digital and the Ministry of Communications. The 2023 separation clarified the mandates for communications regulation versus digital governance. The Ministry of Digital now oversees the Personal Data Protection Department (PDPD) and, through its Minister Gobind Singh Deo, has proposed a Data Commission to implement the Data Sharing Act.

Then, in August 2024, the Cyber Security Act 2024 (Act 854) came into force. This landmark piece of legislation aims to strengthen the nation’s cyber defences and resilience against evolving cyber threats.

As of June 2025, major amendments to the Personal Data Protection Act (PDPA) took effect. The amendments include new requirements for mandatory data breach notification, the right to data portability, and the appointment of a Data Protection Officer (DPO). Businesses acting as data processors now face direct security obligations, while maximum fines for non-compliance have more than tripled to RM1,000,000.

Malaysia was the first ASEAN Member State to enact comprehensive data protection legislation, in 2010, but the recent amendments align Malaysia’s data protection standards more closely with influential international frameworks like the EU’s General Data Protection Regulation (GDPR).

This paper aims to break down the key components and implications of the Cyber Security Act 2024 (CSA), which is vital to protecting our digital environment and earning the trust of all Malaysians.

Overview of Malaysia’s Latest Cybersecurity Act

Key provisions and scope

The CSA 2024 establishes Malaysia’s digital defence framework by designating the National Cyber Security Agency (NACSA) as the national lead agency with the legislative power to ensure the effective implementation of the Act. It outlines the duties and powers of the Chief Executive of NACSA, as well as the functions and duties of the National Critical Information Infrastructure (NCII) sector leads and NCII entities.

The NCII is essentially the central nervous system of a country—the most vital computer systems, networks, and data that keep essential services such as banking, electricity, telecommunications, and agriculture running; the systems that absolutely must work for society to function normally. It is the information and digital technology so important to a nation that, if it were shut down, destroyed, or seriously damaged, the result would be a devastating impact on national security, the economy, or public health and safety.

The CSA sets the mandatory cybersecurity standards for NCII operators, and creates a licensing regime for cybersecurity service providers to regulate incident response and practice across the country. The Act also has extra-territorial application, to the extent that it imposes requirements for any NCII that “is wholly or partly in Malaysia”.

Objectives and regulatory framework

The primary goal of the CSA is to ensure a secure, trusted, and resilient cyberspace in Malaysia and to safeguard critical national functions. Its key objectives can be broken down as follows:

  • To enhance Malaysia’s overall cyber defence capabilities and resilience against emerging and sophisticated cyber threats.
  • To establish a comprehensive legislative framework for the protection of the National Critical Information Infrastructure (NCII).
  • To establish the necessary governmental structures and legal powers to oversee national cybersecurity policies, with the NACSA as the lead implementing and enforcement agency.
  • To regulate the quality and integrity of the cybersecurity services provided in Malaysia through a mandatory licensing regime.
  • To institute clear, mandatory standards for managing cyber threats and reporting cyber security incidents, particularly those affecting the NCII.

The CSA identifies the 11 sectors designated as NCII sectors, and mandates strict compliance for organisations operating within them.

These sectors, listed below, are now legally required to enhance their cyber resilience or face penalties:

  • Agriculture & Plantation
  • Banking & Finance
  • Defence & National Security
  • Energy
  • Government
  • Healthcare Services
  • Information (Communication & Digital)
  • Science, Technology, & Innovation
  • Trade, Industry, & Economy
  • Transportation
  • Water, Sewerage, & Waste Management

To manage the 11 NCII sectors, the Act allows the Minister to appoint multiple NCII Leads per sector for flexibility. All appointed Leads will be publicly listed on the NACSA website.

Enforcement mechanisms and penalties

The Act applies both to NCII entities and to licensed cybersecurity service providers (CSSPs), and the penalties for noncompliance are substantial, including large fines and long imprisonment terms.

The key mechanisms used to ensure compliance and investigate violations are:

Duty to Provide Information Relating to NCII: NCII Entities must provide all requested NCII information to the Sector Lead, automatically report the acquisition of any new NCII, and notify the Lead of any material changes to the NCII’s design, configuration, security, or operation. Failure to comply with any of these duties carries a penalty of a fine of up to RM100,000, imprisonment for up to two years, or both.

Duty to Implement the Code of Practice: NCII Entities must implement the measures, standards, and processes specified in the Code of Practice. However, they may use alternative measures if they can demonstrate that those measures provide an equal or higher level of NCII protection. Failure to comply can result in a fine of up to RM500,000, imprisonment for up to ten years, or both.

Duty to Conduct Cybersecurity Risk Assessment and Audit: NCII Entities must conduct mandatory cybersecurity risk assessments (at least annually) and audits (at least once every two years). The results must be submitted to the Chief Executive. Failure to conduct these assessments or submit the reports can lead to a fine of up to RM200,000 or imprisonment for a term not exceeding three years, or both.

Duty to Notify Cyber Security Incidents: NCII Entities have a strict legal duty to immediately report cyber security incidents to the Chief Executive and their Sector Lead (with a detailed report required within a short timeframe, typically six hours for initial details). The initial notification should describe the cybersecurity incident, its severity, and the method of discovery. A full report must be submitted within 14 days, including details such as the number of hosts affected, information on the cybersecurity threat actor, and the incident’s impact. Noncompliance invites penalties of up to RM500,000, imprisonment for a term not exceeding ten years, or both.

Cybersecurity Incident Response Directive: Upon receiving notification of a cybersecurity incident from an NCII Entity, the Chief Executive will investigate and may issue a directive on the measures necessary to respond to or recover from the incident. The term “directive” underscores the importance of compliance. Failure to adhere to these directives may result in a fine of up to RM200,000 or imprisonment for a term not exceeding three years, or both.

Licensing: The CSA establishes a licensing regime for individuals and entities providing prescribed cybersecurity services. There are currently two categories of prescribed cyber security services: (i) managed security operation centre monitoring services; and (ii) penetration testing services. To obtain a licence, an application must be made to the Chief Executive with the prescribed fee and required documents (including qualifications and identification). Applicants must meet prerequisites set by the Chief Executive and have no convictions for offences involving fraud, dishonesty, or moral turpitude. The Chief Executive can approve the licence (with variable conditions) or refuse it (stating the grounds). Operating without a required licence is an offence: providing or advertising services without a licence will incur a fine of up to RM500,000, imprisonment for up to ten years, or both, while a breach of licence conditions will incur a fine of up to RM200,000, imprisonment for up to three years, or both.

A broad extra-territorial scope: The CSA’s authority extends beyond Malaysia’s physical borders. The extraterritorial reach is particularly important for foreign companies that operate services or infrastructure in Malaysia, especially those designated as NCII Entities. If a foreign multinational company’s Malaysian subsidiary owns or operates NCII in Malaysia, the foreign parent company and its personnel can potentially face legal consequences under the CSA for offences or non-compliance related to that Malaysian NCII. Foreign-based CSSPs whose services (like managed security or penetration testing) affect NCII within Malaysia must also comply with the Act’s licensing requirements and standards.

Comparative Analysis with Singapore

Malaysia’s Cyber Security Act 2024 (CSA) is fundamentally similar to Singapore’s Cybersecurity Act 2018 (SG CA) – both are national laws designed to protect critical digital infrastructure. Both Acts establish a dedicated national agency with primary authority: the National Cyber Security Agency (NACSA) in Malaysia and the Cyber Security Agency in Singapore.

While both Acts are primarily designed to protect critical information infrastructure – the NCII in Malaysia and the Critical Information Infrastructure (CII) in Singapore – the main differences lie in the severity of penalties, the scope of regulation, and specific reporting requirements.

Malaysia’s penalties for non-compliance are generally harsher. For instance, our maximum penalty is a fine of up to RM500,000 and/or imprisonment of up to 10 years for serious noncompliance (e.g., failure to report an incident or implement the Code of Practice). Singapore’s SG CA 2018 was less severe, but its 2024 amendments have increased penalties, allowing for civil penalties of up to S$500,000 (RM1,626,160) or 10 per cent of the entity’s annual turnover, whichever is greater. However, the maximum penalty for certain core breaches (like failing an audit) in Singapore is generally lower than Malaysia’s for similar offences.

Malaysia’s CSA also relies primarily on criminal penalties (fines and/or imprisonment) for non-compliance, while Singapore employs a flexible mix of civil and criminal penalties: the Cyber Security Agency can pursue civil penalties instead of criminal ones for certain breaches.

In terms of the scope of incident reporting, the CSA primarily focuses on incidents directly affecting the NCII entity itself. Singapore’s SG CA has a broader scope following its 2024 amendments, requiring CII owners to report incidents involving their third-party vendors and supply chains.

Malaysia’s CSA mainly focuses on regulating NCII Entities and CSSPs. The 2024 amendments to the SG CA expanded its regulatory scope to include new categories such as Foundational Digital Infrastructure (FDI) providers (e.g., cloud services and data centres, even if they do not directly own a CII), Entities of Special Cybersecurity Interest (ESCIs), and Systems of Temporary Cybersecurity Concern (STCCs).

The SG CA’s amendments also allow the Cyber Security Agency to regulate systems wholly located outside Singapore if the owner is in Singapore and the system provides an essential service to Singapore. The Singaporean amendment focuses on the location of the controlling entity (the owner/operator) and the impact of the service on Singapore: if a Singapore-based entity controls a system that is critical to Singapore’s essential services, that system is covered, even if it is physically entirely offshore. The CSA’s extraterritorial scope, by contrast, applies to NCII that is wholly or partly in Malaysia. In essence, the provision ensures that the law has the power to protect Malaysia’s vital national functions from cyber threats, regardless of where the attacker or the negligent party is situated, if the affected critical system has a link to the country’s NCII entities. If a component or the operation itself is linked to Malaysia, it is covered.

In terms of similarities between the two Acts, owners and operators of the designated critical infrastructure must comply with similar core duties: conducting risk assessments and audits, adhering to Codes of Practice/Standards, and reporting cyber security incidents.

Both Acts establish a licensing regime for CSSPs to regulate the quality of services, especially those provided to critical sectors. Both laws have provisions for offences committed outside of their respective countries if those offences impact the nation’s critical infrastructure.

Do Malaysia’s cyber laws measure up to EU standards?

Malaysia’s CSA bears a strong resemblance to the European Union’s primary cybersecurity regulation, the Network and Information Security Directive 2 (NIS2).

NIS2 is the EU’s key framework for critical and important sectors, and it significantly broadens the scope and imposes stricter requirements than the original NIS Directive.

The similarities between Malaysia’s CSA and the EU’s NIS2 lie in their sector focus and core requirements: both mandate risk management strategies, incident reporting and breach notification procedures, clearly defined governance roles, regular security audits and vulnerability assessments, and resilience testing to ensure readiness against threats.

NIS2 is mandatory across the EU and brings higher expectations — and penalties — than before. A significant difference between the CSA and NIS2 is that, under NIS2, noncompliance can lead not only to significant fines but also to personal liability for company leadership.

The GDPR is the EU’s flagship regulation for data privacy and security. It has become the de facto global benchmark for privacy regulation, influencing new laws in countries across the world (including the recent amendments to Malaysia’s PDPA). It sets the standard for how organisations must handle personal data, regardless of whether they are based in the EU or simply processing data from EU residents. The Malaysian government’s 2024 amendments to the PDPA bring it closer to the standards of the GDPR, but key differences remain.

The scope of application of the GDPR is very broad: it applies to personal data processing across all sectors, including commercial, non-commercial, social, and governmental activities (except where exempted). The Malaysian PDPA, by contrast, primarily applies to the processing of personal data in the context of “commercial transactions”, and the Federal and State Governments are largely exempt.

The GDPR applies to all organisations—regardless of size or sector—that collect or process personal data of individuals in the EU. This includes companies based outside the EU if they target or track EU users (e.g. via websites, apps, or services).

While the PDPA also has an “extraterritorial effect”, it applies to entities established outside Malaysia only if they use equipment in Malaysia to process personal data or use data processors in Malaysia. The PDPA does not apply to the Malaysian Federal Government, the State Governments, or any personal data processed outside of Malaysia unless it is intended for further processing in the country.

The GDPR sets a high standard for consent – it must be “freely given, specific, informed, and unambiguous”. Implied consent is considered insufficient. The PDPA only requires explicit consent for Sensitive Personal Data, but implied consent can be sufficient in some other cases.

Penalties under the GDPR can reach up to €20 million (RM97,798,000.00) or 4 per cent of global annual turnover, whichever is higher. Beyond compliance, the GDPR builds trust with customers and business partners through transparent data practices. By contrast, the PDPA’s recent amendments (in 2024) increased its maximum fine to RM1 million (approx. €200,000 to €250,000) and/or imprisonment. The key difference is that PDPA penalties are fixed monetary fines, not calculated as a percentage of a company’s global annual turnover.

While the PDPA is a strong domestic law that is actively evolving to be more compatible with the GDPR, particularly in areas like breach notification, data portability, and requirements for the Data Protection Officer (DPO), its penalties and scope remain less comprehensive.

Key Challenges and Opportunities in Malaysia

The CSA 2024 introduces significant changes that will have far-reaching implications for businesses operating in Malaysia, particularly those designated as NCII entities.

For NCII entities, compliance could mean increased costs, particularly for enhanced cybersecurity infrastructure and personnel, alongside potential penalties for noncompliance. Meeting the new requirements will involve upgrading existing systems, implementing new security protocols, and potentially hiring additional cybersecurity professionals. The requirement for regular risk assessments and audits will also incur ongoing costs.

Similarly, as Malaysia embarks on implementing data portability, the broad, non-sector-specific scope of these rights may challenge businesses across all industries, requiring them to develop secure processes and technologies, which could increase costs, especially for smaller enterprises.

On the flip side, the CSA also creates significant opportunities across the cybersecurity, technology, and professional services sectors, with an explosion in demand for cybersecurity products and services across the 11 designated NCII sectors. It has created high demand for qualified firms to conduct mandatory, periodic risk assessments, compliance audits, and gap analyses for hundreds of NCII entities, and for the purchase and implementation of security controls, software, and hardware to meet the new, stringent technical standards in the Codes of Practice. There will be an increased need for Managed Detection & Response (MDR) services to ensure incidents are detected and reported to NACSA within the required short timelines. Finally, licensed providers gain a competitive edge and become the mandated choice for NCII entities seeking to outsource critical security functions.

Conclusion

Malaysia’s CSA 2024 marks a significant step forward in strengthening the nation’s digital defences through a more coordinated national effort, and aims to create a more secure digital environment for both local and international companies operating in Malaysia. It signifies the country’s move from a largely voluntary and advisory approach to a mandatory, punitive, and focused regulatory framework for critical sectors. Future legislative changes may continue this trend, potentially broadening the scope to include areas like Virtual Critical Information Infrastructure (CII).

However, businesses are still struggling with full execution, staff shortages, incident reporting hurdles, and disparate levels of preparedness. Feedback from early adopters (as reported in an article by Bank Info Security in September 2025) raised questions about how much detail should go into six-hour incident reports, how severity thresholds should be defined, and how to align overlapping obligations under the PDPA and the CSA. Clearly, a considerable amount of work remains for businesses to grasp what compliance means in practice.

While recent laws provide a strong foundation, questions remain about Malaysia’s readiness to address emerging technologies through legislation. The current legal framework still lacks specific laws for Artificial Intelligence (AI) and quantum technology.

For AI, only the voluntary, non-binding National Guidelines on AI Governance and Ethics (AIGE) exist, and the Digital Minister has noted that existing general laws are inadequate for AI-driven cybercrime. Similarly, the exponential growth of IoT in smart cities, agriculture, transportation, and energy expands the attack surface, necessitating secure device design standards, continuous monitoring, and anomaly detection frameworks. Proactive regulation and industry collaboration will enable Malaysia to harness technological innovation while preserving cybersecurity integrity.

Meanwhile, specific, binding quantum cybersecurity laws remain under development. Although the CSA is a key step, the translation of domestic agreements into concrete, real-time mechanisms for cross-border cybersecurity collaboration and policy harmonisation is still a work in progress. Addressing these gaps will require targeted policies, added responsibilities to current agencies, or the creation of new departments.

Recommendations for stakeholders and policymakers

To further strengthen Malaysia’s cybersecurity posture, a concerted emphasis on public–private partnerships will be crucial. Such cooperation can foster information sharing, threat intelligence exchange, and coordinated incident response across sectors. Sector-specific cybersecurity forums, joint simulation exercises, and innovation incentive programmes can significantly enhance national cyber resilience. By cultivating trusted alliances that go beyond legislative mandates, Malaysia can better anticipate and mitigate the increasingly sophisticated threats confronting its digital economy.

Capacity building is also essential for Malaysia’s cybersecurity ambitions. The persistent shortage of qualified professionals impedes effective implementation of CSA requirements across both public agencies and private enterprises. Expanding cybersecurity education and training, introducing targeted scholarships, and developing a robust ecosystem of certification and professional development programmes are necessary to close the talent gap and equip future leaders with expertise in emerging threat domains such as AI-driven attacks and quantum computing risks, thereby ensuring the long-term sustainability of Malaysia’s cyber defence capabilities.

As cyber threats are dynamic in nature, Malaysia’s cybersecurity governance must remain adaptive and forward-looking. Ongoing regulatory evolution is essential to address fast-changing technological landscapes—particularly around AI governance, IoT proliferation, and cloud security. Establishing a regulatory sandbox, encouraging innovation-friendly policies, and implementing periodic legislative reviews will help balance stringent security measures with flexibility for digital growth. This will ensure Malaysia remains agile, resilient, and recognised as a trusted digital hub in Southeast Asia and beyond.

Additional Outlook for Malaysia’s regulatory framework – what is in store

Just this month, Fintech News Malaysia reported that, to counter rising and increasingly sophisticated cybercrime, Malaysia is implementing a multi-pronged national strategy focused on structural and legal reform. At its core is the introduction of a comprehensive Cyber Crime Bill to replace outdated legislation, granting law enforcement the legal strength needed to address complex digital crime and enhance national security. Furthermore, NACSA is spearheading the creation of a new Centre for Cryptology and Cyber Security Development, envisioned as the national hub for advancing digital resilience and sophisticated cyber defences. Finally, to ensure a faster and more efficient response to scams, the National Scam Response Centre (NSRC) will be restructured under the Royal Malaysia Police (PDRM) to tighten coordination, accelerate incident handling, and streamline investigations.

Likewise, ongoing consultations on Data Protection Impact Assessments (DPIAs), Privacy-by-Design, and automated decision-making show that Malaysia is proactively addressing future technological challenges. These consultations are being led by the Personal Data Protection Department (PDPD) and are part of a broader effort to update the regulatory landscape following the Personal Data Protection (Amendment) Act 2024. By initiating public consultation on these advanced topics, Malaysia is effectively future-proofing its data protection laws to govern the ethical and secure use of emerging technologies.



[Feature Article] The Star Newspaper: Making Malaysia’s AI Budget Deliver


Published by The Star on 13 Oct 2025

by Thulasy Suppiah, Managing Partner

Budget 2026 unequivocally signals Malaysia’s all-in strategy on Artificial Intelligence, positioning it as a core pillar of our national future. The financial commitments are broad and substantial, spanning a nearly RM5.9 billion allocation for cross-ministry research and development, a RM2 billion Sovereign AI Cloud, and various funds to spur industry training and high-impact projects. This ambition is commendable, but ambition, even when well-funded, is no guarantee of success. The critical question now shifts from “what” to “how,” and it is in the execution where our grand vision will either take flight or falter.

A central pillar of our AI strategy is the National AI Office (NAIO), and its RM20 million allocation is a welcome start. The challenge ahead is not a lack of commitment from our various ministries and agencies, which are already pursuing valuable AI initiatives. Rather, it is the risk of fragmentation. To transform these individual efforts into a powerful, cohesive national programme, NAIO’s role must evolve beyond coordination to strategic command. This does not mean replacing the excellent work being done, but empowering NAIO with a cross-ministry portfolio view to prevent redundancy, harmonise standards, and ensure every ringgit of public funds is maximised. By creating a central registry of government AI projects and a single outcomes framework, we can amplify the impact of each agency’s work, ensuring that parallel efforts are converted into a unified, national success story.

Similarly, the budget’s emphasis on talent development is rightly placed. But training more AI graduates is only half the equation; we must ensure our industries are ready to integrate them effectively. Simply funding courses is not enough. We should consider making training grants conditional on tangible outcomes: verified industry placements for graduates, a focus on open, cross-platform tools to avoid proprietary lock-ins, and requirements for short, in-situ implementation cycles with documented results. This ensures we are building a workforce for the real world, not just for the classroom.

The budget’s focus on sovereignty, signalled by the launch of the ILMU language model and the Sovereign AI Cloud, marks a laudable inflection point. But true sovereignty is not merely about where data resides; it is about who sets the algorithmic and access rules that govern it. The devil, as always, lies in the details. Who will decide which datasets are hosted? How will compute resources be priced for local firms? And most importantly, what are the adoption mechanisms that will compel ministries and SMEs to actually use it? Without clear answers and a robust adoption strategy, even a sovereign cloud risks becoming an impressive but idle monument—a white elephant of good intentions.

One of the budget’s most prescient moves is tasking MIMOS with deepfake detection. This is not a trivial matter; it is a direct response to a clear and present threat. Over the past three years, authorities have had to request the takedown of over 40,000 pieces of AI-generated disinformation. The shocking case in Kulai, where a student allegedly used AI to create explicit deepfakes of schoolmates, brings this danger into sharp focus. This initiative is a crucial and necessary step towards safeguarding our national security and public safety.

Budget 2026 has laid the financial groundwork. It has signalled our intent to the world. If Malaysia is to truly become an AI nation by 2030, the focus must now pivot from macro announcements to micro-implementation. The next budget must not only allocate for global data centres and grand projects, but for the hard, unglamorous work of driving local AI adoption across our SMEs and public services. That is the true measure of a national programme.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


AI, Tenders, and the Trust Deficit

Published by The Star on 26 Sep 2025

by Thulasy Suppiah, Managing Partner

Around the world, the conversation about Artificial Intelligence in public procurement is dominated by the promise of efficiency. The focus is on streamlining processes, automating tasks, and achieving significant cost savings. Studies, such as a recent one by Boston Consulting Group, project remarkable outcomes like up to 15% in savings and a significant reduction in human workload. Yet, in our Malaysian context, to focus solely on these benefits would be to miss a far more critical opportunity: leveraging AI as a frontline tool in the battle against corruption.

The timing could not be more urgent. The recent MACC revelation that Malaysia lost RM277 billion over six years, much of it through collusion in public tenders, is a stark reminder of the deep-seated challenge we face. As we grapple with this reality, the small nation of Albania has embarked on a controversial experiment. Faced with its own entrenched corruption, its government has appointed an AI digital assistant to oversee its entire public procurement process, hoping to create a system free of human bias and graft—a move now facing intense scrutiny from technical and legal experts.

The potential benefits of deploying such technology in Malaysia are immense. Imagine an AI system as an incorruptible digital auditor, capable of analyzing thousands of bids simultaneously. It could flag suspicious patterns invisible to the human eye—interconnected companies winning contracts repeatedly or bids that are consistently just below the threshold for extra scrutiny. By ensuring every decision is data-driven and transparent, we could theoretically restore fairness, save billions in public funds, and begin to rebuild the deep deficit of public trust.

However, recent developments show we must proceed with extreme caution. Experts are now questioning the entire premise of an “incorruptible” AI, pointing out that any system is only as good as the data it is fed. As one political scientist warned, if a corrupt system provides manipulated data, the AI will merely “legitimise old corruption with new software.” This also raises a critical question of accountability—an issue so serious it is being challenged in Albania’s Constitutional Court. If a machine makes a flawed decision, who is responsible?

The most prudent path for Malaysia, therefore, is likely not the appointment of a full “AI minister.” Instead, we should explore a more pragmatic, hybrid model. Let us envision AI not as a replacement for human decision-makers, but as a powerful, mandatory tool to support them. Our MACC, government auditors, and procurement boards could be equipped with AI systems designed to act as a first line of defense. This “digital watchdog” could flag high-risk tenders for stringent human review, catching cases that might otherwise be missed due to simple human oversight or inherent bias. Furthermore, its data-driven recommendations would serve as objective evidence of impartiality, making it much harder for legitimate cases to be dismissed due to personal or political agendas.

The unfolding experiment in Albania, with all its emerging challenges, has opened a vital, global conversation. For a nation like ours, which has lost so much to this long-standing problem, ignoring the potential of technology to enforce integrity is no longer an option. It is time to seriously innovate our way towards better governance.


Key Trends in Medicine: AI Powered Healthcare Innovations

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction

A shortage of 11 million healthcare workers is expected by 2030, the World Economic Forum reports, but it is hopeful that advances in artificial intelligence (AI) will help bridge that gap. With its ability to ease tasks, summarise large data sets, save time and achieve higher accuracy than humans, it is a wonder that adoption of AI in the healthcare sector remained “below average” for so long. However, as AI gets smarter and learns better, more and more areas of healthcare are embracing automation. Here are some areas in healthcare that are benefitting from the latest AI and deep learning (DL) applications.

Precision Diagnosis

For strokes caused by a blood clot, time is of the essence. Doctors need to know the initial onset time to determine the right treatment.


Researchers from Imperial College London, the University of Edinburgh, and Technical University of Munich have enhanced stroke timing estimation using AI. They trained the algorithm they developed on a dataset of 800 brain scans with known stroke times, allowing the model to independently identify affected regions in CT scans and estimate stroke timing.


The team then tested the algorithm on data from almost 2,000 other patients. The software proved to be twice as accurate as using a standard visual method. The algorithm also excelled in estimating the “biological age” of brain damage, indicating how much the damage has progressed and its potential reversibility.
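As a hedged illustration of the workflow described above, the sketch below regresses a single hypothetical imaging feature against stroke onset time on synthetic data. The feature name ("lesion water uptake"), the 2%-per-hour relationship, and the noise level are all invented for illustration and are not taken from the study; the real model learns from many features extracted from full CT scans. The point here is only the train-on-known-times, evaluate-on-held-out-patients pattern.

```python
import random

random.seed(0)

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

def make_scans(n):
    """Synthetic (water_uptake, onset_hours) pairs: uptake grows ~2%/hour."""
    times = [random.uniform(0, 12) for _ in range(n)]
    uptake = [2.0 * t + random.gauss(0, 1.0) for t in times]
    return uptake, times

# 800 scans with known onset times, mirroring the study's training set size.
train_x, train_y = make_scans(800)
a, b = fit_line(train_x, train_y)   # learn the uptake -> onset mapping

# Held-out evaluation on ~2,000 synthetic "patients".
test_x, test_y = make_scans(2000)
mae = sum(abs((a * x + b) - t) for x, t in zip(test_x, test_y)) / len(test_x)
print(f"learned slope: {a:.2f} h per % uptake; MAE: {mae:.2f} hours")
```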


The research study leader, Dr. Paul Bentley of Imperial College London, said the accuracy of this data will help doctors make emergency decisions and administer the best response to stroke patients.

Higher Accuracy

Healthcare powered by data and smart automation is also helping to reduce misdiagnosis.
Among the most common mistakes made at accident and emergency (A&E) units in the UK are missed fractures: as many as 10 per cent of fracture cases are either overlooked or diagnosed late by medical professionals.

This could lead to further injury or harm to the patient, worsening their condition, delaying treatment, and making it harder for hospitals to treat and turn over patients quickly.
The National Health Service (NHS) in the UK has now been given the green light by the National Institute for Health and Care Excellence (Nice) to use AI as a way of improving fracture detection when examining X-rays.
Clinical evidence suggests that using AI may improve detection in scans, compared with a medical professional reviewing on their own, “without increasing the risk of incorrect diagnoses”, Nice reportedly told The Guardian.

Nice says the technology is safe, reliable and could reduce the need for follow-up appointments.

AI-powered Assistance

Imagine if you could avoid hours of waiting in crowded rooms just to have your healthcare questions answered by a doctor. How helpful would it be to minimise the number of times you had to pay ever-increasing clinical consultation costs?

AI virtual assistants are the saviour that overworked clinicians, hospital staff, and anxious patients have been waiting for. They are AI-powered apps that chat with patients, clinicians, and staff by voice or text.

Digital assistants speed up triage, answer patient questions, schedule appointments, and automate repetitive tasks that traditionally required many hands and great effort. They can even help explain lab results. This frees staff to focus on care, cuts down wait times, and keeps costs in check.

Virtual assistants can appear as chatbots on hospital websites, voice hubs at nursing stations, or prompts on tablets in waiting rooms. In an AI-powered chat, a patient with an inflamed toe might type in their symptoms, and the assistant flags any danger signs (like a high fever) before suggesting home care or a quick clinic visit. On the admin side, digital assistants sort schedules, handle billing questions, and coordinate referrals.

That the global AI virtual assistant market in healthcare reached USD677.93 million (RM2,869 million) in 2023 and is estimated to hit USD9,295.63 million (RM39,339.11 million) by 2030 is testament to its need and demand.
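Those two figures imply a striking growth rate. Assuming seven years of growth between the 2023 and 2030 figures quoted above, the implied compound annual growth rate works out to roughly 45 per cent a year:

```python
# Implied compound annual growth rate (CAGR) of the healthcare AI virtual
# assistant market, using the figures quoted above: USD 677.93 million in
# 2023 growing to an estimated USD 9,295.63 million by 2030.
start, end, years = 677.93, 9295.63, 2030 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 45% a year
```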

Machine Learning Applications

For many chronic diseases, by the time they present symptoms and the individual goes to the doctor because of an ailment or visible observations, it is often too late.

A new AI machine learning (ML) model can detect the presence of certain diseases before the patient is even aware of any symptoms, according to its maker AstraZeneca.

Using medical data from 500,000 people who are part of a UK health data repository, the machine could predict with high confidence a disease diagnosis many years later.

Slavé Petrovski, who led the research, told Sky News: “We can pick up signatures in an individual that are highly predictive of developing diseases like Alzheimer’s, chronic obstructive pulmonary disease, kidney disease and many others.”

Another example where machine learning has made great strides is a technology developed by IBM Watson Health and Medtronic to continually analyse how an individual’s glucose level responds to their food intake, insulin dosages, daily routines, and other factors, such as information provided by the app user.

For example, are certain foods worsening the patient’s glucose control? Are there particular days or times where a person’s glucose goes high or low? The Sugar.IQ diabetes management application (App) leverages AI and analytic technologies to help people with diabetes uncover patterns that affect their glucose levels. This allows them to make small adjustments throughout the day to help stay on track.

Sugar.IQ provides information that shows how lifestyle choices, medications, and multiple daily injections impact diabetes management and the time spent with glucose in the target range. It provides individualised guidance for understanding and managing daily diabetes decisions, so that people on multiple daily insulin injections have more freedom to enjoy life.
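As a rough sketch of how such an app might surface time-of-day patterns, the snippet below groups glucose readings by hour and flags hours with recurring lows or highs. The readings, thresholds, and labels are invented for illustration; Sugar.IQ's actual analytics are proprietary.

```python
from collections import defaultdict
from statistics import mean

# Invented sample data: (hour of day, glucose reading in mg/dL).
readings = [
    (7, 95), (7, 100), (7, 98),       # mornings: in range
    (15, 62), (15, 68), (15, 65),     # mid-afternoon: recurring lows
    (22, 185), (22, 192), (22, 201),  # late evening: recurring highs
]

LOW, HIGH = 70, 180  # assumed target-range bounds

# Group readings by hour of day.
by_hour = defaultdict(list)
for hour, glucose in readings:
    by_hour[hour].append(glucose)

# Flag hours whose average falls outside the target range.
patterns = {}
for hour, values in sorted(by_hour.items()):
    avg = mean(values)
    if avg < LOW:
        patterns[hour] = "recurring low"
    elif avg > HIGH:
        patterns[hour] = "recurring high"

for hour, label in patterns.items():
    print(f"{hour:02d}:00 -> {label}")
```

A pattern like "recurring low at 15:00" is exactly the kind of insight that lets a user make the small daily adjustments the app is designed to prompt.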

Idiopathic Pulmonary Fibrosis (IPF) is a severe, chronic lung disease that progressively impairs lung function. It affects approximately five million people worldwide with a median survival of only three to four years. Available treatments can only slow its progression, and are unable to halt or reverse the disease.

AI significantly accelerated the drug discovery process for IPF and reduced the timeline from target identification to preclinical candidate selection to just 18 months – a major advancement in the efficiency of pharmaceutical research.

Insilico Medicine used AI-driven algorithms to design Rentosertib to treat IPF. It is the first AI-designed drug – where both the biological target and the therapeutic compound were discovered using generative AI.

Insilico Medicine is now engaging with global regulatory authorities to proceed with further trials aimed to evaluate Rentosertib’s efficacy and expedite its path to regulatory approval. If successful, Rentosertib could become the first AI-discovered therapy to reach patients, potentially transforming the treatment landscape for IPF.

AI is transforming drug discovery, delivery and administration. AI-designed drugs show 80-90 per cent success rates in Phase I trials, compared to 40-65 per cent for traditional drugs. AI-based tools such as ML and DL reduce development timelines from more than 10 years to potentially 3-6 years and cut costs by up to 70 per cent through better compound selection.

Assisting in Surgical and Clinical Procedures

It may be too soon to speak of robots performing all the procedures in a surgery, but in operating theatres, AI and robotics are already assisting surgeons to handle surgical instruments, enhance precision, reduce invasiveness, and improve patient recovery.

The emergence of deep neural networks, combined with modern computational power, has produced reliable automation of certain tasks in medical imaging, including time-consuming and tedious workflows such as organ segmentation. Segmentation yields measurements and the automatic extraction of quantitative features that could not feasibly be performed manually in everyday clinical practice.

In aortic and vascular surgery clinics, for instance, routine clinical follow-up of abdominal aortic aneurysms (AAAs) posed challenges. Longitudinal comparison of diameter measurements across consecutive computed tomography angiography (CTA) exams was cumbersome: it required recalling multiple prior exams from the hospital’s picture archiving and communication system, measuring them, and comparing the measures.

Augmented radiology for vascular aneurysm (ARVA) was designed to include automatic fetching of prior CTAs for separate analysis and automatic longitudinal comparison of each aortic segment. The use of cloud-based computing services enables processing of the multiple CTA data sets and the secure return of the report back to the hospital network within minutes. In the hospital, these reports are then automatically identified and placed into the patient’s hospital file or in any review workstation. This saves substantial time in everyday aortic clinic processes.

Early Detection of Epidemics and Their Spread

AI and ML technologies can also forecast the onset of certain epidemics and track their global spread using historical data available online, satellite data, current social media posts, and other sources. ProMED-mail, an online reporting tool that tracks epidemic reports from around the world, is perhaps the best example of a monitor that can help check an epidemic before it causes significant harm.

Operational Optimisation of Healthcare Systems

According to the National Library of Medicine, a typical nurse in the US devotes 25 per cent of their working hours to administrative and regulatory tasks. Technology can readily take over these tedious operations. Today, hospitals are using AI to predict peak times, improve bed management, and enhance staff scheduling for optimised resource allocation. For example, one hospital used AI-driven predictive models to adjust staffing based on patient volume, reducing wait times and improving patient throughput.

AI models are also being used in emergency departments to predict patient admission rates, reducing bottlenecks and improving care delivery. By forecasting the number of patients arriving at the ED, hospitals can optimise their staff allocation, reduce patient wait times, and provide faster care.
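A minimal sketch of this forecasting idea is below. The arrival counts and the patients-per-nurse ratio are invented for illustration; real hospital systems use far richer models and many more inputs (weather, local events, seasonality, and so on).

```python
from math import ceil
from statistics import mean

# Invented sample data: weekday -> ED arrivals observed over the past 4 weeks.
history = {
    "Mon": [140, 152, 148, 145],
    "Fri": [190, 205, 198, 200],
    "Sun": [110, 102, 108, 104],
}
PATIENTS_PER_NURSE = 25  # assumed staffing ratio, purely illustrative

# Naive per-weekday forecast: the mean of recent observations.
forecast = {day: mean(counts) for day, counts in history.items()}

# Size the roster to the forecast, rounding up.
staffing = {day: ceil(f / PATIENTS_PER_NURSE) for day, f in forecast.items()}

for day in history:
    print(f"{day}: expect ~{forecast[day]:.0f} arrivals, roster {staffing[day]} nurses")
```

Even this crude moving average captures the core mechanism: forecast demand first, then allocate staff to it, rather than staffing every shift identically.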

It’s not tech vs. human

While AI is making great inroads in healthcare, the complete replacement of medical professionals in medicine is still a long way off. The need for human interaction in healthcare is likely to keep AI on the sidelines as a complement, rather than a substitute, for doctors.

The Medical Futurist put forward five fundamental reasons why AI won’t replace doctors.

  • Empathy – A doctor-patient relationship is built on empathy and trust: listening and responding in a way that helps the patient feel understood. These are qualities that cannot be fully replicated by artificial intelligence, and very few people are likely to trust an algorithm with life-altering decisions.

  • Physicians have a non-linear working method for arriving at a diagnosis – no algorithm or robot has the creativity and problem-solving skills this requires.

  • Complex digital technologies require competent professionals – It is more worthwhile to programme AI with those repetitive, data-based tasks, and leave the complex analysis/decision to the complex human brain.

  • There will always be tasks robots and algorithms cannot perform – like the Heimlich maneuver.

  • It has never been tech vs. human – the goal has always been to use tech to help humans.

Ethical and Regulatory Considerations

Regulating AI in the healthcare sector is proving to be a complex and sensitive challenge. While the benefits of software as a medical device (SaMD) are great, patients still need protection from defective diagnosis, unacceptable use of personal data and bias built into algorithms.

The growing integration of AI and ML in drug development demands proactive management of ethical and regulatory challenges to ensure safe applications.

In response, regulatory bodies like the United States Food and Drug Administration and the European Medicines Agency are actively developing AI safety parameters and promoting diverse population validation, informed by detailed regulatory guidelines for robust, ethical AI technologies.

The FDA’s AI/ML SaMD Action Plan focuses on regulating software as a medical device:

  • Predetermined Change Control Plan (PCCP): Allows for modifications to AI/ML software over time, ensuring continuous monitoring and updates while maintaining safety and effectiveness. The basic idea is that as long as the AI continues to develop in the manner predicted by the manufacturer it will remain compliant. Only if it deviates from that path will it need re-authorization.

  • Good Machine Learning Practices (GMLP): Guidelines to evaluate and improve machine learning algorithms for medical devices.

  • Transparency: Efforts to ensure clear communication about AI-enabled devices to patients and users.

In the United Kingdom, the Regulatory Horizons Council of the UK, which provides expert advice to the UK government on technological innovation, published “The Regulation of AI as a Medical Device” in November 2022. This document considers the whole product lifecycle of AI-MDs and aims to increase the involvement of patients and the public, thereby improving the clarity of communication between regulators, manufacturers, and users.

The National Medical Products Administration (NMPA) of China, which provides regulatory oversight on medical products, published the “Technical Guideline on AI-aided Software” in June 2019. This guideline highlighted the characteristics of deep learning technology, controls for software data quality, valid algorithm generation, and methods to assess clinical risks.

Then in July 2021, the NMPA released the “Guidelines for the Classification and Definition of Artificial Intelligence-Based Software as a Medical Device”, which includes information on the classification and terminology of AI-MDs, the safety and effectiveness of AI algorithms, and whether AI-MDs provide assistance in decision making such as clinical diagnosis and the formulation of patient treatment plans.

Later, in 2022, the Centre for Medical Device Evaluation under the NMPA published the “Guidelines for Registration and Review of Artificial Intelligence-Based Medical Devices”. These guidelines provide standards for the quality management of software and cybersecurity of medical devices taking into consideration the entire product’s lifecycle.

Perhaps the European Union’s AI Act has provided the most stringent standards for regulating SaMDs.

Under the Act, AI systems such as those in AI/ML-enabled medical devices are classified as “high-risk”. This is the highest risk classification for permitted uses of AI, and it triggers a cascade of compliance requirements. Risk management is the focal point, and is intertwined with the EU MDR risk-management system to identify, evaluate, and mitigate the ‘reasonably foreseeable risks’ that high-risk AI systems can pose to health, safety, or fundamental rights such as privacy and data protection.

The EU AI Act’s extra-territorial reach is akin to the EU General Data Protection Regulation (GDPR), transcending European borders and impacting international AI system providers and deployers. It applies to ‘providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country’, and to providers and deployers established outside the EU if ‘the output produced by the system is used in the EU’.

Whether any of these regulatory frameworks will actually ensure public trust and compliance while still fostering innovation will depend very much on continuous monitoring and engagement with feedback from all stakeholders including scientists, doctors and patients.

Regulations should be robust and allow for continuous improvement to ensure they achieve their intended purpose.


AI's New Watchdog Role: A Necessary Evil or a Step Too Far?

Published by New Straits Times and The Star on 11 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent disclosure by OpenAI that it is scanning user conversations and reporting certain individuals to law enforcement is a watershed moment. This is not merely a single company’s policy update; it is the opening of a Pandora’s box of ethical, legal, and societal questions that will define our future relationship with artificial intelligence.

On the one hand, the impulse behind this move is tragically understandable. These powerful AI tools, for all their potential, have demonstrated a capacity to cause profound real-world harm. Consider the devastating case of Adam Raine, the teenager who died by suicide after his anxieties were reportedly validated and encouraged by ChatGPT. In the face of such genuine, actual harm, the argument for intervention by AI operators is compelling. A platform that can be used to plan violence cannot feign neutrality.

On the other hand, the solution now being pioneered by an industry leader is deeply unsettling. While OpenAI has clarified it will not report instances of self-harm, citing user privacy, the fundamental act of systematically scanning all private conversations to preemptively identify other threats sets a chilling, Orwellian precedent. It inches us perilously close to a world of pre-crime, where individuals are flagged not for their actions, but for their thoughts and words. This raises a fundamental question: where do we draw the line? Should a user who morbidly asks any AI “how to commit the perfect murder” be arrested and interrogated? If this becomes the industry standard, we risk crossing over into a genuine dystopia.

This move is made all the more problematic by the central contradiction it exposes. OpenAI justifies this immense privacy encroachment as a necessary safety measure, yet it simultaneously presents itself as a staunch defender of user privacy in its high-stakes legal battle with the New York Times. It cannot have it both ways. This reveals the untenable position of a company caught between the catastrophic consequences of its own technology and a heavy-handed response that flies in the face of its public promises—a dilemma that any AI developer adopting a similar watchdog role will inevitably face.

We are at a critical juncture. The danger of AI-facilitated harm is real, but so is the danger of ubiquitous, automated surveillance becoming the norm. This conversation, sparked by OpenAI, cannot remain confined to the tech industry and its regulators; it is now a matter for society at large. We urgently need a broad public debate to establish clear and transparent protocols for how such situations are handled by the entire industry, and how they are treated by law enforcement and the judiciary. Without them, we risk normalizing a future governed by algorithmic suspicion. This is a line that, once crossed, may be impossible to uncross.


Charting a Sustainable Course for Johor's Data Centre Boom

Published by The Star on 9 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent stop-work order issued to a data centre project in Iskandar Puteri marks an important inflection point for Johor. Rather than viewing it as a setback, we should see it as a natural consequence of success—a sign that Johor’s ambition to become a regional digital powerhouse is rapidly becoming a reality, and a prompt for us to thoughtfully consider the path ahead.

The state government’s efforts in attracting these high-value investments are commendable, and the scale of development is truly significant. With 13 data centres already operational and another 15 currently under construction in Johor, it is clear these facilities are a cornerstone of the Digital Johor agenda and the Johor-Singapore Special Economic Zone. They promise to create thousands of skilled jobs, spur technological innovation, and solidify Malaysia’s position on the global stage. This economic momentum is vital and should be nurtured.

However, this commendable success naturally brings with it new responsibilities. The concerns raised by the local community in Iskandar Puteri—from environmental disruption to late-night construction—highlight the critical need to create a symbiotic relationship between these large-scale developments and the communities they inhabit. The challenge, therefore, is not one of ambition, but of integration and balance.

In navigating this, we can learn from the diverse experiences of other nations. Ireland, for example, demonstrates the potential pitfalls when infrastructure development and energy planning do not keep pace with the industry’s rapid growth. Its data centres now place significant strain on the national power grid, raising public concerns about energy security and climate goals. On the other end of the spectrum, Amsterdam faced hard physical limits on its land and power grid, forcing a difficult choice to pause new development to prioritize other urban needs.

A more strategic benchmark might be Singapore. After its own moratorium, Singapore re-engaged the data centre market with a clear focus on quality over quantity. By implementing stringent energy efficiency standards, it has strategically positioned itself as a premium destination for best-in-class operators who are aligned with sustainability goals. This approach proves that strong environmental governance can be a powerful competitive advantage, attracting responsible, long-term investment.

For Johor and Malaysia, this moment presents an opportunity to architect a sustainable roadmap for our digital future. The goal should not be to slow down growth, but to steer it in a direction that is both economically prosperous and socially responsible. The government can lead the way by proactively engaging with the developers of all current and future projects, ensuring that clear guidelines for sustainable and community-centric development are understood and implemented from the outset.

By doing so, we can build confidence among both investors and the public. Let us use this opportunity to pioneer a balanced model for data centre development—one that harnesses their immense economic potential while safeguarding our environmental heritage and enhancing the well-being of our communities. This is how we can secure our position not just as a digital hub, but as a model for sustainable digital transformation.


You Are the Product: How Targeted Ads Became the Most Powerful Tool of Influence in the Digital Age

“It’s not just what you buy — it’s what you think, fear, and believe. And someone paid to shape it.”

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction: The Hidden Power of the Ad Box

Ads used to sell shoes. Now they sell narratives. The practice of highlighting the features and benefits of products and services to a mass audience has evolved. Enter the age of programmatic marketing, where big data, and particularly our data, has reshaped how and why advertisers target you and me. Today’s stories are curated to reach us based on our emotional vulnerabilities and individual interests, and they reach us through our personal devices and social media platforms the moment we click on something online. These narratives can be overt or covert, but they are highly personalised, based on analyses of our personal demographics and online footprint, making today’s advertisements a precise and potent tool of influence or exploitation.

In Malaysia, numerous charlatans have used artificial intelligence (AI) and deepfakes to manipulate the image of Datuk Siti Nurhaliza, a local artist with a massive following, to market fraudulent investments. They also misused the brand identities of trusted online media portals (like The Star and Free Malaysia Today) to scam her followers. One fraudster was even able to imitate her voice and generate fake video calls to tug at the heartstrings of fans, inviting them to invest in the same platform as her.

While the use of big data, visual media and social media platforms to sell narratives has revolutionised branding, there is a dark side to how personal data is used to psychologically tune and exploit consumers’ vulnerabilities. On one side, organisations are under pressure to acquire increasingly detailed information about their consumers; on the other, ad fraudsters are stealing this information to benefit themselves unethically.

The New Advertising Industrial Complex

Unlike traditional marketing, programmatic advertising relies on real-time insights into consumers’ online behaviour and interests to automate precise advertisement-space buying on a large scale. Using consumers’ personal information, advertisers are able to get the right brand in front of the right audience at the right time, within seconds. Such software, known as ad tech (advertising technology) or a supply-side platform (SSP), can reportedly access thousands upon thousands of publishers’ sites at once (publishers being the owners or managers of websites with ad space to sell) to sell advertising space to the highest bidder.

Here’s what’s happening at the blink of an eye, behind the scenes during each programmatic advertising auction:

Targeting

  • When I visit a website, the publisher’s platform puts the ad space up for grabs. At the same time, the ad-tech software leverages my activity data to match the most suitable ads.

Bidding

  • In milliseconds, the software automatically calculates and places a real-time bid (RTB) for that ad spot based on all the surveillance data it has gathered about me.

Ad Serving

  • The advertiser with the highest bid wins! Their ad instantly appears on my screen.

Optimisation

  • With every impression, advertisers gather performance data to optimise future bids and improve targeting.
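Those four steps amount to a sealed-bid auction run in software. As a rough sketch (the bidder names, bid values and second-price rule below are purely illustrative, not any real exchange’s implementation):

```python
# Toy second-price auction: the mechanism many real RTB exchanges
# have used, where the highest bidder wins but pays the runner-up's bid.

def run_auction(user_profile, bidders):
    """Collect a bid from each advertiser, pick a winner and a price."""
    bids = sorted(
        ((bid_fn(user_profile), name) for name, bid_fn in bidders.items()),
        reverse=True,
    )
    (top_bid, winner), (second_bid, _) = bids[0], bids[1]
    return winner, second_bid  # winner pays the runner-up's price

# Illustrative bidders: each values the impression differently
# depending on what the tracked profile says about the visitor.
bidders = {
    "TravelCo": lambda u: 1.50 if "travel" in u["interests"] else 0.10,
    "GadgetCo": lambda u: 1.20 if u["device"] == "mobile" else 0.30,
    "GenericCo": lambda u: 0.50,
}

profile = {"interests": ["travel", "diving"], "device": "mobile"}
winner, price = run_auction(profile, bidders)
print(winner, price)  # TravelCo wins the slot, paying GadgetCo's 1.20 bid
```

Real exchanges layer fraud detection, floor prices and privacy rules on top, but the core economics, that the most detailed profile of the visitor drives the winning bid, is the same even in this toy version.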

All of the above happens within seconds. While advertisers were initially enchanted, the increased dominance of ad space by just a few ad-tech companies raised concerns. Alphabet (Google’s parent company), Amazon and Meta control more than half (55 per cent) of global advertising spend outside China this year, according to Warc’s latest Q2 2025 Global Ad Spend Forecast.

This over-dominance allows Big Tech companies to raise prices, control transparency and what we see online, and limit opportunities for ad space bidders. But companies are fighting back. Ad buyers are now looking for SSPs or ad-tech companies that can benefit them in a positive way. Before they sign with a programmatic marketplace operator, they ask a critical question: how much access will their company have to quality ad inventory, and how much exposure will they have to the junk? SSPs are now under pressure to provide more transparency and accountability, all detailed through structured contracts.

Data Extraction as Default

Every single moment, apps, social media platforms, our devices and the websites we visit are gathering data about our online visits, how much time we spend there and the type of device or browser we use. They save our preferences and personal information, note our location and what we have left in our online shopping cart, then show us personalised content based on all this data.

Our online activity is usually tracked with a cookie or pixel which identifies us even after we leave the site. Our activity can also be tracked across different internet-connected devices, like our laptop and smartphone.

According to a 2022 study by cybersecurity company NordVPN, a website has, on average, 48 trackers. Some sites sell this data to third parties (like Google). The information collected is used to serve more targeted and intrusive ads, some of which follow us from website to website.

When a website we visit tracks us, that’s first-party tracking. When a website we visit lets another company track us, that’s third-party tracking.

Third-party tracking companies can track us across most websites visited. For example, if I visited a website about a country I wish to travel to, I might almost immediately see ads suggesting hotel accommodation options while visiting other websites.
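A simplified sketch of how a single tracker embedded on many sites can stitch visits into one profile (the site names and cookie ID here are invented for illustration):

```python
# A third-party tracker embedded on many unrelated sites sees the same
# cookie ID on every visit, letting it link those visits into one profile.

class ThirdPartyTracker:
    def __init__(self):
        self.profiles = {}  # cookie_id -> list of (site, page) visits

    def log_visit(self, cookie_id, site, page):
        self.profiles.setdefault(cookie_id, []).append((site, page))

    def interests(self, cookie_id):
        """Everything the tracker has inferred about one browser."""
        return [page for _, page in self.profiles.get(cookie_id, [])]

tracker = ThirdPartyTracker()
# The same browser (cookie "abc123") visits three unrelated sites,
# each of which embeds the tracker's pixel.
tracker.log_visit("abc123", "travel-blog.example", "bali-guide")
tracker.log_visit("abc123", "news.example", "weather")
tracker.log_visit("abc123", "shop.example", "diving-gear")

print(tracker.interests("abc123"))
# ['bali-guide', 'weather', 'diving-gear'] -- enough to retarget hotel ads
```

No single publisher sees the whole picture; only the tracker sitting across all of them does, which is precisely why the travel ad can follow me onto an unrelated news site.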

Tracking our online footprint has become the default setting, and our consent is often buried deep in fine print. In 2022, NordVPN found that around 30 per cent of third-party trackers belonged to Google, 11 per cent to Facebook, and 7 per cent to Adobe. As of 2025, Google still has the biggest share of trackers. Thankfully, at the other extreme, several browsers are actively combating third-party cookies. Brave, Firefox and Safari have blocked third-party cookies by default since 2019, to make our online life more private.

Brave is also the only browser that offers to randomise fingerprint information. Digital fingerprinting is a method of building a profile of me or you based on our system configuration. It can include information about our browser type and version, operating system, plug-ins, time zone, language, screen resolution, installed fonts, and other data. Even when third-party cookies are turned off, sites can still identify us through fingerprinting, a more worrisome prospect because this technique cannot simply be switched off. Even if we delete our cookies, we can be recognised through our digital fingerprint.
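A rough illustration of the fingerprinting idea: hash a handful of configuration attributes into one stable identifier. The attribute values below are invented, and real fingerprinting scripts collect far more signals than this.

```python
import hashlib

# A device "fingerprint": combine attributes the browser freely exposes
# into a single hash. None of these values live in a cookie, so deleting
# cookies does not change the hash -- the same device re-identifies itself.

def fingerprint(attributes):
    """Hash sorted key=value pairs into a short, stable identifier."""
    raw = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 ... Chrome/126.0",
    "screen": "1920x1080",
    "timezone": "Asia/Kuala_Lumpur",
    "language": "en-MY",
    "fonts": "Arial,Calibri,Times",
}

print(fingerprint(device))  # same device, same hash, visit after visit
```

The combination is what identifies us: each attribute alone is common, but a specific screen size plus time zone plus font list quickly narrows to one device.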

In 2024, Google announced it will no longer phase out third-party cookies in Chrome. However, it will allow users to make informed choices about their web browsing privacy. Overall, there seems to be pressure for the tracking landscape to change, and hopefully this translates to safer online browsing for all.

Targeted Ads vs Targeted Harm

According to Forbes Magazine, advertisers know that 91 per cent of consumers are more inclined to purchase when a brand personalises its communication with them. So they build their messaging based on an audience’s demographics: who they are, what they like, where they are located and what they are most likely to purchase. There are key benefits to this approach. It is an effective way to market products to those most likely to buy them.

For example, let’s say my dad has just retired and is keen to pick up diving. As he searches online to facilitate this new hobby, a retargeting campaign would suggest safety gear, resorts for the best diving experience, diving coaches or a local diving community – most of which turn out to be extremely helpful and provide value to my dad. He might also end up supporting a remote but extremely gifted maker of diving suits.

While targeted ads are the smartest spend in marketing, they can put consumers at risk when the targeting becomes predatory. Scammers can buy our personal data and use it for purposes more devious than targeted ads and advertising campaigns.

  • Financial ads targeting the poor

In 2013, the US Senate Commerce Committee found that data brokers were targeting poor consumers by grouping them based on their financial vulnerability. Among terms used to categorise the poor into subsets were: “Zero Mobility”, “Burdened by Debt – Singles”, “Hard Times”, “Humble Beginnings”, “Very Elderly”, “Rural and Barely Making it”. This data was then used by unscrupulous parties to market risky financial products or illegal loans with high interest rates to those who could least afford them.

What began as personalisation becomes profiling — and often, exploitation.

While some data brokers prohibit customers from misusing personal information to sell debt-related products, there is a lack of industry oversight to enforce these contract terms.

  • Investment scams preying on individuals looking for high returns

In 2024, social media scams in Malaysia continued to be a significant issue. The Securities Commission Malaysia (SC) identified social media platforms such as Facebook and messaging apps like Telegram as primary channels for online investment scams. Victims were targeted with unlicensed products and services. In 2024 alone, the Royal Malaysia Police’s Commercial Crime Investigation Department recorded 35,368 online scam cases, resulting in RM1.6 billion in financial losses—accounting for 84.5 per cent of all commercial crimes reported during the year.

There has also been increased use of deepfake technology to impersonate influential figures such as Datuk Siti Nurhaliza to draw fans into investment scams.

  • Discriminatory ads

A study released in 2019 entitled Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes revealed how the Facebook algorithm could skew the delivery of ads for employment and housing opportunities “along gender and racial lines”, which violates antidiscrimination laws.

  • Predatory ads

Predatory programmatic advertising refers to the unethical or illegal use of automated ad buying and placement techniques to exploit vulnerabilities in individuals: for instance, weight-loss ads that target young users, or cosmetic-procedure ads that target women.

  • Filter bubbles

Every day, the content we see and engage with online is increasingly personalised to our interests, preferences and demographic information. Google, for instance, is excellent at customising our search results based on our location or past search history. Facebook does the same for our News Feed, analysing which posts, friends and pages we interact with the most to boost content it believes we will likely engage with. An example of content personalisation and targeted advertising taken to the extreme is the Cambridge Analytica (CA) scandal, in which the data of millions of US-based voters was used to target them with disinformation campaigns and, to some extent, to influence the outcome of the 2016 US election.

When the Ad Becomes the Story

Ads now blur into content itself.

  • Social Media Influencers or Key Opinion Leaders

Indeed, the rise of content creation by social media influencers (SMIs) has transformed brand marketing. Influencers who generate attractive content, and who are themselves attractive, are highly sought after by brands for paid partnerships. In Malaysia, influencer marketing is particularly effective due to the country’s high social media usage (nearly 90 per cent of the total population of 31 million). According to an article by Bernama, 75 per cent of Malaysians make purchases based on influencer recommendations.

One of the key strengths of Malaysian influencers is their ability to engage authentically with their followers. According to an article by Statista, Malaysian consumers, especially younger audiences, prefer influencers who present relatable and genuine content. This evolving landscape is reshaping brand partnerships, urging companies to focus on authenticity and meaningful interactions to resonate with their target demographics.

The influencer advertising market in Malaysia is projected to grow by 10.79 per cent (2024-2028), resulting in a market volume of USD102.30 million (RM431.8 million) in 2028.

Some social media influencers, while endorsing brands, also use their online platforms to promote good causes. Nandini Balakrishnan, a SAYS video producer, is known for promoting body positivity, while Deborah Henry (a Malaysian model, emcee and TV/podcast host) has been highlighting the plight of refugees for over 10 years. Through her influence, she co-founded Fugee.org, a non-profit that helps refugees living in Malaysia through education, advocacy and entrepreneurship.

Unfortunately, there are downsides to online influencers. A recent study by the University of Portsmouth examined the negative impacts some influencers have. The study found that some SMIs endorse unhealthy or dangerous products such as diet pills, detox teas, and alcohol without full disclosure. Others spread misinformation, encourage unrealistic beauty standards, foster a comparison culture, promote deceptive consumption, and cause privacy risks.

The study found that the use of filtered and curated images by SMIs added to body dissatisfaction, low self-esteem and harmful beauty practices. It also found that influencer-driven content fuelled lifestyle envy and social anxiety, leading to negative self-comparison and diminished wellbeing.

Dr Georgia Buckle, Research Fellow in the School of Accounting, Economics and Finance at the University of Portsmouth, said: “Social media influencers hold immense power over consumer decisions and cultural norms. While they provide entertainment, inspiration, and brand engagement, the unchecked influence of some SMIs can lead to serious ethical and psychological consequences. Our study highlights the urgency for both academic and industry stakeholders to address these challenges proactively.”

According to a study by Noémie Gelati and Jade Verplancke of Linköping University in Sweden, consumers identify and create links with influencers, driving them to follow influencers’ recommendations. This relationship impacts young consumers on a different level due to their immaturity and limited understanding of marketing. The study noted that those around 19-24 years old are more prone to follow influencers: “Indeed, (young) followers tend to purchase what the persons they idealise use or wear… Clothing, make-up and even cosmetic surgery, followers aspire to look like their favourite influencers and the beauty ideal they diffuse.”

Finally, influencers themselves are often under immense pressure to produce captivating content that strikes a delicate balance between authenticity and market appeal. This requires a lot of thought and special skill, often leading to stress and burnout. Additionally, the work requires them to maintain a certain public image, increasing the strain on their mental well-being.

  • Brand memes

Corporate memes are another powerful tool for brands looking to connect with younger demographics in a more casual and relatable way. According to Twitter, tweets with images receive 150 per cent more retweets than text-only posts, while meme content specifically tends to generate 60 per cent higher engagement rates compared to standard branded content.

  • Algorithms prioritise ‘engaging’ content

Social media algorithms use engagement, relevance, and user behaviour to determine which posts appear in our feeds. High engagement signals that content is valuable, increasing its visibility. While these systems are designed to enhance the user’s experience and engagement, they often unintentionally create an echo chamber. Users who follow unethical influencers can end up seeing more unethical or misleading content. Some algorithms can amplify extremist propaganda and polarising narratives. These amplifications can lead to societal divisions, promote disinformation, and bolster the influence of extremist groups. Often these types of content use emotionally provocative or controversial material and by focusing on metrics such as “likes” and “shares”, algorithms create feedback loops that take users down a rabbit hole.
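A minimal sketch of that feedback loop (the post names and scores are invented; real ranking systems weigh many more signals than raw engagement):

```python
# Toy feed ranker: posts are ordered purely by engagement score, and
# whatever ranks first collects the most new engagement -- so an early
# lead compounds into dominance, the feedback loop described above.

posts = {"calm-analysis": 5, "outrage-bait": 8, "cat-photo": 6}

def rank_feed(scores):
    """Return post IDs ordered by engagement, highest first."""
    return sorted(scores, key=scores.get, reverse=True)

for round_number in range(3):
    feed = rank_feed(posts)
    posts[feed[0]] += 3  # the top slot attracts the most clicks
    print(round_number, feed)
# The provocative post starts slightly ahead and only pulls further away.
```

Nothing in the loop asks whether the top post is true or healthy; it only asks whether people engaged with it, which is how provocative material keeps winning the slot.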

AI: The Engine Behind the Curtain

AI is no longer just a backend efficiency tool — it is the central nervous system of modern advertising.

  • Machine learning determines which ad you see and when.
  • Reinforcement learning constantly tests variations to see what you click, skip, or share.
  • Generative AI personalises ad copy, images, and tone in real-time based on your digital behaviour.
  • Platforms use AI-driven predictive models to infer your mood, political leanings, spending habits — even when you’re most likely to be impulsive.
  • Instead of marketers carving out segments they think are best for an ad campaign, the AI discovers these optimal audiences automatically.

Through AI advertising tools like Google’s Performance Max and Meta’s Advantage+, tech giants such as Google, Meta and LinkedIn remove much of the detailed work involved in manually matching a brand’s target persona. Instead of marketers carving out the customer segments they consider best for the campaign, the AI discovers these optimal audiences automatically and generates personalised ads with every click. So it isn’t just targeting. It’s automated persuasion at scale: invisible, relentless, and largely unregulated.

With nearly 3.4 billion people using Meta’s apps (Facebook, Instagram and WhatsApp) each day, the company has massive amounts of data on the human population.

According to MarketBeat, an Inc. 5000 financial media company, Meta’s Advantage+ Shopping saw rapid adoption out of the gate. In initial testing, the company said Advantage+ users were seeing a 32 per cent increase in return on advertising spend (ROAS) compared with its non-automated campaigns. By April 2023, nine months after its release, daily revenue from Advantage+ Shopping campaigns had increased by 600 per cent over the preceding six months.

By the third quarter of 2023, Advantage+ Shopping was generating USD10 billion (RM42.21 billion) in annual run-rate revenue, and by the fourth quarter of 2024, Advantage+ Shopping campaign revenues had scaled past USD20 billion (RM84.42 billion) in annual run-rate, a 70 per cent increase from Q4 2023.

Meanwhile, Google has seen a 93 per cent adoption rate of Performance Max among retailers running Google shopping ads.

Who Regulates the Algorithm?

There is as yet no clear legal framework internationally, much less in Malaysia, to oversee how ads are targeted or how profiling works.

However, Alex C. Engler, a Fellow in Governance Studies at The Brookings Institution, says this does not mean regulators should sit idly by. Instead they should actively study algorithmic systems in their regulatory domain and evaluate them for compliance under existing legislation.

He notes that some regulatory agencies have started this work, including the U.S. Federal Trade Commission’s (FTC) Office of Technology and Consumer Financial Protection Bureau (CFPB), new algorithmic regulators in the Netherlands and Spain, and online platform regulators such as the UK’s Office of Communications (OFCOM) and the European Centre for Algorithmic Transparency.

Engler further suggests that as oversight agencies gather information about algorithmic systems, their societal impact, harms, and legal compliance, they should also develop a broad AI regulatory toolbox for evaluating algorithmic systems, particularly those with greater risk of harm.

This toolbox, he says, should include the means to expand algorithmic transparency requirements, perform algorithmic investigations and audits, develop regulatory AI sandboxes, and welcome complaints and whistle-blowers.

Malaysia: Reclaiming Digital Autonomy

Although Malaysia has the Personal Data Protection Act (PDPA) 2010, the PDPA does not explicitly define any minimum standard for consent. It also does not regulate online privacy and has no provision on e-marketing, cookies or newer tracking and surveillance technology, such as geotagging. It also does not apply to personal data processed outside Malaysia. However, it is considered best practice for organisations operating in Malaysia to obtain informed consent from users for the use of cookies on websites, especially if they are collecting personal data. Companies that do not provide a cookie consent mechanism run the risk of non-compliance with the PDPA.

Fortunately, the Personal Data Protection Commissioner (PDPC) is considering issuing a data protection guideline that covers digital marketing.

The PDPC can learn from the EU’s General Data Protection Regulation (GDPR), which regulates targeted advertising more stringently, mandating less intrusive advertising that uses less consumer data. It requires consumers to take positive action to provide consent, either by signing a form or clicking ‘I consent’ or ‘I agree’. The GDPR defines consent as being “freely given, specific, informed, and unambiguous and given by a clear affirmative action”. Malaysia can definitely start by taking a leaf out of the EU’s book.

By filling these gaps, Malaysia has an opportunity to lead the region by adopting clearer consent rules or stronger transparency standards.

Meanwhile, as consumers, we should not be content to be sitting ducks. We need to understand and limit how we are profiled, and how much permission we surrender through our apps and social media settings. We should proactively review and adjust our privacy settings to control who can view our posts, profile information, and activity.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.

AI, Deepfakes, and the Right to Your Digital Selves

by Thulasy Suppiah, Managing Partner

As societies globally grapple with the disturbing rise of AI-generated deepfakes, a challenge highlighted by recent incidents abroad and here in Malaysia, Denmark has just proposed a groundbreaking solution that demands our attention. The Danish government plans to amend its copyright law to give every individual the right to their own body, facial features, and voice. This is a profound and necessary step in protecting human identity in the digital age.

For too long, the debate around deepfakes has been framed primarily as an issue of privacy or harassment, often placing a heavy burden on victims to prove harm after their likeness has been violated and spread across the internet. This new approach fundamentally shifts the paradigm. By treating a person’s identity—their face, their voice—as a form of personal intellectual property, it grants them a clear right of ownership.

This is not merely a subtle legal change; it is a game-changer. It means a victim would no longer need to prove reputational damage or malicious intent, which can be difficult and retraumatising. Instead, the case becomes a simpler one of unauthorised use of their “property.” This empowers the individual with a powerful legal shield and a direct path to demand removal of content and seek compensation.

Crucially, such a framework also establishes clear accountability for the tech platforms where this content proliferates. By outlining significant consequences for non-compliance, it sets clear legal and financial expectations for social media and messaging companies. This effectively transitions the responsibility from a reactive content moderation process to a proactive legal obligation, creating a clear imperative for them to prioritise the swift handling of non-consensual deepfakes.

While our authorities are rightly using existing laws like the Communications and Multimedia Act to prosecute perpetrators, these are often reactive measures. The kind of proactive governance being proposed in Denmark anticipates the inevitable misuse of rapidly advancing AI and creates a robust defence before the next wave of more realistic and accessible deepfake tools becomes available. It’s an attempt to legislate for the world we are entering, not the one we are leaving behind.

Of course, any such law must include exceptions for satire and parody to protect free expression. But the core principle remains: your digital likeness belongs to you.

As Malaysia continues its journey into the digital economy, we must consider if our own legal frameworks are truly fit for the AI era. The Danish model offers a compelling vision for how to restore digital autonomy and protect the dignity of our citizens. It sends an unequivocal message that a person cannot simply be run through a digital copy machine for any purpose, malicious or otherwise, without their consent. It is a thought-provoking and essential conversation we need to have now.

The World's Rebalancing Act: Malaysia's Moment to Shine

Published by The Star on 6 Mar 2025

by Thulasy Suppiah, Managing Partner

The global economic landscape is undergoing a profound transformation, driven by geopolitical realignments, most notably the US-China tech rivalry, and a widespread corporate imperative to ‘de-risk’ and ‘decouple’ supply chains. In this shifting terrain, Malaysia has admirably positioned itself as a stable and attractive hub for foreign direct investment (FDI). Microsoft’s recent reaffirmation of its substantial RM10.5 billion investment in cloud and AI infrastructure here, despite global pullbacks elsewhere, is a powerful testament to this trend and a vote of confidence in our nation’s potential.

This ‘flight to safety’ or search for strategic alternatives by multinational corporations (MNCs) presents a golden opportunity for Malaysia. We are currently benefiting as companies seek to diversify their operations and mitigate risks associated with over-concentration in any single market, particularly in light of ongoing trade disputes, semiconductor export controls, and vulnerabilities exposed by past global disruptions.

But this favourable tide is not self-sustaining. The very forces that benefit us today – trade tensions, potential tariffs, and shifting alliances – create an inherently volatile environment. To ensure Malaysia not only attracts but also retains high-quality FDI and solidifies its position as a key player in the global economy for years to come, we must adopt proactive and far-sighted strategies, rather than merely reacting to external pressures.

Firstly, strengthening our domestic fundamentals is non-negotiable. This means aggressive investment in a future-ready workforce through upskilling and reskilling initiatives, particularly in high-tech sectors like AI and advanced manufacturing. We need to cultivate a generation who are not just consumers of technology but creators and innovators. Continuous upgrades to our digital and physical infrastructure, including sustainable energy solutions for power-hungry data centres, are also paramount.

Secondly, our policy and regulatory environment must be a hallmark of stability, clarity, and adaptive agility. Predictable long-term policies, a streamlined bureaucracy that champions ease of doing business, and transparent enforcement are critical. Our regulatory frameworks must be robust enough to ensure good governance but flexible enough to accommodate and encourage innovation, being responsive to the needs of a rapidly evolving global economy.

Thirdly, a concerted effort to move Malaysia up the global value chain is essential. This involves strategically fostering indigenous innovation and attracting investments that bring not just capital, but also cutting-edge technology, R&D activities, and opportunities for local SMEs to integrate into sophisticated global supply chains. Focusing on niche specialisations where Malaysia can build a distinct competitive advantage will be key.

Finally, our international engagement and trade diplomacy must be astute and proactive. We need to continuously champion Malaysia as a reliable, neutral, and pro-business partner on the global stage, strengthening beneficial trade agreements and maintaining open dialogues with MNCs to understand their long-term strategies and concerns.

Malaysia currently finds itself in an enviable position, benefiting from global economic restructuring. However, this is not a moment for complacency but for concerted, strategic action. By building on our current strengths and proactively addressing future challenges, we can ensure Malaysia is not merely a beneficiary of transient global shifts, but a resilient and proactive architect of its own enduring economic prosperity.
