Evolving Regulatory Landscape for Digital & Tech and the Latest Cybersecurity Act in Malaysia

By Thulasy Suppiah, Managing Partner of Suppiah & Partners &
Adjunct Professor Murugason R. Thangarathnam, Chief Executive Officer of Novem CS

Introduction

Malaysia has been resolutely updating its digital and technology regulations with forward-looking policies. They signify the nation’s aspirations to strengthen areas such as online safety, cybersecurity and data protection and governance, and to address the complex and global nature of the digital environment. Given the severity of potential harms, self-regulation by tech companies is insufficient to protect individuals and maintain trust. By strengthening data governance and establishing frameworks like the National Guidelines on AI Governance & Ethics, Malaysia is actively working to build a trusted and secure digital ecosystem for both consumers and businesses.

Several important developments have transpired in Malaysia’s digital regulatory landscape, especially in the last two years, indicative of the government’s strong commitment to cultivating a safe digital ecosystem. For businesses operating or looking to operate in Malaysia, especially those in the telecommunications, technology, information security, or other infrastructure sectors, allow us to walk you through these important developments.

First, the Ministry of Communications and Digital was separated into two ministries – the Ministry of Digital and the Ministry of Communications. The separation, in 2023, clarified mandates for communications regulation versus digital governance. The Ministry of Digital now oversees the Personal Data Protection Department (PDPD) and, through its Minister Gobind Singh Deo, has proposed a Data Commission to implement the Data Sharing Act.

Then in August 2024, the Cyber Security Act 2024 (Act 854) came into force. This landmark piece of legislation is aimed at strengthening the nation’s cyber defences and resilience against evolving cyber threats.

As of June 2025, major amendments to the Personal Data Protection Act (PDPA) took effect. The amendments include new requirements for mandatory data breach notification, the right to data portability, and the appointment of a Data Protection Officer (DPO). Businesses acting as data processors now face direct security obligations, while maximum fines for non-compliance have more than tripled to RM1,000,000.

Malaysia was the first ASEAN Member State to enact comprehensive data protection legislation, in 2010, but the recent amendments align Malaysia’s data protection standards more closely with influential international frameworks like the EU’s General Data Protection Regulation (GDPR).

This paper aims to break down the key components and implications of the Cyber Security Act 2024 (CSA), which is vital to protecting our digital environment and earning the trust of all Malaysians.

Overview of Malaysia’s Latest Cybersecurity Act

Key provisions and scope

The CSA 2024 establishes Malaysia’s digital defence framework by designating the National Cyber Security Agency (NACSA) as the national lead agency, with legislative power to ensure the effective implementation of the Act. It outlines the duties and powers of the Chief Executive of NACSA, as well as the functions and duties of the National Critical Information Infrastructure (NCII) sector leads and NCII entities.

The NCII is essentially the central nervous system of a country—the most vital computer systems, networks, and data that keep essential services like banking, electricity, telecommunications, and agriculture running – the systems that absolutely must work for society to function normally. It is the information and the digital technology so important to a nation that if it were to be shut down, destroyed, or seriously damaged, it would have a devastating impact on national security, the economy, or public health and safety.

The CSA sets mandatory cybersecurity standards for NCII operators and creates a licensing regime for cybersecurity service providers to regulate cybersecurity practice and incident response across the country. The Act also has extra-territorial application, imposing requirements on any NCII that “is wholly or partly in Malaysia”.

Objectives and regulatory framework

The primary goal of the CSA is to ensure a secure, trusted, and resilient cyberspace in Malaysia and to safeguard critical national functions. Its key objectives can be broken down as such:

  • To enhance Malaysia’s overall cyber defence capabilities and resilience against emerging and sophisticated cyber threats.
  • To establish a comprehensive legislative framework for the protection of the National Critical Information Infrastructure (NCII).
  • To establish the necessary governmental structures and legal powers to oversee national cybersecurity policies, with the NACSA as the lead implementing and enforcement agency.
  • To regulate the quality and integrity of the cybersecurity services provided in Malaysia through a mandatory licensing regime.
  • To institute clear, mandatory standards for managing cyber threats and reporting cyber security incidents, particularly those affecting the NCII.

The CSA identifies the 11 sectors designated as NCII sectors, and mandates strict compliance for organisations operating within them.

These sectors, listed below, are now legally required to enhance their cyber resilience or face penalties:

  • Agriculture & Plantation
  • Banking & Finance
  • Defence & National Security
  • Energy
  • Government
  • Healthcare Services
  • Information (Communication & Digital)
  • Science, Technology, & Innovation
  • Trade, Industry, & Economy
  • Transportation
  • Water, Sewerage, & Waste Management

To manage the 11 NCII sectors, the Act allows the Minister to appoint multiple NCII Leads per sector for flexibility. All appointed Leads will be publicly listed on the NACSA website.

Enforcement mechanisms and penalties

The Act applies both to NCII entities and to licensed cybersecurity service providers (CSSPs), and the penalties are substantial, including large fines and long imprisonment terms for noncompliance.

The key mechanisms used to ensure compliance and investigate violations are:

Duty to Provide Information Relating to NCII: NCII Entities must provide all requested NCII information to the Sector Lead, automatically report the acquisition of any new NCII, and notify the Lead of any material changes to the NCII’s design, configuration, security, or operation. Failure to comply with any of these duties carries a fine of up to RM100,000, imprisonment of up to two years, or both.

Duty to Implement the Code of Practice: NCII Entities must implement the measures, standards, and processes specified in the Code of Practice. However, they may adopt alternative measures if they can demonstrate an equal or higher level of NCII protection. Failure to comply can result in a fine of up to RM500,000, imprisonment of up to ten years, or both.

Duty to Conduct Cybersecurity Risk Assessment and Audit: NCII Entities must conduct mandatory cybersecurity risk assessments (at least annually) and audits (at least once every two years). The results must be submitted to the Chief Executive. Failure to conduct these assessments or submit the reports can lead to a fine of up to RM200,000 or imprisonment for a term not exceeding three years, or both.

Duty to Notify Cyber Security Incidents: NCII Entities have a strict legal duty to immediately report cyber security incidents to the Chief Executive and their Sector Lead, with a detailed initial notification required within a short timeframe (typically six hours). The initial notification should describe the cybersecurity incident, its severity, and the method of discovery. A full report must be submitted within 14 days, including details such as the number of hosts affected, information on the cybersecurity threat actor, and the incident’s impact. Noncompliance invites a fine of up to RM500,000 or imprisonment for a term not exceeding ten years, or both.
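The two-stage reporting timeline above (initial details within six hours of discovery, full report within 14 days) can be sketched as simple deadline arithmetic. This is our own illustrative sketch, not a tool from the Act or NACSA; the function name and structure are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Illustrative only: the two CSA reporting windows described above.
INITIAL_DETAILS_WINDOW = timedelta(hours=6)
FULL_REPORT_WINDOW = timedelta(days=14)

def notification_deadlines(discovered_at: datetime) -> dict:
    """Return both reporting deadlines for an incident discovered at `discovered_at`."""
    return {
        "initial_details_due": discovered_at + INITIAL_DETAILS_WINDOW,
        "full_report_due": discovered_at + FULL_REPORT_WINDOW,
    }

# An incident discovered at 9:00 a.m. on 1 March 2025:
deadlines = notification_deadlines(datetime(2025, 3, 1, 9, 0))
print(deadlines["initial_details_due"])  # 2025-03-01 15:00:00
print(deadlines["full_report_due"])      # 2025-03-15 09:00:00
```

In practice an NCII entity’s incident response runbook would embed these windows so that responders see concrete clock deadlines, not just statutory wording.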

Cybersecurity Incident Response Directive: Upon receiving notification of a cybersecurity incident from an NCII Entity, the Chief Executive will investigate and may issue a directive on the measures necessary to respond to or recover from the incident. The term “directive” underscores the importance of compliance. Failure to adhere to these directives may result in a fine of up to RM200,000 or imprisonment for a term not exceeding three years, or both.

Licensing: The CSA establishes a licensing regime for individuals and entities providing prescribed cybersecurity services. There are currently two categories of prescribed cybersecurity services: (i) managed security operation centre monitoring services; and (ii) penetration testing services. To obtain a licence, an application must be made to the Chief Executive with a prescribed fee and the required documents (including qualifications and identification). Applicants must meet the prerequisites set by the Chief Executive and have no convictions for fraud, dishonesty, or moral turpitude. The Chief Executive can approve the licence (with variable conditions) or refuse it (stating the grounds). Operating without a required licence is an offence: providing or advertising services without a licence will incur a fine of up to RM500,000 or imprisonment of up to ten years, or both, while a breach of licence conditions attracts a fine of up to RM200,000 or imprisonment of up to three years, or both.

A broad extra-territorial scope: The CSA’s authority extends beyond Malaysia’s physical borders. The extraterritorial reach is particularly important for foreign companies that operate services or infrastructure in Malaysia, especially those designated as NCII Entities. If a foreign multinational company’s Malaysian subsidiary owns or operates NCII in Malaysia, the foreign parent company and its personnel can potentially face legal consequences under the CSA for offences or non-compliance related to that Malaysian NCII. Foreign-based CSSPs whose services (like managed security or penetration testing) affect NCII within Malaysia must also comply with the Act’s licensing requirements and standards.

Comparative Analysis with Singapore

Malaysia’s Cyber Security Act 2024 (CSA) is fundamentally similar to Singapore’s Cybersecurity Act 2018 (SG CA) – both are national laws designed to protect critical digital infrastructure. Both Acts establish a dedicated national agency with primary authority: the National Cyber Security Agency (NACSA) in Malaysia and the Cyber Security Agency in Singapore.

While both Acts are primarily designed to protect critical information infrastructure (the NCII in Malaysia and the Critical Information Infrastructure (CII) in Singapore), the main differences lie in the severity of penalties, scope of regulation, and specific reporting requirements.

Malaysia’s penalties for non-compliance are generally harsher. For instance, our maximum fine is up to RM500,000 and/or imprisonment of up to 10 years for serious noncompliance (e.g., failure to report an incident or implement the Code of Practice). Singapore’s SG CA 2018 was less severe, but its 2024 amendments have increased penalties, allowing for civil penalties of up to S$500,000 (RM1,626,160) or 10 per cent of the entity’s annual turnover, whichever is greater. However, the maximum penalty for certain core breaches (like failing an audit) in Singapore is generally lower than Malaysia’s for similar offences.

Malaysia’s CSA also primarily relies on criminal penalties (fines and/or imprisonment) for non-compliance, while Singapore employs a flexible mix of civil and criminal penalties; the Cyber Security Agency can pursue civil penalties instead of criminal ones for certain breaches.

In terms of the scope of incident reporting, the CSA primarily focuses on incidents directly affecting the NCII entity itself. Singapore’s SG CA has a broader scope following its 2024 amendments, requiring CII owners to report incidents involving their third-party vendors and supply chains.

Malaysia’s CSA mainly focuses on regulating NCII Entities and CSSPs. The 2024 amendments to the SG CA expanded its regulatory scope to include new categories such as Foundational Digital Infrastructure (FDI) providers (e.g., cloud services and data centres, even if they do not directly own a CII), Entities of Special Cybersecurity Interest (ESCI), and Systems of Temporary Cybersecurity Concern (STCC).

The SG CA’s amendments also allow the Cyber Security Agency to regulate systems wholly located outside Singapore if the owner is in Singapore and the system provides an essential service to Singapore. The Singaporean amendment focuses on the location of the controlling entity (the owner/operator) and the impact of the service on Singapore: if a Singapore-based entity controls a system that is critical to Singapore’s essential services, that system is covered, even if it is physically entirely offshore. By contrast, the CSA’s extraterritorial scope applies to NCII that is wholly or partly in Malaysia. In essence, the provision ensures that the law has the necessary power to protect Malaysia’s vital national functions from cyber threats, regardless of where the attacker or the negligent party is situated, provided the affected critical system has a link to the country’s NCII entities. If a component or the operation itself is linked to Malaysia, it is covered.

In terms of similarities between the two Acts, owners and operators of the designated critical infrastructure must comply with similar core duties: conducting risk assessments and audits, adhering to Codes of Practice/Standards, and reporting cyber security incidents.

Both Acts establish a licensing regime for CSSPs to regulate the quality of services, especially those provided to critical sectors. Both laws have provisions for offences committed outside of their respective countries if those offences impact the nation’s critical infrastructure.

Do Malaysia’s cyber laws measure up to EU standards?

Malaysia’s CSA bears a strong resemblance to the European Union’s primary cybersecurity regulation, the Network and Information Security Directive 2 (NIS2).

NIS2 is the EU’s key framework for critical and important sectors, and it significantly broadens the scope and imposes stricter requirements than the original NIS Directive.

The similarities between Malaysia’s CSA and the EU’s NIS2 lie in their sector focus and core requirements: both mandate risk management strategies, incident reporting and breach notification procedures, clearly defined governance roles, regular security audits and vulnerability assessments, and resilience testing to ensure readiness against threats.

NIS2 is mandatory across the EU and brings higher expectations, and penalties, than before. Noncompliance can lead to significant fines and even personal liability for company leadership; this personal liability for leadership is the most significant difference between NIS2 and the CSA.

The GDPR is the EU’s flagship regulation for data privacy and security. It has become the de facto global benchmark for privacy regulation, influencing new laws in countries across the world (including the recent amendments to Malaysia’s PDPA). It sets the standard for how organisations must handle personal data, regardless of whether they are based in the EU or simply processing data from EU residents. The Malaysian government’s 2024 amendments to the PDPA bring it closer to the standards of the GDPR, but key differences remain.

The GDPR’s scope of application is very broad: it applies to personal data processing across all sectors, including commercial, non-commercial, social, and governmental activities (except where exempted). The Malaysian PDPA, by contrast, primarily applies to the processing of personal data in the context of “commercial transactions”; the Federal and State Governments are largely exempt.

The GDPR applies to all organisations—regardless of size or sector—that collect or process personal data of individuals in the EU. This includes companies based outside the EU if they target or track EU users (e.g. via websites, apps, or services).

While the PDPA also has an “extraterritorial effect”, it applies to entities established outside Malaysia only if they use equipment in Malaysia to process personal data or use data processors in Malaysia. The PDPA does not apply to the Malaysian Federal Government, the State Governments, or any personal data processed outside of Malaysia unless it is intended for further processing in the country.

The GDPR sets a high standard for consent – it must be “freely given, specific, informed, and unambiguous”. Implied consent is considered insufficient. The PDPA only requires explicit consent for Sensitive Personal Data, but implied consent can be sufficient in some other cases.

Penalties under the GDPR can reach up to €20 million (RM97,798,000) or 4 per cent of global annual turnover, whichever is higher. Beyond compliance, the GDPR builds trust with customers and business partners through transparent data practices. Under the PDPA, the recent amendments (in 2024) have increased the maximum fine to RM1 million (approx. €200,000 to €250,000) and/or imprisonment. The key difference is that PDPA penalties are fixed monetary fines, not calculated as a percentage of a company’s global annual turnover.
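The GDPR cap described above is a “whichever is higher” formula rather than a fixed figure, which is why it scales with the size of the company. A minimal sketch of that arithmetic, using only the figures stated above (the helper function itself is our own illustration, not anything defined in the regulation):

```python
# Illustrative arithmetic only: the GDPR administrative-fine cap is the
# higher of a fixed EUR 20 million and 4% of global annual turnover.
GDPR_FIXED_CAP_EUR = 20_000_000
GDPR_TURNOVER_RATE = 0.04

def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Higher of the fixed cap and 4% of global annual turnover."""
    return max(GDPR_FIXED_CAP_EUR, GDPR_TURNOVER_RATE * global_annual_turnover_eur)

# A firm with EUR 1 billion turnover: 4% is EUR 40 million, which
# exceeds the fixed EUR 20 million floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

For a smaller firm whose 4 per cent figure falls below €20 million, the fixed cap governs; under the amended PDPA, by contrast, the cap is simply the fixed RM1 million regardless of turnover.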

While the PDPA is a strong domestic law that is actively evolving to be more compatible with the GDPR, particularly in areas like breach notification, data portability, and requirements for the Data Protection Officer (DPO), its penalties and scope remain less comprehensive.

Key Challenges and Opportunities in Malaysia

The CSA 2024 introduces significant changes that will have far-reaching implications for businesses operating in Malaysia, particularly those designated as NCII entities.

For these entities, compliance could mean increased costs, particularly in the areas of enhanced cybersecurity infrastructure, personnel, and potential penalties for noncompliance. It would involve upgrading existing systems, implementing new security protocols, and potentially hiring additional cybersecurity professionals. The requirement for regular risk assessments and audits will also incur ongoing costs.

Similarly, as Malaysia embarks on implementing data portability, the broad, non-sector-specific scope of these rights may challenge businesses across all industries, requiring them to develop secure processes and technologies, which could increase costs, especially for smaller enterprises.

On the flip side, the CSA also creates significant opportunities across the cybersecurity, technology, and professional services sectors, with an explosion in demand for cybersecurity products and services across the 11 designated NCII sectors. There is high demand for qualified firms to conduct the mandatory, periodic risk assessments, compliance audits, and gap analyses for hundreds of NCII entities, and to supply and implement the security controls, software, and hardware needed to meet the new, stringent technical standards in the Codes of Practice. There will be an increased need for Managed Detection & Response (MDR) services to ensure incidents are detected and reported to NACSA within the required short timelines. Finally, licensed providers gain a competitive edge and become the mandated choice for NCII entities seeking to outsource critical security functions.

Conclusion

Malaysia’s CSA 2024 marks a significant step forward in strengthening the nation’s digital defences through a more coordinated national effort and aims to create a more secure digital environment for both local and international companies operating in Malaysia. It signifies the country’s move from a largely voluntary and advisory approach to a mandatory, punitive, and focused regulatory framework for critical sectors. Future legislative changes may continue this trend, potentially broadening the scope to include areas like virtual Critical Information Infrastructure (CII).

However, businesses are still struggling with full execution, staff shortages, incident reporting hurdles, and disparate levels of preparedness. Feedback from early adopters (as reported in an article by Bank Info Security in September 2025) raised questions about how much detail should go into six-hour incident reports, how severity thresholds should be defined, and how to align overlapping obligations under the PDPA and CSA. Clearly, a considerable amount of work remains for businesses to grasp what compliance means in practice.

While recent laws provide a strong foundation, questions remain about Malaysia’s readiness to address emerging technologies through legislation. The current legal framework still lacks specific laws for Artificial Intelligence (AI) and quantum technology.

For AI, only the voluntary, non-binding National Guidelines on AI Governance and Ethics (AIGE) exist, and the Digital Minister has noted that existing general laws are inadequate for AI-driven cybercrime. Similarly, the exponential growth of IoT in smart cities, agriculture, transportation, and energy expands the attack surface, necessitating secure device design standards, continuous monitoring, and anomaly detection frameworks. Proactive regulation and industry collaboration will enable Malaysia to harness technological innovation while preserving cybersecurity integrity.

Meanwhile, specific, binding quantum cybersecurity laws remain under development. Although the CSA is a key step, the translation of domestic agreements into concrete, real-time mechanisms for cross-border cybersecurity collaboration and policy harmonisation is still a work in progress. Addressing these gaps will require targeted policies, added responsibilities to current agencies, or the creation of new departments.

Recommendations for stakeholders and policymakers

To further strengthen Malaysia’s cybersecurity posture, a concerted emphasis on public–private partnerships will be crucial. Such cooperation can foster information sharing, threat intelligence exchange, and coordinated incident response across sectors. Sector-specific cybersecurity forums, joint simulation exercises, and innovation incentive programmes can significantly enhance national cyber resilience. By cultivating trusted alliances that go beyond legislative mandates, Malaysia can better anticipate and mitigate the increasingly sophisticated threats confronting its digital economy.

Capacity building is also essential for Malaysia’s cybersecurity ambitions. The persistent shortage of qualified professionals impedes effective implementation of CSA requirements across both public agencies and private enterprises. Expanding cybersecurity education and training, introducing targeted scholarships, and developing a robust ecosystem of certification and professional development programmes are necessary to address the talent gap and equip future leaders with expertise in emerging threat domains such as AI-driven attacks and quantum computing risks, to ensure the long-term sustainability of Malaysia’s cyber defence capabilities.

As cyber threats are dynamic in nature, Malaysia’s cybersecurity governance must remain adaptive and forward-looking. Ongoing regulatory evolution is essential to address fast-changing technological landscapes—particularly around AI governance, IoT proliferation, and cloud security. Establishing a regulatory sandbox, encouraging innovation-friendly policies, and implementing periodic legislative reviews will help balance stringent security measures with flexibility for digital growth. This will ensure Malaysia remains agile, resilient, and recognised as a trusted digital hub in Southeast Asia and beyond.

Additional Outlook for Malaysia’s regulatory framework – what is in store

Just this month, Fintech News Malaysia reported that, to counter rising and increasingly sophisticated cybercrime, Malaysia is implementing a multi-pronged national strategy focused on structural and legal reform. At its core is the introduction of a comprehensive Cyber Crime Bill to replace outdated legislation, granting law enforcement the legal strength necessary to address complex digital crime and enhance national security. Furthermore, NACSA is spearheading the creation of a new Centre for Cryptology and Cyber Security Development, envisioned as the national hub for advancing digital resilience and sophisticated cyber defences. Finally, to ensure a faster and more efficient response to scams, the National Scam Response Centre (NSRC) will be restructured under the Royal Malaysia Police (PDRM) to tighten coordination, accelerate incident handling, and streamline investigations.

Likewise, ongoing consultations on Data Protection Impact Assessments (DPIAs), Privacy-by-Design, and automated decision-making show that Malaysia is proactively addressing future technological challenges. These consultations are being led by the Personal Data Protection Department (PDPD) and are part of a broader effort to update the regulatory landscape following the Personal Data Protection (Amendment) Act 2024. By initiating public consultation on these advanced topics, Malaysia is effectively future-proofing its data protection laws to govern the ethical and secure use of emerging technologies.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.

Key Trends in Medicine: AI Powered Healthcare Innovations

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction

A shortage of 11 million healthcare workers is expected by 2030, the World Economic Forum reports, but it is hopeful that advances made by artificial intelligence (AI) in healthcare will help bridge that gap. With its ability to ease tasks, summarise large data sets, save time, and achieve higher accuracy than humans, it is a wonder that adoption of AI by the healthcare sector remained “below average” for so long. However, as AI gets smarter and learns better, more and more spaces in healthcare are bowing to automation. Here are some areas in healthcare that are benefitting from the latest AI and deep learning (DL) applications.

Precision Diagnosis

For strokes caused by a blood clot, time is of the essence. Doctors need to know the initial onset time to determine the right treatment.


Researchers from Imperial College London, the University of Edinburgh, and Technical University of Munich have enhanced stroke timing estimation using AI. They trained the algorithm they developed on a dataset of 800 brain scans with known stroke times, allowing the model to independently identify affected regions in CT scans and estimate stroke timing.


The team then tested the algorithm on data from almost 2,000 other patients. The software proved to be twice as accurate as using a standard visual method. The algorithm also excelled in estimating the “biological age” of brain damage, indicating how much the damage has progressed and its potential reversibility.


The study leader, Dr Paul Bentley of Imperial College London, said the accuracy of this data will help doctors make emergency decisions and administer the best response to stroke patients.

Higher Accuracy

Healthcare powered by data and smart automation is also helping to reduce misdiagnosis.

Missed fractures are among the most common mistakes made at accident and emergency (A&E) units in the UK: as many as 10 per cent of fracture cases are either overlooked or diagnosed late by medical professionals.

This could lead to further injury or harm to the patient, worsening their condition, delaying treatment, and making it harder for hospitals to treat and discharge patients quickly.

The National Health Service (NHS) in the UK has now been given the green light by the National Institute for Health and Care Excellence (Nice) to use AI to improve fracture detection when examining X-rays. Clinical evidence suggests that using AI may improve detection in scans, compared with a medical professional reviewing on their own, “without increasing the risk of incorrect diagnoses”, Nice reportedly told The Guardian.

Nice says the technology is safe, reliable and could reduce the need for follow-up appointments.

AI-powered Assistance

Imagine if you could avoid long waits in crowded rooms just to have your healthcare questions answered by a doctor. How helpful would it be to minimise the number of times you had to pay ever-increasing clinical consultation fees?

AI virtual assistants are the saviour that overworked clinicians, hospital staff, and anxious patients have been waiting for. They are AI-powered apps that chat with patients, clinicians, and staff by voice or text.

Digital assistants speed up triage, answer patient questions, schedule appointments, and automate repetitive tasks that traditionally required many hands and great effort. They can even help explain lab results. This frees staff to focus on care, cuts down waiting times, and keeps costs in check.

Virtual assistants can take the form of chatbots on hospital websites, voice hubs at nursing stations, or prompts on tablets in waiting rooms. In an AI-powered chat, a patient with an inflamed toe might type in their symptoms, and the assistant flags any danger signs (like a high fever) before suggesting home care or a quick clinic visit. On the admin side, digital assistants sort schedules, handle billing questions, and coordinate referrals.

The global market for AI virtual assistants in healthcare reached USD 677.93 million (RM2,869 million) in 2023 and is estimated to hit USD 9,295.63 million (RM39,339.11 million) by 2030, a testament to their need and demand.

Machine Learning Applications

For many chronic diseases, by the time symptoms present and the individual visits a doctor because of an ailment or visible signs, it is often too late.

A new AI machine learning (ML) model can detect the presence of certain diseases before the patient is even aware of any symptoms, according to its maker AstraZeneca.

Using medical data from 500,000 people who are part of a UK health data repository, the machine could predict with high confidence a disease diagnosis many years later.

Slavé Petrovski, who led the research, told Sky News: “We can pick up signatures in an individual that are highly predictive of developing diseases like Alzheimer’s, chronic obstructive pulmonary disease, kidney disease and many others.”

Another example where machine learning has made great strides is a technology developed by IBM Watson Health and Medtronic to continually analyse how an individual’s glucose level responds to their food intake, insulin dosages, daily routines, and other factors, such as information provided by the app user.

For example, are certain foods worsening the patient’s glucose control? Are there particular days or times where a person’s glucose goes high or low? The Sugar.IQ diabetes management application (App) leverages AI and analytic technologies to help people with diabetes uncover patterns that affect their glucose levels. This allows them to make small adjustments throughout the day to help stay on track.

Sugar.IQ provides information that shows how lifestyle choices, medications, and multiple daily injections impact diabetes management and the time spent with glucose in the target range. It provides individualised guidance for understanding and managing daily diabetes decisions, so that people on multiple daily insulin injections have more freedom to enjoy life.

Idiopathic Pulmonary Fibrosis (IPF) is a severe, chronic lung disease that progressively impairs lung function. It affects approximately five million people worldwide with a median survival of only three to four years. Available treatments can only slow its progression, and are unable to halt or reverse the disease.

AI significantly accelerated the drug discovery process for IPF and reduced the timeline from target identification to preclinical candidate selection to just 18 months – a major advancement in the efficiency of pharmaceutical research.

Insilico Medicine used AI-driven algorithms to design Rentosertib to treat IPF. It is the first AI-designed drug – where both the biological target and the therapeutic compound were discovered using generative AI.

Insilico Medicine is now engaging with global regulatory authorities to proceed with further trials aimed at evaluating Rentosertib’s efficacy and expediting its path to regulatory approval. If successful, Rentosertib could become the first AI-discovered therapy to reach patients, potentially transforming the treatment landscape for IPF.

AI is transforming drug discovery, delivery and administration. AI-designed drugs show 80-90 per cent success rates in Phase I trials compared to 40-65 per cent for traditional drugs. AI-based tools such as machine learning (ML) and deep learning (DL) reduce development timelines from more than 10 years to potentially 3-6 years and cut costs by up to 70 per cent through better compound selection.

Assisting in Surgical and Clinical Procedures

It may be too soon to speak of robots performing all the procedures in a surgery, but in operating theatres, AI and robotics are already assisting surgeons to handle surgical instruments, enhance precision, reduce invasiveness, and improve patient recovery.

The emergence of deep neural networks associated with modern computational power has produced reliable automation of certain tasks in medical imaging, including time-consuming and tedious workflows such as organ segmentation. Segmentation produces measurements and automatic extraction of quantitative features that would be impractical to obtain manually in everyday clinical practice.

In aortic and vascular surgery clinics, for instance, challenges existed during routine clinical follow-up for abdominal aortic aneurysms (AAAs). Comparing diameter measurements longitudinally across consecutive computed tomography angiography (CTA) exams was cumbersome: it required recalling multiple prior exams from the hospital’s picture archiving and communication system, measuring each, and comparing the measurements.

Augmented radiology for vascular aneurysm (ARVA) was designed to include automatic fetching of prior CTAs for separate analysis and automatic longitudinal comparison of each aortic segment. The use of cloud-based computing services enables processing of the multiple CTA data sets and the secure return of the report back to the hospital network within minutes. In the hospital, these reports are then automatically identified and placed into the patient’s hospital file or in any review workstation. This saves substantial time in everyday aortic clinic processes.
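A minimal sketch of the longitudinal-comparison step, assuming per-segment maximum diameters have already been extracted from each CTA; the thresholds below are illustrative placeholders, not ARVA's actual clinical criteria:

```python
def flag_segments(exams, size_mm=55.0, growth_mm_per_yr=5.0):
    """exams: list of (years_since_first_exam, {segment: max_diameter_mm})
    sorted by date. Flags segments that exceed a size threshold or grow
    faster than a rate threshold between first and last exam."""
    (t0, first), (t1, last) = exams[0], exams[-1]
    span = t1 - t0
    flags = {}
    for seg, d1 in last.items():
        d0 = first.get(seg, d1)              # no prior measure: assume stable
        rate = (d1 - d0) / span if span > 0 else 0.0
        if d1 >= size_mm or rate >= growth_mm_per_yr:
            flags[seg] = {"diameter": d1, "growth_per_year": round(rate, 1)}
    return flags
```

The value of the automated pipeline is precisely that this comparison runs across every prior exam and every aortic segment without anyone recalling studies by hand.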

Early Detection of Epidemics and Their Spread

AI and ML technologies can also forecast the onset of certain epidemics and track their global distribution using historical data that is available online, satellite data, current social media posts, and other sources. ProMED-mail, an online reporting tool that tracks epidemic reports from around the world, is perhaps the best example of a monitoring system that helps check an epidemic before it causes significant harm.

Optimising Healthcare System Operations

According to the National Library of Medicine, a typical nurse in the US devotes 25 per cent of their working hours to administrative and regulatory tasks. Technology can readily take over these tedious operations. Today, hospitals are using AI to predict peak times, improve bed management, and enhance staff scheduling for optimised resource allocation. For example, one hospital used AI-driven predictive models to adjust staffing based on patient volume, reducing wait times and improving patient throughput.

AI models are also being used in emergency departments to predict patient admission rates, reducing bottlenecks and improving care delivery. By forecasting the number of patients arriving at the ED, hospitals can optimise their staff allocation, reduce patient wait times, and provide faster care.
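As a hedged illustration of this kind of forecasting, the simplest baseline predicts tomorrow's arrivals from a moving average of recent days and converts that into a staffing estimate; real hospital models are far more sophisticated, and the patients-per-nurse ratio here is an assumption for the sketch:

```python
def forecast_arrivals(history, window=7):
    """Naive baseline: forecast tomorrow's ED arrivals as the mean of the
    last `window` days of arrival counts."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def staff_needed(arrivals, patients_per_nurse=5):
    """Ceiling division: nurses needed to cover the forecast arrivals
    (the ratio is illustrative, not a clinical standard)."""
    return -(-round(arrivals) // patients_per_nurse)
```

Even this crude baseline shows the mechanism: a forecast made before the shift starts lets managers allocate staff ahead of demand instead of reacting to a crowded waiting room.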

It’s not tech vs. human

While AI is making great inroads in healthcare, the complete replacement of medical professionals in medicine is still a long way off. The need for human interaction in healthcare is likely to keep AI on the sidelines as a complement, rather than a substitute, for doctors.

The Medical Futurist put forward five fundamental reasons why AI won’t replace doctors – and never will.

  • Empathy – A doctor-patient relationship is built on empathy and trust; and listening and responding in a way that helps the patient feel understood. Very few people are likely to trust an algorithm with life-altering decisions. These are qualities that cannot be fully replicated by artificial intelligence.

  • Physicians have a non-linear working method to arrive at a diagnosis – no algorithm or robot can match the creativity and problem-solving skills required to arrive at a diagnosis.

  • Complex digital technologies require competent professionals – It is more worthwhile to programme AI with those repetitive, data-based tasks, and leave the complex analysis/decision to the complex human brain.

  • There will always be tasks robots and algorithms cannot perform – like the Heimlich maneuver.

  • It has never been tech vs. human – the goal has always been to use tech to help humans.

Ethical and Regulatory Considerations

Regulating AI in the healthcare sector is proving to be a complex and sensitive challenge. While the benefits of software as a medical device (SaMD) are great, patients still need protection from defective diagnosis, unacceptable use of personal data and bias built into algorithms.

The growing integration of AI and ML in drug development demands proactive management of ethical and regulatory challenges to ensure safe applications.

In response, regulatory bodies like the United States Food and Drug Administration and the European Medicines Agency are actively developing AI safety parameters and promoting diverse population validation, informed by detailed regulatory guidelines for robust, ethical AI technologies.

The FDA’s AI/ML SaMD Action Plan focuses on regulating software as a medical device:

  • Predetermined Change Control Plan (PCCP): Allows for modifications to AI/ML software over time, ensuring continuous monitoring and updates while maintaining safety and effectiveness. The basic idea is that as long as the AI continues to develop in the manner predicted by the manufacturer it will remain compliant. Only if it deviates from that path will it need re-authorisation.

  • Good Machine Learning Practices (GMLP): Guidelines to evaluate and improve machine learning algorithms for medical devices.

  • Transparency: Efforts to ensure clear communication about AI-enabled devices to patients and users.

In the United Kingdom, the Regulatory Horizons Council of the UK, which provides expert advice to the UK government on technological innovation, published “The Regulation of AI as a Medical Device” in November 2022. This document considers the whole product lifecycle of AI-MDs and aims to increase the involvement of patients and the public, thereby improving the clarity of communication between regulators, manufacturers, and users.

The National Medical Products Administration (NMPA) of China, which provides regulatory oversight on medical products, published the “Technical Guideline on AI-aided Software” in June 2019. This guideline highlighted the characteristics of deep learning technology, controls for software data quality, valid algorithm generation, and methods to assess clinical risks.

Then in July 2021, the NMPA released the “Guidelines for the Classification and Definition of Artificial Intelligence-Based Software as a Medical Device”, which includes information on the classification and terminology of AI-MDs, the safety and effectiveness of AI algorithms, and whether AI-MDs provide assistance in decision making such as clinical diagnosis and the formulation of patient treatment plans.

Later, in 2022, the Centre for Medical Device Evaluation under the NMPA published the “Guidelines for Registration and Review of Artificial Intelligence-Based Medical Devices”. These guidelines provide standards for the quality management of software and cybersecurity of medical devices taking into consideration the entire product’s lifecycle.

Perhaps the European Union’s AI Act has provided the most stringent standards for regulating SaMDs.

Under the Act, AI systems such as those in AI/ML-enabled medical devices are classified as “high-risk”. This is the highest risk classification for permitted uses of AI, and it triggers a cascade of compliance requirements. Risk management is the focal point, and is intertwined with the EU MDR risk-management system to identify, evaluate, and mitigate the ‘reasonably foreseeable risks’ that high-risk AI systems can pose to health, safety, or fundamental rights such as privacy and data protection.

The EU AI Act’s extra-territorial reach is akin to the EU General Data Protection Regulation (GDPR), transcending European borders and impacting international AI system providers and deployers. It applies to ‘providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country’, and to providers and deployers established outside the EU if ‘the output produced by the AI system is used in the Union’.

Whether any of these regulatory frameworks will actually ensure public trust and compliance while still fostering innovation will depend very much on continuous monitoring and engagement with feedback from all stakeholders including scientists, doctors and patients.

Regulations should be robust yet allow for continuous improvement, to ensure they achieve their intended purpose.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


You Are the Product: How Targeted Ads Became the Most Powerful Tool of Influence in the Digital Age

“It’s not just what you buy — it’s what you think, fear, and believe. And someone paid to shape it.”

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction: The Hidden Power of the Ad Box

Ads used to sell shoes. Now they sell narratives. The practice of highlighting the features and benefits of products and services to a mass audience has evolved. Enter the age of programmatic marketing, where big data – and particularly our data – has reshaped how and why advertisers target you and me. Today’s narratives are curated to reach us based on our emotional vulnerabilities and individual interests, arriving through our personal devices and social media platforms the moment we click on something online. These narratives can be overt or covert, but they are highly personalised based on analyses of our personal demographics and online footprint, making today’s advertisements a precise and potent tool of influence or exploitation.

In Malaysia, numerous charlatans have used Artificial Intelligence (AI) and deepfakes to manipulate the image of Datuk Siti Nurhaliza, a local artist with a massive following, to market fraudulent investments. They also misused the brand identities of trusted online media portals (like The Star and Free Malaysia Today) to scam her followers. One fraudster was even able to imitate her voice and generate fake video calls to tug at the heartstrings of fans, inviting them to invest in the same platform as she supposedly did.

While the use of big data, visual media and social media platforms to sell narratives has revolutionised branding, there is a dark side to how personal data is used to psychologically tune and manipulate consumers’ vulnerabilities. On one side, organisations are under pressure to acquire increasingly detailed information about their consumers; on the other, ad fraudsters are stealing this information for their own unethical gain.

The New Advertising Industrial Complex

Unlike traditional marketing, programmatic advertising relies on real-time insights into consumers’ online behaviour and interests to automate precise advertisement space buying on a large scale. Using consumers’ personal information, advertisers are able to get the right brand in front of the right audience at the right time, within seconds. Such software, known as ad-tech (advertising technology) or a supply-side platform (SSP), can reportedly access thousands upon thousands of publishers’ (owners or managers of websites with ad space to sell) sites at once to sell advertising space to the highest bidder.

Here’s what happens, in the blink of an eye, behind the scenes during each programmatic advertising auction:

Targeting

  • When I visit a website, the publisher’s platform puts the ad space up for grabs. At the same time, the ad-tech software leverages my activity data to match the most suitable ads.

Bidding

  • In milliseconds, the software automatically calculates and places a real-time bid (RTB) for that ad spot based on all the data surveillance conducted on me.

Ad Serving

  • The advertiser with the highest bid wins! Their ad instantly appears on my screen.

Optimisation

  • With every impression, advertisers gather performance data to optimise future bids and improve targeting.
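The auction steps above can be sketched in a few lines of code; the bidder logic here is hypothetical, and real exchanges run far richer variants (second-price auctions, header bidding), but the mechanics are the same:

```python
def run_auction(user_profile, bidders):
    """One programmatic auction round: each bidder scores the user profile
    and returns a bid; the highest bid wins the impression (first-price)."""
    bids = {name: bid_fn(user_profile) for name, bid_fn in bidders.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# Hypothetical bidders: each values a different interest signal
# inferred from the user's tracked activity.
bidders = {
    "travel_ads": lambda u: 2.0 if "travel" in u["interests"] else 0.1,
    "gadget_ads": lambda u: 1.5 if "tech" in u["interests"] else 0.2,
}
```

Running `run_auction({"interests": ["travel"]}, bidders)` returns the travel advertiser as the winner: the profile, not the page content, decided which ad appears.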

All of the above happens within seconds. While advertisers were initially enchanted, the increased dominance of ad space by just a few ad-tech companies raised concerns. Alphabet (Google’s parent company), Amazon and Meta control more than half (55 per cent) of global advertising spend outside China this year, according to Warc’s latest Q2 2025 Global Ad Spend Forecast.

This dominance allows Big Tech companies to raise prices, control transparency and what we see online, and limit opportunities for other bidders on ad space. But companies are fighting back. Ad buyers now look for SSPs or ad-tech companies that can benefit them in a positive way. Before they sign with a programmatic marketplace operator, they ask a critical question: How much access will their company have to quality ad inventory — and how much exposure do they have to the junk? SSPs are now under pressure to provide more transparency and accountability, all detailed through structured contracts.

Data Extraction as Default

Every single moment, Apps, social media platforms, our devices and the websites we visit, are gathering data about our online visits, how much time we spend there and the type of device or browser we use. It saves our preferences and personal information, notes our location and what we’ve left in our online shopping cart, then shows us personalised content based on all this data.

Our online activity is usually tracked with a cookie or pixel which identifies us even after we leave the site. Our activity can also be tracked across different internet-connected devices, like our laptop and smartphone.

According to a 2022 study by cybersecurity company NordVPN, the average website has 48 trackers. Some sites sell this data to third parties (like Google). The information collected is used to serve more targeted and intrusive ads, some of which follow us from website to website.

When a website we visit tracks us, that’s first-party tracking. When a website we visit lets another company track us, that’s third-party tracking.

Third-party tracking companies can track us across most websites visited. For example, if I visited a website about a country I wish to travel to, I might almost immediately see ads suggesting hotel accommodation options while visiting other websites.

Tracking our online footprint has become the default setting, and our consent is often buried deep in fine print. In 2022, NordVPN found that around 30 per cent of third-party trackers belong to Google, 11 per cent to Facebook, and 7 per cent to Adobe. As of 2025, Google still has the biggest share of trackers. Thankfully, at the other extreme, several browsers are actively combating third-party cookies. Brave, Firefox and Safari have blocked third-party cookies by default since 2019, to make our online life more private.

Brave is also the only browser that offers to randomise fingerprint information. Digital fingerprinting is a method of building a profile of you or me based on our system configuration. It can include information about our browser type and version, operating system, plug-ins, time zone, language, screen resolution, installed fonts, and other data. Even when third-party cookies are turned off, sites can still identify us through fingerprinting – a more worrisome prospect, as this cannot be switched off the way cookies can. Even if we delete our cookies, we can be recognised through our digital fingerprint.
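A minimal sketch of the idea: hashing a canonical serialisation of system attributes yields a stable identifier that survives cookie deletion. The attribute names below are illustrative; real fingerprinting scripts gather many more signals (canvas rendering, audio stack, fonts):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine browser/system attributes into a stable identifier by
    hashing a sorted, canonical serialisation of them."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

The same configuration always yields the same identifier, while changing any single attribute (which is why Brave randomises them) produces a completely different one.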

In 2024, Google announced that it would no longer phase out third-party cookies in Chrome, but would instead allow users to make informed choices about their web-browsing privacy. Overall, there seems to be pressure for the tracking landscape to change, and hopefully this translates to safer online browsing for all.

Targeted Ads vs Targeted Harm

According to Forbes Magazine, advertisers know that 91 per cent of consumers are more encouraged to purchase when a brand personalises its communication with them. So they build their messaging based on an audience’s demographics — who they are, what they like, where they are located and what they are most likely to purchase. This approach has clear benefits: it efficiently markets products to those most likely to buy them.

For example, let’s say my dad has just retired and is keen to pick up diving. As he searches online to facilitate this new hobby, a retargeting campaign would suggest safety gear, resorts for the best diving experience, diving coaches or a local diving community – most of which turn out to be extremely helpful and provide value to my dad. He might also end up supporting a remote but extremely gifted maker of diving suits.

While targeted ads are the smartest spend in marketing, they can put consumers at risk when the targeting becomes predatory. Scammers can buy our personal data and use it for purposes more devious than targeted ads and advertising campaigns.

  • Financial ads targeting the poor

In 2013, the US Senate Commerce Committee found that data brokers were targeting poor consumers by grouping them based on their financial vulnerability. Among terms used to categorise the poor into subsets were: “Zero Mobility”, “Burdened by Debt – Singles”, “Hard Times”, “Humble Beginnings”, “Very Elderly”, “Rural and Barely Making it”. This data was then used by unscrupulous parties to market risky financial products or illegal loans with high interest rates to those who could least afford them.

What began as personalisation becomes profiling — and often, exploitation.

While some data brokers prohibit customers from misusing personal information to sell debt-related products, there is a lack of industry oversight to enforce these contract terms.

  • Investment scams preying on individuals looking for high returns

In 2024, social media scams in Malaysia continued to be a significant issue. The Securities Commission Malaysia (SC) identified social media platforms such as Facebook and messaging apps like Telegram as primary channels for online investment scams. Victims were targeted with unlicensed products and services. In 2024 alone, the Royal Malaysia Police’s Commercial Crime Investigation Department recorded 35,368 online scam cases, resulting in RM1.6 billion in financial losses—accounting for 84.5 per cent of all commercial crimes reported during the year.

There has also been the increased use of deep fake technology to impersonate influential figures such as Datuk Siti Nurhaliza to draw fans into investment scams.

  • Discriminatory ads

A study released in 2019 entitled Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes revealed how the Facebook algorithm could skew the delivery of ads for employment and housing opportunities “along gender and racial lines”, which violates antidiscrimination laws.

  • Predatory ads

Predatory programmatic advertising refers to the unethical or illegal use of automated ad buying and placement techniques to exploit vulnerabilities in individuals – for instance, weight-loss ads which target young users, and cosmetic-procedure ads which target women.

  • Filter bubbles

Every day, the content we see and engage with online is increasingly personalised to our interests, preferences and demographic information. Google, for instance, excels at customising our search results based on our location information or past search history. Facebook does the same thing for our News Feed, by analysing which posts, friends, and pages we interact with the most to boost content it believes we will likely engage with. An example of content personalisation and targeted advertising taken to the extreme is the Cambridge Analytica (CA) scandal, where millions of US-based voters were targeted for disinformation campaigns and, to some extent, to influence the outcome of the 2016 US election.

When the Ad Becomes the Story

Ads now blur into content itself.

  • Social Media Influencers or Key Opinion Leaders

Indeed, the rise of content creation by social media influencers (SMIs) has transformed brand marketing. Influencers who generate attractive content, and who are themselves attractive, are highly sought after by brands for paid partnerships. In Malaysia, influencer marketing is particularly effective due to the country’s high social media usage (nearly 90 per cent of the total population of 31 million). According to an article by Bernama, 75 per cent of Malaysians make purchases based on influencer recommendations.

One of the key strengths of Malaysian influencers is their ability to engage authentically with their followers. According to an article by Statista, Malaysian consumers, especially younger audiences, prefer influencers who present relatable and genuine content. This evolving landscape is reshaping brand partnerships, urging companies to focus on authenticity and meaningful interactions to resonate with their target demographics.

The influencer advertising market in Malaysia is projected to grow by 10.79 per cent (2024-2028), resulting in a market volume of USD102.30 million (RM431.8 million) in 2028.

Some social media influencers, while endorsing brands, also use their online platforms to promote good causes. Nandini Balakrishnan, a SAYS video producer, is known for promoting body positivity, while Deborah Henry (a Malaysian model, emcee and TV/podcast host) has been highlighting the plight of refugees for over 10 years. Through her influence, she co-founded Fugee.org, a non-profit that helps refugees living in Malaysia through education, advocacy and entrepreneurship.

Unfortunately, there are downsides to online influencers. A recent study by the University of Portsmouth examined the negative impacts some influencers have. The study found that some SMIs endorse unhealthy or dangerous products such as diet pills, detox teas, and alcohol without full disclosure. Others spread misinformation, encourage unrealistic beauty standards, foster a comparison culture, promote deceptive consumption, and cause privacy risks.

The study found that the use of filtered and curated images by SMIs added to body dissatisfaction, low self-esteem and harmful beauty practices. It also found that influencer-driven content fuelled lifestyle envy and social anxiety, leading to negative self-comparison and diminished wellbeing.

Dr Georgia Buckle, Research Fellow in the School of Accounting, Economics and Finance at the University of Portsmouth, said: “Social media influencers hold immense power over consumer decisions and cultural norms. While they provide entertainment, inspiration, and brand engagement, the unchecked influence of some SMIs can lead to serious ethical and psychological consequences. Our study highlights the urgency for both academic and industry stakeholders to address these challenges proactively.”

According to a study done by Noémie Gelati and Jade Verplancke from Linköping University in Sweden, consumers identify and create links with influencers, driving them to follow influencers’ recommendations. This relationship impacts young consumers on a different level due to their immaturity and lack of understanding about marketing. The study noted that those around 19-24 years old are more prone to follow influencers: “Indeed, (young) followers tend to purchase what the persons they idealise use or wear… Clothing, make-up and even cosmetic surgery, followers aspire to look like their favourite influencers and the beauty ideal they diffuse.”

Finally, influencers themselves are often under immense pressure to produce captivating content which strikes a delicate balance between authenticity and market appeal. This requires a lot of thought and special skill, often leading to stress and burnout. Additionally, the work requires them to maintain a certain public image, increasing the strain on their mental well-being.

  • Brand memes

Corporate memes are another powerful tool for brands looking to connect with younger demographics in a more casual and relatable way. According to Twitter, tweets with images receive 150 per cent more retweets than text-only posts, while meme content specifically tends to generate 60 per cent higher engagement rates compared to standard branded content.

  • Algorithms prioritise ‘engaging’ content

Social media algorithms use engagement, relevance, and user behaviour to determine which posts appear in our feeds. High engagement signals that content is valuable, increasing its visibility. While these systems are designed to enhance the user’s experience and engagement, they often unintentionally create an echo chamber. Users who follow unethical influencers can end up seeing more unethical or misleading content. Some algorithms can amplify extremist propaganda and polarising narratives. These amplifications can lead to societal divisions, promote disinformation, and bolster the influence of extremist groups. Often these types of content use emotionally provocative or controversial material and by focusing on metrics such as “likes” and “shares”, algorithms create feedback loops that take users down a rabbit hole.
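The feedback loop described above can be caricatured in a few lines: a hypothetical feed that scores posts purely by engagement will systematically surface the most provocative content, regardless of its accuracy. The weights are illustrative, not any platform's actual ranking formula:

```python
def rank_feed(posts):
    """Engagement-first ranking: order posts by a weighted sum of likes
    and shares -- the kind of metric that rewards provocative content,
    since shares are weighted most heavily."""
    score = lambda p: p["likes"] + 3 * p["shares"]
    return sorted(posts, key=score, reverse=True)
```

In this sketch, a heavily shared outrage post outranks a calmer post with more likes; each extra share then widens its reach, which generates more shares – the feedback loop in miniature.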

AI: The Engine Behind the Curtain

AI is no longer just a backend efficiency tool — it is the central nervous system of modern advertising.

  • Machine learning determines which ad you see and when.
  • Reinforcement learning constantly tests variations to see what you click, skip, or share.
  • Generative AI personalises ad copy, images, and tone in real-time based on your digital behaviour.
  • Platforms use AI-driven predictive models to infer your mood, political leanings, spending habits — even when you’re most likely to be impulsive.
  • Instead of marketers carving out segments they think are best for an ad campaign, the AI discovers these optimal audiences automatically.

Through AI advertising tools like Google’s Performance Max and Meta’s Advantage+, tech giants like Google, Meta and LinkedIn remove much of the detailed work involved in manually matching a brand’s target persona, generating personalised ads with every click. So it isn’t just targeting. It’s automated persuasion at scale — invisible, relentless, and largely unregulated.

With nearly 3.4 billion people using Meta’s apps (Facebook, Instagram and WhatsApp) each day, the company has massive amounts of data on the human population.

According to MarketBeat, an Inc. 5000 financial media company, Meta’s Advantage+ Shopping saw rapid adoption out of the gate. In initial testing, the company said Advantage+ users were seeing a 32 per cent increase in return on advertising spend (ROAS) compared to its non-automated campaigns. By April 2023, nine months after its release, daily revenue from Advantage+ Shopping campaigns had increased by 600 per cent over the preceding six months.

By the third quarter of 2023, Advantage+ Shopping was generating USD10 billion (RM42.21 billion) in annual run-rate revenue, and by the fourth quarter of 2024, Advantage+ Shopping campaign revenues had scaled past USD20 billion (RM84.42 billion) in annual run-rate, growing 70 per cent from Q4 2023.

Meanwhile, Google has seen a 93 per cent adoption rate of Performance Max among retailers running Google shopping ads.

Who Regulates the Algorithm?

There is as yet no clear legal framework internationally, much less in Malaysia, for overseeing how ads are targeted or how profiling works.

However, Alex C. Engler, a Fellow in Governance Studies at The Brookings Institution, says this does not mean regulators should sit idly by. Instead they should actively study algorithmic systems in their regulatory domain and evaluate them for compliance under existing legislation.

He notes that some regulatory agencies have started this work, including the U.S. Federal Trade Commission’s (FTC) Office of Technology and the Consumer Financial Protection Bureau (CFPB), new algorithmic regulators in the Netherlands and Spain, and online platform regulators such as the UK’s Office of Communications (OFCOM) and the European Centre for Algorithmic Transparency.

Engler further suggests that as oversight agencies gather information about algorithmic systems, their societal impact, harms, and legal compliance, they should also develop a broad AI regulatory toolbox for evaluating algorithmic systems, particularly those with greater risk of harm.

This toolbox, he says, should include means to expand algorithmic transparency requirements, perform algorithmic investigations and audits, develop regulatory AI sandboxes, and welcome complaints and whistle-blowers.

Malaysia: Reclaiming Digital Autonomy

Although Malaysia has the Personal Data Protection Act 2010 (PDPA), the PDPA does not explicitly define any minimum standard for consent. It also does not regulate online privacy and has no provision on e-marketing, cookies or newer tracking and surveillance technology, such as geotagging. Nor does it apply to personal data processed outside Malaysia. However, it is considered best practice for organisations operating in Malaysia to obtain informed consent from users for the use of cookies on websites, especially if they are collecting personal data. Companies that do not provide a cookie consent mechanism run the risk of non-compliance with the PDPA.

Fortunately, the Personal Data Protection Commissioner (PDPC) is considering issuing a data protection guideline that covers digital marketing.

The PDPC can learn from the EU’s General Data Protection Regulation (GDPR), which regulates targeted advertising more stringently, mandating less intrusive advertising that uses less consumer data. It requires consumers to take positive action to provide consent, either by signing a form or clicking ‘I consent’ or ‘I agree’. The GDPR defines consent as “freely given, specific, informed and unambiguous”, given by a clear affirmative action. Malaysia can definitely start by taking a leaf out of these EU guidelines.

By filling these gaps, Malaysia has an opportunity to lead the region by adopting clearer consent rules or stronger transparency standards.

Meanwhile, as consumers we should not be content to be sitting ducks. We need to understand and limit how we are profiled, and how much permission we surrender through our Apps and social media settings. We should proactively review and adjust our privacy settings to control who can view our posts, profile information, and activity.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


Why AI’s Next Leap in Southeast Asia May Begin in Malaysia

Scaling in ASEAN isn’t plug-and-play. Here’s what works

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

ASEAN Is the Next AI Frontier — young digital natives are driving explosive growth in data consumption, businesses are fuelling demand for cloud computing, and governments are actively promoting digitalisation, with 40 percent already implementing national cloud adoption strategies. These trends are pushing the region – home to 680 million people – through rapid digital transformation. With demand rising for digital and data infrastructure, ASEAN has become a critical hub for technology, connectivity and data-driven growth. Although still underdeveloped in terms of AI penetration, the World Economic Forum notes that the region is more interested in AI’s potential benefits and less concerned with its risks, reflecting a culture of acceptance and exploration. It also notes that strong government support and investment are fostering AI adoption and deployment, with several countries already developing their own national AI frameworks. At the same time, funding for research and development has increased, and regulatory sandboxes allow for experimentation.

What AI Firms Need to Scale in ASEAN — and Where They Struggle

In the first half of 2024 alone, more than USD30 billion (almost RM128 billion) was committed to building AI-ready data centers across Singapore, Thailand, and Malaysia, laying the foundation for accelerated computing, AI services and data growth and positioning the region for long-term success. Opportunities abound in language AI, logistics, fintech, and public sector applications. Many ASEAN countries are still at the “greenfield” stage of AI adoption, so early movers will have an advantage.

However, it would be wrong to assume that one playbook works across all borders. AI firms looking to expand in ASEAN should understand that each country has different laws, diverse data sovereignty baselines and no common AI standard. Expansion in this region requires planning, not brute force.

- Start by Understanding Unique Local Industry Objectives

While ASEAN’s diverse user base offers unique product-market-fit opportunities, it also means that ICT providers must co-develop industry-specific solutions, strengthen ecosystem collaborations, and drive data-led sales and commercial strategies before businesses can unlock the full potential of their digital transformation efforts. Companies should not just push for technology adoption but align AI initiatives with the core objectives of local businesses. By identifying specific challenges and deploying AI solutions strategically, businesses can unlock new levels of efficiency, productivity and innovation. Success hinges on a strategic approach focused on creating tangible value.

- Pivoting and Adding Value Where There are Constraints

There is high friction in entering certain ASEAN countries. Firstly, the AI expertise pool is still limited and unequally distributed. There is a shortage of skilled personnel across the AI spectrum, from machine learning engineers and data scientists to professionals in algorithm development and those able to critically evaluate AI solutions.

Secondly, the cost of implementing AI solutions remains high, especially when transitions from legacy applications are factored in. Current cost structures often reflect US, UK and Japanese benchmarks, and local AI companies tend to cater to clients in developed regions, where margins are higher. This limits their resources for domestic projects and means some ASEAN nations will require subsidies to see a return on investment (ROI).

Thirdly, typical initial setup costs for AI projects, especially large-scale implementations, are substantial, ranging anywhere from USD5,000 to USD500,000 (RM21,274 to RM2.12 million) depending on tools, machine learning models, data security, and hardware requirements. This leads some companies to favour smaller-scale data analytics solutions.

Fourth is the ethical concern of job displacement. While AI could increase productivity, it will also disrupt the workforce significantly: McKinsey estimates the loss of 23 million jobs to automation by 2030. New jobs will replace old ones, but how will the job market shape up, and how will governments cope with unemployment? These considerations may delay decisions around AI projects.

Finally, legal uncertainty in some markets causes deployment paralysis. AI-specific regulations are either absent or still under discussion in many ASEAN countries, and current rules mostly centre on existing Personal Data Protection Acts (PDPA) modelled on the European General Data Protection Regulation (GDPR). Localising these could take more than three years, with further delays during election cycles.

Malaysia vs. Singapore: Overflow Strategy in Action

EY ASEAN’s Joongshik Wang remarked: “While enterprises (in Singapore) are keen to invest in emerging technologies, many businesses struggle to bridge the gap between pilot and full-scale implementation. This is often due to integration challenges, lack of clear return on investment (ROI), and the need for stronger ecosystem support to drive business value.” While Singapore boasts the most robust legal framework, it falls short in other areas.

Higher costs for utilities, labour and land also make Singapore less attractive for AI infrastructure build-out. On the other side of the straits, Malaysia offers high-quality, ubiquitous connectivity, larger tracts of available land, a cheaper workforce and the necessary infrastructure, along with the physical space for data centers and manufacturing parks to scale.
While Singapore is a great environment for firms to set up their headquarters, Malaysia is an excellent, cost-effective complement to Singapore’s HQ functions.

Why Malaysia? It’s ASEAN’s Shortcut for Getting Things Right

As ASEAN Chair, Malaysia is in an enviable position to lead the digital transformation of the region. Benefiting from its proximity to Singapore, along with ample power and water resources, Johor (a state at Malaysia’s southern tip) has in recent years attracted major hyperscalers including Microsoft, Equinix and NTT, with Stack’s 220-megawatt facility being the latest. Other hyperscalers such as Amazon Web Services, Google, Alibaba, and Huawei also have a firm footing here.

Their presence is evidence of the trust they have in Malaysia’s resources and connectivity, high quality ports and cloud zones, and dependable infrastructure and logistics.

- Malaysia Actively Promotes and Welcomes AI Collaboration Initiatives

At a recent “Strengthening ASEAN-China Cooperation” forum, the Chairman of the Centre of Regional Strategic Studies (CROSS), Lee Chean Chung, said that Malaysia is well positioned to lead the region’s digital transformation: “Malaysia’s strategic location, diverse and multilingual talent pool, robust infrastructure and collaborative mind-set make it a natural hub for AI development in the region.”

CROSS actively promotes forward-looking AI policy development and facilitates regional cooperation, encouraging the establishment of joint ASEAN-China AI research centers, cross-border innovation hubs and regional talent development programmes. By championing policy frameworks supportive of responsible AI development and deployment, Malaysia is helping to shape a future where AI drives economic growth and fosters shared prosperity and equity.

Malaysia is also visionary and focused, able to articulate its need for on-the-ground support to ensure a long-term, globally ready workforce. To that end, Lee hoped ASEAN and China would collaborate to invest in science, technology, engineering and mathematics (STEM) education, launch AI fellowship programmes, and expand youth exchange initiatives.

- Malaysia’s Digital Government Is Laying Serious Groundwork

Beyond policy and ideas, the Malaysian Government has set an example by driving its own AI adoption and readiness. Malaysia’s Digital Ministry has harnessed generative AI under its five-year AI technology plan. Recently, 445,000 public officers were given access to Google Workspace’s latest generative AI capabilities to scale up AI adoption across the civil service and enhance government service delivery. The first phase of the programme, AI at Work, was introduced in December 2024 alongside the launch of Malaysia’s National AI Office (NAIO). As the central authority championing Malaysia’s AI agenda, the NAIO is a further demonstration of Malaysia’s commitment to positioning the country as a regional leader in AI technology and applications.

- Malaysia’s Existing Industries Create Real AI Demand

Malaysia’s manufacturing sector (with an expected GDP contribution of RM587.5 billion by 2030) has been actively integrating AI into automation, logistics and quality control.

For instance, SMART Modular Technologies (SMART), a global leader in specialty memory and storage solutions, uses AI-powered high-speed precision industrial robots at its Malaysian facility to identify and isolate manufacturing defects.

Another example is KVC, a leading B2B distributor of electrical products, solutions, and related services. It has leveraged IBM Robotic Process Automation (RPA) to enhance the finance department’s Procure-to-Pay processes, automating key tasks such as invoice extraction, matching, and payment processing. This has improved operational efficiency, reduced errors and lowered costs, accelerating workflow and driving efficient financial management.
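The invoice-matching step in a Procure-to-Pay flow can be sketched in outline: a classic three-way match compares the purchase order, the goods receipt, and the invoice before payment is released. The field names and tolerance below are illustrative assumptions, not KVC’s or IBM RPA’s actual interface.

```python
def three_way_match(po, receipt, invoice, tolerance=0.02):
    """Return (ok, reasons): pass the invoice for payment only when the
    purchase order, goods receipt, and invoice agree within tolerance."""
    reasons = []
    if invoice["po_number"] != po["po_number"]:
        reasons.append("PO number mismatch")
    if invoice["quantity"] != receipt["quantity_received"]:
        reasons.append("quantity differs from goods receipt")
    if abs(invoice["unit_price"] - po["unit_price"]) > tolerance * po["unit_price"]:
        reasons.append("unit price outside tolerance")
    return (not reasons, reasons)

# Hypothetical documents: a 1% price variance sits within the 2% tolerance.
po = {"po_number": "PO-1001", "unit_price": 50.0}
receipt = {"quantity_received": 10}
invoice = {"po_number": "PO-1001", "quantity": 10, "unit_price": 50.5}
ok, reasons = three_way_match(po, receipt, invoice)
print(ok, reasons)  # True []
```

Automating exactly this kind of rule-based cross-checking is where RPA removes manual effort and transcription errors.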

Similarly, the retail and food and beverage (F&B) sectors are also using AI, from marketing to inventory management. By analysing past sales to predict demand and seasonal trends more accurately, restaurants can now order just the right amount of stock, reducing waste and protecting profit margins.
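The demand-prediction idea can be illustrated with a deliberately simple seasonal moving average: forecast each weekday’s demand as the average of the same weekday across recent weeks. Real systems use far richer models, and the sales figures below are hypothetical.

```python
def seasonal_forecast(daily_sales, season_length=7):
    """Forecast the next week's daily demand as the per-weekday average
    over the last three weeks (a naive seasonal moving average)."""
    history = daily_sales[-3 * season_length:]
    buckets = [[] for _ in range(season_length)]
    for i, qty in enumerate(history):
        buckets[i % season_length].append(qty)
    return [sum(b) / len(b) for b in buckets]

# Hypothetical daily unit sales, Monday to Sunday (weekend spike).
week = [10, 12, 14, 16, 18, 30, 35]
print(seasonal_forecast(week * 3))  # equals `week` when demand is stable
```

Even this crude forecast already captures the weekend spike, which is what lets a restaurant order more stock for Saturday than for Monday instead of a flat amount.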

Some F&B businesses are even using AI to test new flavour combinations, cutting down on research and development (R&D) time.

Customer service in Malaysia has evolved too. Many online retailers like Zalora use AI chatbots to answer FAQs, and even suggest popular add-ons based on customer preferences.

Nevertheless, many businesses, especially small and medium enterprises (SMEs) in Malaysia, struggle to bridge the gap between pilot and full-scale implementation.

Malaysia’s Legal Framework Isn’t a Hurdle — it’s a Filter

Malaysia has been one of the first countries in the region to have adequate legal frameworks in place to regulate the use and misuse of internet technology and continues to draw up new laws accordingly.

The recently passed Data Sharing Act 2025 establishes a legal framework for data sharing among public sector agencies and between government agencies and businesses. It aims to improve government efficiency, enhance transparency, and ensure data security. The Act also seeks to protect sensitive and confidential information, strengthening data security through structured and accountable data governance. It reflects Malaysia’s recognition that data continues to drive decision-making and digital transformation, and its commitment to navigating this new digital regulatory environment effectively.

Malaysia was also among the first ASEAN nations to establish the Personal Data Protection Act (2010) to regulate the processing of personal data in commercial transactions by Data Users and protect the interests of Data Subjects.

The government has also issued the National Guidelines on AI Governance & Ethics (AIGE), which outline the obligations of end users, policymakers and developers. Although not legally binding, they propose seven core principles: Fairness; Reliability, Safety, and Control; Privacy and Security; Inclusiveness; Transparency; Accountability; and Pursuit of Human Benefit and Happiness.

While there is no dedicated AI law yet, an AI Bill is in the works. Meanwhile, Malaysia’s policy environment is made predictable and stable by its existing local government, intellectual property, contract and employment laws.

Malaysia also has skilled local legal navigators to help AI firms avoid missteps in interpreting and executing Malaysia’s regulatory framework.

Conclusion: Malaysia Doesn’t Replace ASEAN — It Unlocks It

Firms entering Malaysia gain operational clarity and regional access. It’s where complexity becomes manageable — and strategy beats speed.

As a start, Malaysia is definitely looking for firms able to deploy capable AI solutions with lower burn rates and faster iteration cycles. Those deploying affordable, efficient AI stacks (e.g. from China) can leapfrog into ASEAN from here. Chinese technology companies such as Baidu, Alibaba and Tencent have been active in developing open-source AI models for many years. Their strategy, supported by Chinese universities and the government, can be seen as an “open innovation” model aimed at accelerating research and development and leapfrogging past the US. Because high-quality open-weight Large Language Models (LLMs) are now available, Malaysia can access them at far lower cost than before and run its own LLMs without transferring sensitive data to commercial third parties or foreign countries, giving it greater data autonomy.
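Running an open-weight LLM locally typically means sending prompts to an inference server on one’s own hardware, so sensitive data never leaves the machine. The sketch below builds a request for an Ollama-style local endpoint; the endpoint URL, model name, and JSON fields are assumptions based on that project’s documented API, and the actual network call is deliberately left out.

```python
import json

# Assumed default address of a locally hosted Ollama-style inference server.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_local_llm_request(model, prompt):
    """Build the JSON payload for a local, non-streaming completion.
    Nothing here leaves the machine: data autonomy is the point."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

payload = build_local_llm_request("llama3", "Summarise Malaysia's PDPA in one line.")
print(payload)
# To send, POST `payload` to LOCAL_ENDPOINT (e.g. with urllib.request);
# this requires a running local server, so it is omitted here.
```

The contrast with a commercial cloud API is the destination, not the code: the same prompt routed to a third-party endpoint would place the data outside the organisation’s control.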

- Success Requires a Local Partner and a Legal Firm to Navigate and Execute

Successfully setting up in Malaysia will also require a local partner, good legal counsel and a local IT firm for execution. As the regulatory environment is constantly in flux, engaging legal firms with a full understanding of local laws is a crucial first step for AI firms entering Malaysia. With AI already having an impact on everything from risk assessment and insurance underwriting to policies and claims processing, AI firms should look for attorneys who are competent and current with the latest developments in AI and technology to help them find firm footing in Malaysia and, beyond it, in ASEAN.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


Beyond Algorithms: Shaping Malaysia’s Ethical Approach to Artificial Intelligence

by Thulasy Suppiah, Managing Partner of Suppiah & Partners

Artificial Intelligence (AI) has transformed our lives with its convenience, seamlessly carrying out tasks that once required human intelligence. From auto-correcting sentences to producing creative content, AI has become an invisible force embedded into everyday activities. While AI systems hold immense promise, they differ fundamentally from traditional software due to their unique ability to learn, adapt, and evolve — creating new ethical challenges we must address.

According to Eleanor Manley, AI and Deep Learning Consultant and Co-founder of Metta Space, no one fully understands how deep learning AI works, not even its creators. In her TEDx talk, “Why AI Can’t Be Ethical – Yet”, she states: “For us to keep using AI, we need to trust it. And right now, we can’t, because we simply don’t understand enough about how it works.”


However, it is worth considering that not every technology requires users to understand its mechanics in order to trust it. Everyday technologies such as WiFi or GPS function reliably without most users needing to comprehend the underlying systems. What matters more is ensuring trustworthy outcomes, not necessarily full public comprehension.

Given the profound potential impact of AI, a key question arises: how do we ensure that AI decisions align with national values, corporate responsibility, and broader societal norms?

Before we answer, it is useful to reflect that sectors like healthcare and finance have successfully used ethical standards to guide growth and benefit humanity. Bioethics in medicine and fiduciary duties in finance have helped build trust, accountability, and resilience — lessons that AI governance can learn from.

Learning from Established Ethical Frameworks

Thankfully, several comprehensive frameworks are already in place to guide ethical AI development:

  • The UNESCO Recommendation on the Ethics of Artificial Intelligence (2022) serves as a global benchmark, offering universal principles for responsible AI.

  • The ASEAN Guide on AI Governance and Ethics (2024) provides region-specific guidance, reflecting the unique challenges and priorities of Southeast Asia.

  • Malaysia’s own National Guidelines on AI Governance and Ethics (2024), issued by the Ministry of Science, Technology and Innovation (MOSTI), adapt these global and regional standards to suit our local context and national values.

Five Pillars for Malaysia’s Ethical AI Approach:

Human-Centricity

AI should enhance, not replace, humanity. Human dignity, agency, and well-being must remain the central focus. Individuals must retain control over decisions that significantly impact their lives.

Fairness and Non-Discrimination

AI systems must be developed and monitored to prevent biases and ensure equitable outcomes for all Malaysians.

Transparency and Explainability

Trust relies on understanding. AI systems should be designed to be interpretable, with users able to understand how major decisions are made and to challenge unfair outcomes. Black-box models that erode trust should be avoided.

Privacy and Security

Strong protections must be in place for personal data. Privacy safeguards and cybersecurity measures are non-negotiable in maintaining trust.

Accountability, Reliability, Safety and Control

Clear responsibility lines must exist for AI outcomes. Developers and deployers must be identifiable and accountable, with mechanisms to ensure AI operates reliably, predictably, and safely.

Global Regulation in Action: The EU AI Act

Meanwhile, the European Union AI Act is the first major regulatory effort to comprehensively govern AI, potentially setting a global standard. It places most responsibilities on developers and deployers of high-risk AI systems — which include AI used in critical areas such as healthcare, law enforcement, infrastructure, and employment.

Additionally, developers of General Purpose AI (GPAI) models — such as ChatGPT and Midjourney — must comply with specific obligations:

  • Provide technical documentation,
  • Publish summaries of training content,
  • Comply with the EU Copyright Directive, which ensures that AI does not unlawfully exploit copyrighted works.

These obligations are significant because they aim to improve transparency, protect human creators, and ensure that AI models do not become unchecked sources of misinformation or harm.

The Act also mandates systemic risk evaluations, adversarial testing, and incident reporting — ultimately benefiting users by building safer, more predictable, and less biased AI systems.

Creativity and Intellectual Property: A New Frontier

As for creative industries, the Kellogg School of Management rightly points out that the legal profession must rethink and update intellectual property (IP) laws. Current laws often struggle to address the blurred boundaries between human and machine-generated work.
Yet this is easier said than done: legal frameworks move slowly, while technology evolves rapidly. This gap raises important questions: can the law ever keep pace with AI innovation? And if not, can ethical principles fill the void until the law catches up?

Ethics may thus serve as a critical stopgap — guiding AI’s responsible development even in areas where formal laws remain unsettled.

Rethinking Leadership in an AI Era

The Kellogg School also calls on business leaders to move beyond simply reacting to consumer demands — which often favour short-term convenience over long-term wellbeing. Instead, it urges leaders to adopt a forward-looking mindset, much like Henry Ford once did by envisioning mass automobile use before there was widespread demand.
In the context of AI, this means scrutinising both the large costs and broad benefits across multiple stakeholders — consumers, creators, workers, and society at large. The call to action is clear: “Let’s start the dialogue now — before AI does it for us.”

Conclusion

By embracing ethical principles and frameworks early, Malaysia can unlock AI’s transformative potential while safeguarding the dignity, rights, and wellbeing of all its citizens. But success depends on continuous collaboration — among industries, academia, civil society, and everyday consumers — and a shared commitment to ethical awareness as technology continues to evolve at unprecedented speed.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


Leveraging AI for Improved Road Care and Safety

By Thulasy Suppiah, Managing Partner of Suppiah & Partners, and Ramakrishna Damodharan of Robomy Sdn Bhd (https://robo.my/) a company which has developed AI solutions for road and highway maintenance and management.

While Peninsular Malaysia boasts excellent connectivity through its network of roads and expressways, some stretches are poorly maintained. Between 2022 and July 2024, the Road Accident Management System (RAMS) under the Works Ministry recorded 181 road accidents caused by potholes, including 23 fatal ones. Of 223 accidents recorded in Selangor between 2018 and 2020 due to poor road conditions, 148 resulted in death.

In April last year, the Johor Baru Sessions Court awarded RM721,000 to a 49-year-old man who suffered injuries when his motorcycle hit a pothole in 2021. The case highlighted the failure of state-appointed private companies to fulfil their road maintenance duties. Infrastructure management can be challenging, and traditional methods of road inspection are tedious and time-consuming. Solutions are usually reactive, resulting in poor road-patching practices and the use of inferior materials, and ignoring issues caused by water flow.

Robomy, an AI R&D firm, emphasises that if properly executed, the use of AI in road infrastructure management could transform road safety in Malaysia. Through data analytics, computer vision, and advanced sensor technologies, AI-powered road assessment systems can provide real-time insights by processing large datasets within minutes. For instance, Robomy’s proprietary solution, Robolyze, is designed to monitor road conditions, detect defects such as potholes, cracks, and sunken patches, and even predict potential hazards. This provides proactive, cost-efficient solutions to inspect, monitor, and maintain roads. An important AI feature is its predictive capability, enabling strategic and preventive maintenance. Predictive analytics, a core component of one of Robomy’s products, allows early detection of road deterioration, optimises maintenance schedules, and reduces repair costs. This approach prevents catastrophic failures.
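Predictive maintenance of this kind can be sketched as trend extrapolation over inspection scores: fit a simple least-squares line to a segment’s past condition ratings and estimate how long before it crosses a failure threshold, so repairs are scheduled before that point. The threshold and ratings below are hypothetical, and this is not Robolyze’s actual method.

```python
def periods_until_threshold(ratings, threshold=40.0):
    """Given periodic condition ratings (100 = perfect, lower = worse),
    fit a least-squares line and estimate how many more inspection
    periods remain before the rating falls to the threshold."""
    n = len(ratings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(ratings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ratings)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope >= 0:
        return None  # not deteriorating: no repair needed on this trend
    intercept = y_mean - slope * x_mean
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - (n - 1))  # periods left after the last reading

# Hypothetical quarterly ratings for one segment, falling ~5 points/quarter.
print(periods_until_threshold([90, 85, 80, 75, 70]))  # 6.0 quarters to reach 40
```

Ranking all segments by this remaining-life estimate is one way an authority could turn inspection data into a preventive repair schedule instead of reacting to failures.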

As our cities grow and road networks expand, the need for smart, innovative technologies to maintain road infrastructure efficiently has never been greater, and AI can perform this role. In Singapore, where manpower is limited, AI-powered solutions help detect potholes, water ponding, slanted lamp posts, damaged traffic signs or grille covers, and broken manholes. Machine learning automatically detects defects from smartphone footage, grades their severity and highlights those in need of repair. As a result, Singapore has one of the best-maintained road networks in the world.

Robomy has brought similar innovations to Malaysia. Robolyze is tailored to address local challenges such as tropical weather impacts, varying road construction standards, and diverse urban-rural landscapes. It integrates cutting-edge AI capabilities, and allows real-time data processing directly from sensors and cameras installed on vehicles or road infrastructure.


This reduces reliance on centralized data centers, enhances response times, and ensures continuous monitoring even in remote areas.

As more organisations and state entities look to deploy AI in road infrastructure management, there are important legal considerations. Advancements in machine learning, computer vision, and use of autonomous vehicles and sensor technology raise issues related to data privacy, algorithmic transparency, liability and ethics.

While Malaysia has no dedicated AI regulatory framework or policy yet, stakeholders must analyse existing laws and regulations governing AI applications across various sectors.

The Ministry of Science, Technology, and Innovation (MOSTI) is responsible for establishing AI governance and launched the National Artificial Intelligence Roadmap 2021–2025 to address risks associated with AI; and in December 2024, the government established the National AI Office (NAIO) to drive AI-based digital transformation.

Meanwhile, the Ministry of Communications, as the implementer of the Communications and Multimedia Act (CMA), holds the legislative power and governs activities in digital spaces, in addition to the hardware that enables their functions.

The Cyber Security Act 2024 addresses the management of cyber security threats and incidents related to the National Critical Information Infrastructure (NCII). This is particularly relevant as AI-driven road infrastructure applications—such as pothole management systems—require access to government-maintained databases, including mapping systems, traffic flow data, and road maintenance records at both Federal and State levels. Ensuring secure and authorized access to these databases is crucial to prevent cyber threats that could compromise public safety.

From a contractual standpoint, AI-powered road management solutions must align with the Contracts Act 1950, particularly in defining liability, accountability, and transparency in AI decision-making. Key legal considerations include the enforceability of AI-generated contracts, the attribution of liability for erroneous AI-driven maintenance recommendations, and the need to ensure fairness in automated decision-making processes, such as prioritizing road repairs without bias or undue influence.

Furthermore, AI-powered pothole detection and predictive maintenance systems process vast amounts of personal data, including vehicle movement patterns, dashcam feeds, and geolocation data. The Personal Data Protection Act 2010 (PDPA) remains the primary legislation regulating the processing of personal data in commercial transactions in Malaysia.

Any entity deploying AI in road infrastructure must comply with the seven Personal Data Protection Principles, ensuring data security, informed consent, and lawful processing of personal information. Compliance with these legal frameworks is essential to ensure AI-driven road infrastructure applications operate transparently, fairly, and within Malaysia’s regulatory landscape.
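Part of that compliance posture can be operationalised as record-keeping: for each category of personal data, log the lawful purpose and whether informed consent was obtained, and refuse processing otherwise. The sketch below is a simplified illustration of the idea (roughly, the PDPA’s General and Notice & Choice principles), not legal advice or a complete compliance control.

```python
from datetime import datetime, timezone

class ProcessingRegister:
    """Minimal register: personal data may be processed only for a
    recorded purpose and with informed consent (simplified sketch)."""

    def __init__(self):
        self._records = {}

    def register(self, data_category, purpose, consent_given):
        self._records[data_category] = {
            "purpose": purpose,
            "consent": consent_given,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    def may_process(self, data_category, purpose):
        # Processing is allowed only for the purpose consent was given for.
        rec = self._records.get(data_category)
        return bool(rec and rec["consent"] and rec["purpose"] == purpose)

reg = ProcessingRegister()
reg.register("geolocation", "road-defect detection", consent_given=True)
print(reg.may_process("geolocation", "road-defect detection"))  # True
print(reg.may_process("geolocation", "targeted advertising"))   # False
```

The purpose check in `may_process` mirrors the PDPA idea that consent given for one purpose (detecting road defects from dashcam feeds) does not authorise processing for another.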

AI is set to transform road infrastructure by enabling smarter, more efficient, and proactive maintenance solutions. From detecting potholes before they become hazards to optimizing repair schedules based on real-time data, AI enhances road safety and resource management. By integrating AI into road care, authorities and stakeholders can reduce costs, minimize disruptions, and improve overall road conditions for the public. At the same time, the legal landscape must evolve to support this shift—ensuring clear contractual frameworks with AI solution providers, addressing accountability in automated decision-making, and mitigating risks such as data security concerns. With the right balance of innovation and regulatory safeguards, AI-driven road infrastructure can pave the way for safer, more sustainable transportation networks.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


Ethical AI: Charting a Course for an Inclusive Digital Malaysia

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Malaysia’s burgeoning AI landscape, from data centres to rapidly developing technologies, holds immense promise. Realising this potential, however, requires navigating complex challenges – infrastructure needs, skills gaps, and data security concerns, among others. Critically, we must also address the ethical dimensions of AI, ensuring this powerful technology serves all Malaysians equitably. We can chart a more inclusive, ethical, and prosperous digital future by focusing on a core set of guiding principles, adopted and applied appropriately across all levels of our society.

Fortunately, Malaysia doesn’t need to reinvent the wheel. Robust ethical frameworks for AI already exist. The UNESCO Recommendation on the Ethics of Artificial Intelligence provides a global blueprint, while the ASEAN Guide on AI Governance and Ethics offers a practical, regional perspective. MOSTI’s own National Guidelines on AI Governance and Ethics tailors these principles to our Malaysian context. Building upon these solid foundations, the following ethical pillars should guide our national approach, embraced by all stakeholders:

FOUNDATIONS OF ETHICAL AI – THREE KEY SOURCES

WHAT FRAMEWORKS CAN GUIDE ETHICAL AI DISCUSSIONS IN MALAYSIA?

[2022] UNESCO RECOMMENDATION ON THE ETHICS OF ARTIFICIAL INTELLIGENCE

A global framework that promotes fairness, transparency, and accountability while providing guidelines to protect human dignity and fundamental rights.

[2024] ASEAN GUIDE ON AI GOVERNANCE AND ETHICS

A region-specific guide that aligns AI practices with Southeast Asian values and offers practical steps for ethical AI governance and deployment.

[2024] MALAYSIA’S NATIONAL GUIDELINES ON AI GOVERNANCE AND ETHICS (MOSTI)

A local framework that adapts international principles to Malaysia’s context, offering voluntary guidance on ethical AI practices.

MALAYSIA’S ETHICAL AI PILLARS

WHAT CORE PRINCIPLES CAN GUIDE MALAYSIA’S AI JOURNEY?

HUMAN-CENTRICITY

AI should serve humanity, not replace it. This fundamental principle emphasizes prioritising human well-being, dignity, and agency. We must ensure AI systems enhance human capabilities, not diminish them, and that individuals retain control over decisions that significantly impact their lives.

FAIRNESS & NON-DISCRIMINATION

AI systems must be free from bias, ensuring equitable outcomes for all Malaysians. This requires careful attention to data quality, algorithmic design, and ongoing monitoring to prevent perpetuating or exacerbating existing inequalities.

TRANSPARENCY & EXPLAINABILITY

Trust is built on understanding. AI systems should be understandable, allowing individuals to comprehend how decisions are made and providing avenues to challenge those outcomes. "Black box" algorithms erode trust and should be avoided if possible.

PRIVACY & SECURITY

Protecting personal data in our increasingly data-driven world is paramount. Robust data security measures and strict adherence to privacy regulations are non-negotiable.

ACCOUNTABILITY, RELIABILITY, SAFETY, AND CONTROL

Clear lines of responsibility are essential, encompassing the reliability and safety of AI systems. When AI systems cause harm, those responsible must be identifiable and held accountable. This necessitates robust oversight mechanisms and a commitment to building systems that operate as intended, minimising unintended consequences.

By embracing these ethical principles, Malaysia can unlock the transformative potential of AI while safeguarding the well-being of all its citizens. This is not solely the government’s responsibility; it requires continuous dialogue, collaboration, and a shared commitment to ethical awareness across all sectors – from industry and academia to civil society and individual citizens. Only through this collective effort can we ensure that AI contributes to a more just and prosperous future for all Malaysians.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


TRAFFIC MANAGEMENT SYSTEMS:

BENEFITS, CONSIDERATIONS, AND USER RIGHTS

by Thulasy Suppiah, Managing Partner

Traffic Management Systems (TMS) are becoming increasingly vital in modern urban planning and infrastructure. These systems use a combination of sensors, cameras, and data analysis to monitor and manage traffic flow, reduce congestion, enhance road safety, and provide real-time information to both traffic authorities and road users. As cities grow and transportation demands increase, understanding the benefits and implications of TMS is crucial for stakeholders, including solution providers, users such as highway operators and government entities, and individual road users.

BENEFITS:

1. Improved Road Safety:
TMS enhances safety by identifying pedestrian and vehicle movements, employing intelligent signaling techniques, and automatically managing incidents. Real-time monitoring helps in detecting accidents and hazards, allowing for quick responses. Systems alert drivers to potential hazards like closed roadways or low visibility, encouraging safer driving practices.

2. Reduced Traffic Congestion:
One of the primary goals of TMS is to alleviate traffic bottlenecks. By using real-time data on traffic conditions and intelligent traffic control techniques, TMS optimizes traffic flow. Predictive analysis helps identify congestion-prone areas and redirect traffic accordingly.

3. Reduced Fuel Consumption and Emissions:
Efficient traffic control systems can lower fuel usage and vehicle emissions. Consistent traffic flow enables vehicles to maintain steady speeds, improving fuel efficiency. Strategic route development and congestion avoidance contribute to a more sustainable and ecologically friendly urban transportation landscape.

4. Improved Emergency Response Times:
TMS enables emergency vehicles to navigate congested areas more efficiently. Prioritizing routes using smart traffic lights and creating green corridors ensures that emergency services can reach their destinations faster, supporting rescue and emergency operations effectively.

5. Better Public Transit:
TMS prioritizes public transportation by optimizing transit routes, leading to improved service and increased ridership. This integration reduces traffic congestion and enhances transportation efficiency.

6. Decreased Noise Pollution:
By streamlining traffic flow and minimizing the need for frequent braking and acceleration, TMS helps reduce noise pollution. Smoother traffic patterns lead to quieter roadways.

7. Enhanced Accessibility for Pedestrians and Cyclists:
Intelligent traffic arrangements provide dedicated lanes for cyclists and extended crossing times for pedestrians, promoting safety and convenience for non-vehicular road users.

8. Predictive Insights:
Smart traffic management systems offer predictive insights by analyzing data collected from traffic sensors. This data assists governing bodies in understanding roadway usage and making informed decisions.
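
The predictive side of a TMS can start from something as simple as extrapolating recent sensor counts; production systems use far richer models. A minimal sketch, with made-up readings and a hypothetical `moving_average_forecast` helper:

```python
from collections import deque

def moving_average_forecast(counts, window=3):
    """Forecast the next interval's vehicle count as the mean of the
    last `window` readings -- a naive baseline predictor."""
    if len(counts) < window:
        raise ValueError("not enough readings")
    recent = list(counts)[-window:]
    return sum(recent) / window

# Hypothetical 15-minute vehicle counts from one road sensor.
readings = deque([120, 135, 150, 160, 170], maxlen=96)
forecast = moving_average_forecast(readings, window=3)
print(round(forecast))  # 160 -- mean of the last three readings
```

A rising forecast for a given sensor is one way a governing body might flag a congestion-prone stretch before bottlenecks form.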

CRITICAL CONSIDERATIONS FOR STAKEHOLDERS:

For Solution Providers:

Data Security and Privacy

Ensure that the TMS complies with data protection regulations, safeguarding user data from unauthorized access and misuse.

System Reliability

Implement robust testing and maintenance protocols to ensure the system operates reliably under various conditions.

Scalability and Adaptability

Design the system to be scalable and adaptable to future technological advancements and changing traffic patterns.

For Users (Highway Concessionaires, Government/Agencies/Town Councils):

System Integration

Ensure the TMS integrates seamlessly with existing infrastructure and other smart city initiatives.

Training and Support

Provide comprehensive training for personnel to effectively operate and maintain the TMS.

Performance Monitoring

Regularly monitor the system's performance to identify areas for improvement and optimization.

For Road Users:

Awareness of Rights

Understand your rights concerning data collection and usage by TMS and be informed about how traffic data affects route planning and traffic enforcement.

Safety and Compliance

Adhere to traffic regulations and be aware of real-time information provided by the TMS to ensure safe driving practices.

Feedback Mechanisms

Utilize available channels to provide feedback on the TMS, helping to improve its effectiveness and user experience.

EXAMPLES OF AI IN TRAFFIC MANAGEMENT SYSTEMS

Conclusion:

Traffic Management Systems offer numerous benefits, from enhancing safety and reducing congestion to improving environmental sustainability and emergency response times. However, successful implementation requires careful consideration of data privacy, system reliability, and stakeholder engagement. By understanding the benefits, considerations, and user rights associated with TMS, stakeholders can work together to create more efficient, safe, and sustainable urban transportation systems.



GROUNDED BY CYBER THREATS:

AVIATION'S GROWING DIGITAL VULNERABILITIES

by Thulasy Suppiah, Managing Partner

A few weeks ago, Japan Airlines (JAL) suffered a major cyberattack on one of the busiest days to fly – Boxing Day. While the resulting disruptions were temporary, it highlighted yet again the fragility of IT-dependent systems.

Beginning at 7.24 am local time, the attack targeted network equipment connecting internal and external systems. This led to both domestic and international flight delays, with the airline’s app and baggage handling systems also affected. At least 24 domestic flights were delayed by more than 30 minutes.

Whilst the threat was eliminated within a few hours, JAL had to temporarily shut down the affected router and suspend ticket sales for same-day flights, resulting in considerable chaos and inconvenience to travelers. The airline later confirmed that the disruption resulted from a Distributed Denial-of-Service (DDoS) attack — its server was flooded with internet traffic to prevent users from accessing connected online services.

As airport, airline, air navigation and other travel and transport systems embrace digital transformation, including cloud migration, Internet of Things (IoT) integration and AI-driven automation, their attack surface has expanded significantly. This makes the sector an attractive target for cybercriminals, nation-state actors and hacktivists.

In July last year, an enormous IT outage linked to a faulty CrowdStrike update disrupted airlines globally, grounding over 10,000 flights and highlighting the industry’s reliance on interconnected digital systems. Though not a cyberattack, it had huge implications for airport systems and flights worldwide.

In June, Indonesia faced one of its worst cyberattacks, with more than 40 government agencies impacted and operations disrupted at major airports.

In 2018, Hong Kong’s flag carrier, Cathay Pacific Airways, admitted to a data breach involving the extensive personal data of some 9.4 million customers. Passengers’ personal information was exposed, including passport details such as nationality and date of birth, phone numbers, credit card information, identity card numbers and even historical travel information.

In another ransomware attack last year, operations at Japan’s largest and busiest terminal port in the city of Nagoya were paralysed, leaving it unable to load and unload containers for three days. Located just 7 km south of the terminal is Chubu International Airport, an air gateway that operates in coordination with the sea port. The attack on the Nagoya Port Unified Terminal System (NUTS), critical infrastructure handling 10 percent of Japan’s trade, highlights the significant ripple effects such incidents can have on essential services and supply chains, not just in Japan but across the global economy.

Skift, an online source for travel news, highlighted Imperva’s 2024 Bad Bot Report, which found that the travel industry suffered the second-highest volume of account takeover attempts in 2023. Around 11% of all cyberattacks targeted the sector, and Cornelis Jan G, a Senior Cyber Threat and OSINT Analyst from the Netherlands, says the aviation industry can expect to face an escalation in cyber threats in the next 12 to 24 months.

“State-sponsored groups will continue to target aviation for strategic intelligence and economic espionage, while cybercrime syndicates will increase their focus on ransomware and supply chain attacks,” he wrote in an article (Reference Item 9). He believes the industry will benefit from increased investment in AI-driven threat detection technologies and a focus on zero-trust architecture, which limits lateral movement within networks. Commenting to Infosecurity Magazine about the Nagoya cyberattack, Callie Guenther, a cyber-threat research senior manager at Critical Start, said organisations need to stay informed about the latest ransomware trends, leverage threat intelligence sources to understand the evolving tactics, techniques and procedures of ransomware operators, and adjust their security strategies accordingly.

For the successful implementation of cyber security in the aviation industry, AI- and tech-focused law firms play a vital role. They provide essential, tailored legal services for navigating the complexities of AI integration.

Boeing, for instance, relies on its legal team to ensure compliance with strict Federal Aviation Administration (FAA) regulations and safety standards. United Airlines engages legal experts to establish guidelines for its AI applications in customer service, to prevent bias in AI algorithms and to ensure fair customer interactions. They also consult on transparency measures to let customers know how their data is used. Delta Air Lines seeks risk management advice for AI predictive maintenance to mitigate potential liability issues related to operational failures.

Airbus engages legal services to negotiate contracts with its software vendors. These contracts are necessary to define the scope of work, data ownership and liability for AI-driven analytics. This is essential for the interests of both the aircraft company and the vendor, and to ensure compliance with aviation regulations.



CYBER THREATS UNMASKED:

MALAYSIA'S LEGAL SAFEGUARDS

brought to you by Suppiah & Partners

The cybersecurity landscape continues to evolve with various emerging threats, such as AI-driven cyberattacks and deepfake scams that leverage advanced technologies for malicious purposes.

Organisations must remain vigilant against these evolving threats while adhering to local regulations that govern cybersecurity practices in Malaysia.

DDOS ATTACK

DESCRIPTION

A Distributed Denial-of-Service (DDoS) attack aims to disrupt normal traffic by overwhelming a web property with a flood of requests from multiple compromised devices (a botnet).

CHARACTERISTICS

  • Utilizes multiple compromised devices (bots).
  • Targets network bandwidth or application resources.
  • Does not require access to internal systems.

OPERATIONAL / BUSINESS IMPACT

  • Service outages.
  • Loss of revenue.
  • Damage to reputation.

PREVENTIVE MEASURES / RESPONSES

  • Use of DDoS mitigation services.
  • Traffic filtering and rate limiting.
  • Regular system updates.
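
Rate limiting, one of the measures above, is commonly implemented as a token bucket: each client earns request "tokens" at a fixed rate and is refused once the budget runs out. A minimal sketch — the `TokenBucket` class and its parameters are illustrative, not any particular product's API:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second per client, with bursts
    up to `capacity`. Excess requests are rejected (HTTP 429 in a real
    web service)."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # 10 -- the burst capacity; the rest are refused
```

A real deployment keeps one bucket per source IP (or per API key), which is what blunts a single bot flooding the service while legitimate clients stay within budget.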

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Governed by the Cyber Security Act 2024, which mandates compliance for NCII sectors.
  • Non-compliance can lead to fines up to 500,000 ringgit or imprisonment for up to ten years.

THE HOOLIGAN

Like a hooligan, a DDoS attacker causes chaos and disruption, overwhelming systems and services with no intention of directly stealing but instead creating noise and destruction.

RANSOMWARE ATTACK

DESCRIPTION

Ransomware is malicious software that encrypts files and systems, rendering them inaccessible until a ransom is paid.

CHARACTERISTICS

  • Encrypts data and demands payment for decryption.
  • Requires access to internal systems, often via phishing.
  • Typically demands payment in cryptocurrency.

OPERATIONAL / BUSINESS IMPACT

  • Data loss.
  • Operational downtime.
  • Significant financial costs for recovery and ransom payment.

PREVENTIVE MEASURES / RESPONSES

  • Regular backups and disaster recovery plans.
  • Employee training on phishing.
  • Endpoint protection solutions.
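
Backups only help against ransomware if older copies survive the attack, so copies should never be overwritten in place. A minimal sketch with throwaway files — the `backup` helper and file names are illustrative:

```python
import shutil, tempfile, time
from pathlib import Path

def backup(src: Path, backup_dir: Path) -> Path:
    """Copy `src` into `backup_dir` under a timestamped name, so earlier
    copies are never overwritten -- a prerequisite for recovering files
    that ransomware later encrypts."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves timestamps and metadata
    return dest

# Demo with temporary files standing in for business records.
tmp = Path(tempfile.mkdtemp())
original = tmp / "records.csv"
original.write_text("id,amount\n1,100\n")
copy = backup(original, tmp / "backups")
print(copy.read_text() == original.read_text())  # True
```

In practice the backup directory would live offline or on storage the production network cannot write to, and restores would be rehearsed as part of the disaster recovery plan.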

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Subject to the Cyber Security Act 2024; organizations must report cybersecurity incidents within six hours.
  • Penalties for failing to report can include fines up to 500,000 ringgit or imprisonment for up to ten years.
  • Subject to the Computer Crimes Act 1997, penalties (fines, imprisonment) could apply for any unauthorised modification of the contents of any computer.

THE KIDNAPPER

Encrypting critical data and demanding ransom mirrors a kidnapper holding a victim hostage for financial gain.

RANSOM DDOS (RDDOS) ATTACK

DESCRIPTION

A Ransom DDoS attack threatens to launch a DDoS attack unless a ransom is paid, without encrypting any data.

CHARACTERISTICS

  • Threatens service disruption rather than data encryption.
  • May follow an actual DDoS attack or be a threat.
  • Payment often requested in untraceable forms like Bitcoin.

OPERATIONAL / BUSINESS IMPACT

  • Service disruption without prior notice.
  • Potential financial losses from ransom payments.

PREVENTIVE MEASURES / RESPONSES

  • Implementing robust network security measures.
  • Monitoring traffic patterns for anomalies.
  • Having an incident response plan in place.
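
Monitoring traffic patterns for anomalies can begin with a simple deviation test against a recent baseline. A sketch with hypothetical requests-per-minute figures — real monitoring stacks use far more sophisticated detectors:

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates from the recent baseline by more
    than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a flat baseline
    return abs(current - mean) / stdev > threshold

# Hypothetical requests-per-minute baseline vs. a sudden flood.
baseline = [980, 1010, 995, 1005, 990, 1020]
print(is_anomalous(baseline, 1015))   # False: within normal variation
print(is_anomalous(baseline, 25000))  # True: consistent with DDoS traffic
```

Catching the spike early matters for RDDoS in particular, since the extortion demand often arrives only after a short demonstration attack.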

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Governed by the Cyber Security Act 2024; compliance with incident reporting is mandatory.
  • Legal repercussions for non-compliance include fines and imprisonment.

THE EXTORTIONIST

The RDDoS attacker threatens service disruption unless a ransom is paid, akin to an extortionist intimidating victims without necessarily carrying out their threat.

PHISHING

DESCRIPTION

Phishing involves tricking individuals into providing sensitive information by masquerading as a trustworthy entity.

CHARACTERISTICS

  • Often conducted via email or instant messaging.
  • Uses deceptive links or attachments.
  • Targets personal and financial information.

OPERATIONAL / BUSINESS IMPACT

  • Financial loss.
  • Identity theft.
  • Loss of trust in digital communications.

PREVENTIVE MEASURES / RESPONSES

  • User education on recognizing phishing attempts.
  • Implementation of email filtering technologies.
  • Multi-factor authentication (MFA).
  • Regular software updates.
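
One building block of email filtering is checking where a link actually points, since phishing often relies on look-alike hosts. A simplified sketch — the allowlist and the `looks_suspicious` helper are hypothetical illustrations, not a complete filter:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization trusts.
TRUSTED_DOMAINS = {"maybank2u.com.my", "cimbclicks.com.my"}

def looks_suspicious(url: str) -> bool:
    """Flag links whose host is not an exact match or subdomain of a
    trusted domain. Note the classic trick below: the trusted name is
    embedded inside an attacker-controlled host."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_suspicious("https://www.maybank2u.com.my/login"))       # False
print(looks_suspicious("https://maybank2u.com.my.evil.tld/login"))  # True
```

Matching on the registered domain (not on substrings anywhere in the URL) is the essential point; substring checks would wave the second link through.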

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Governed by the Personal Data Protection Act (PDPA) 2010, which requires organizations to protect personal data. Non-compliance can lead to fines up to RM300,000.
  • Subject to Section 17(3) of the Electronic Commerce Act 2006.

THE CON ARTIST

Phishing attackers rely on deception and impersonation to trick victims into revealing sensitive information, much like a skilled con artist manipulates trust to defraud.

SQL INJECTION

DESCRIPTION

SQL Injection involves inserting malicious SQL queries into input fields to manipulate databases.

CHARACTERISTICS

  • Targets web applications with database backends.
  • Can extract, modify, or delete data.
  • Often due to improper input validation.

OPERATIONAL / BUSINESS IMPACT

  • Data breaches.
  • Loss of sensitive information.
  • Potential legal liabilities.

PREVENTIVE MEASURES / RESPONSES

  • Use of prepared statements and parameterized queries.
  • Regular security testing and code reviews.
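
The difference between vulnerable and safe query construction is easy to demonstrate with Python's built-in sqlite3 module (the table and payload here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# UNSAFE: string interpolation lets the payload rewrite the query logic.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: the parameter is bound as data and never parsed as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # [('admin',)] -- the name filter was bypassed
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

The same placeholder discipline applies whatever the database driver; the placeholder syntax (`?`, `%s`, named parameters) varies, but the principle of keeping data out of the query string does not.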

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Subject to the Computer Crimes Act 1997, which criminalizes unauthorized access and data manipulation. Penalties include fines and imprisonment.

THE SAFECRACKER

Exploiting vulnerabilities in databases to extract, modify, or delete data is akin to a safecracker breaking into a vault to steal valuables.

MAN-IN-THE-MIDDLE (MITM)

DESCRIPTION

MITM attacks involve intercepting communication between two parties without their knowledge.

CHARACTERISTICS

  • Can occur over unsecured networks (e.g., public Wi-Fi).
  • Often uses spoofing techniques.

OPERATIONAL / BUSINESS IMPACT

  • Eavesdropping on sensitive data.
  • Data manipulation.

PREVENTIVE MEASURES / RESPONSES

  • Use of encryption protocols (e.g., HTTPS).
  • VPN usage on public networks.
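
On the client side, the encryption measures above come down to insisting on verified TLS: certificate validation plus hostname checking are what stop an interceptor presenting a forged certificate. In Python's standard ssl module, for example:

```python
import ssl

# A strict TLS context: certificates must validate against trusted CAs
# and the certificate's name must match the host we intended to reach.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The context can then be passed to, e.g., `http.client.HTTPSConnection(host, context=ctx)`. Disabling either check (a common "fix" for certificate errors in development code) is precisely what reopens the door to a man-in-the-middle.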

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Covered under the Computer Crimes Act 1997; unauthorized interception of communications is illegal. Penalties can include fines and imprisonment.

THE SPY

Intercepting communication and manipulating it without the parties’ knowledge resembles a spy or eavesdropper gathering intelligence secretly.

MALWARE

DESCRIPTION

Malware refers to malicious software designed to harm or exploit any programmable device or network.

CHARACTERISTICS

  • Includes viruses, worms, trojans, ransomware, etc.
  • Can steal data or damage systems.

OPERATIONAL / BUSINESS IMPACT

  • Data loss or corruption.
  • System downtime.

PREVENTIVE MEASURES / RESPONSES

  • Antivirus software deployment.
  • Regular updates and patches.

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • The Cyber Security Act 2024 includes provisions against malware distribution; violators may face penalties including fines and imprisonment.

THE SABOTEUR

Malware acts like a saboteur, infiltrating systems and causing damage, stealing information, or corrupting operations from within.

ZERO-DAY EXPLOIT

DESCRIPTION

A zero-day exploit takes advantage of a previously unknown vulnerability before it is patched by developers.

CHARACTERISTICS

  • Highly effective as there are no defenses available at the time of attack.

OPERATIONAL / BUSINESS IMPACT

  • Significant risk as exploits can lead to unauthorized access or data breaches.

PREVENTIVE MEASURES / RESPONSES

  • Timely software updates and patch management practices.
  • Use of firewalls.
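
Patch management starts with knowing which hosts still run versions older than the first fixed release once an advisory lands. A sketch against a hypothetical inventory — the host names, versions and `FIXED_IN` value are invented, and the parser assumes simple dotted numeric versions:

```python
def parse_version(v: str) -> tuple:
    # Simplified: assumes purely numeric dotted versions like "2.4.51".
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed: str, patched: str) -> bool:
    """True if the installed version predates the first patched release
    named in a vulnerability advisory."""
    return parse_version(installed) < parse_version(patched)

# Hypothetical server inventory checked against an advisory.
inventory = {"web-01": "2.4.51", "web-02": "2.4.58", "db-01": "2.4.49"}
FIXED_IN = "2.4.52"
outdated = [h for h, v in inventory.items() if needs_patch(v, FIXED_IN)]
print(outdated)  # ['web-01', 'db-01']
```

By definition a zero-day has no patch on day zero, so this check only closes the window after a fix ships; until then, firewalls and compensating controls carry the load.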

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Subject to the Computer Crimes Act 1997; exploitation of vulnerabilities can lead to legal consequences including fines and imprisonment.

THE OPPORTUNIST

Exploiting unknown vulnerabilities before they are patched mirrors an opportunist who strikes when their target is unprepared.

SOCIAL ENGINEERING ATTACK

DESCRIPTION

Social engineering involves manipulating individuals into divulging confidential information through deception.

CHARACTERISTICS

  • Relies on psychological manipulation rather than technical skills.

OPERATIONAL / BUSINESS IMPACT

  • Compromised sensitive information.
  • Financial loss.

PREVENTIVE MEASURES / RESPONSES

  • User awareness training on social engineering tactics.
  • Verification processes for sensitive requests.

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Covered under various laws including the PDPA; organizations must safeguard personal data against such tactics. Violations may result in legal action and fines.

THE MASTER MANIPULATOR

Using psychological tricks to gain sensitive information mimics a manipulator exploiting trust and emotions for their gain.

SUPPLY CHAIN ATTACK

DESCRIPTION

Supply chain attacks target vulnerabilities within third-party vendors or partners to compromise an organization indirectly.

CHARACTERISTICS

  • Exploits trust relationships between organizations.
  • Can affect multiple entities simultaneously.

OPERATIONAL / BUSINESS IMPACT

  • Data breaches.
  • Operational disruptions.
  • Financial losses.

PREVENTIVE MEASURES / RESPONSES

  • Thorough vetting of suppliers.
  • Continuous monitoring of third-party security practices.
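
One concrete vetting step is verifying that a downloaded vendor artifact matches the checksum the vendor publishes out-of-band. A sketch using Python's hashlib — the file and digests here are throwaway stand-ins for a real vendor release:

```python
import hashlib, tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, published_digest: str) -> bool:
    """Compare a downloaded artifact against the vendor's published
    digest; a mismatch suggests tampering in transit or at the source."""
    return sha256_of(path) == published_digest.lower()

# Demo with a throwaway file standing in for a vendor download.
artifact = Path(tempfile.mkdtemp()) / "vendor-update.bin"
artifact.write_bytes(b"release 1.2.3")
good_digest = sha256_of(artifact)
print(verify_artifact(artifact, good_digest))     # True
artifact.write_bytes(b"release 1.2.3 + implant")  # simulated tampering
print(verify_artifact(artifact, good_digest))     # False
```

Checksums verify integrity, not origin; for origin, cryptographic signatures over the release (and a vendor that protects its signing keys) are the stronger control.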

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Subject to the Cyber Security Act 2024; organizations must ensure third-party compliance with cybersecurity standards, with penalties for non-compliance.

THE SABOTAGE SPECIALIST

Targeting trusted suppliers or partners to indirectly harm an organization is similar to a specialist who infiltrates indirectly to cause systemic harm.

AI-DRIVEN CYBERATTACKS

DESCRIPTION

Cybercriminals use AI tools to automate attacks, create personalized phishing emails, and adapt tactics in real-time.

CHARACTERISTICS

  • Highly sophisticated attacks that evade traditional detection methods.

OPERATIONAL / BUSINESS IMPACT

  • Increased difficulty in detecting threats.
  • Potentially higher success rates for attackers.
  • The rapid pace of attacks makes it difficult to respond effectively.

PREVENTIVE MEASURES / RESPONSES

  • Invest in advanced AI-based detection tools.
  • Regularly update security protocols.

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • No specific laws yet; however, general cybersecurity laws apply as AI-driven attacks fall under existing cybercrime regulations.

THE HIGH-TECH FRAUDSTER

Leveraging AI for personalized phishing, automation, and real-time adaptability mirrors a high-tech fraudster using advanced tools to outsmart traditional defenses.

DEEPFAKE SCAMS

DESCRIPTION

Deepfake technology creates realistic audio or video impersonations used in scams or social engineering attacks.

CHARACTERISTICS

  • Can convincingly impersonate trusted individuals.
  • Exploits trust within organizations.

OPERATIONAL / BUSINESS IMPACT

  • Financial fraud.
  • Compromised sensitive information.

PREVENTIVE MEASURES / RESPONSES

  • Employee training on recognizing deepfake content.
  • Verification processes for unusual requests.

LEGAL PROTECTIONS / CONSIDERATIONS IN MALAYSIA

  • Not specifically regulated; falls under general fraud laws and PDPA if personal data is involved.
  • Subject to Section 211(1) of the Communications and Multimedia Act 1998, penalties could apply for content which is indecent, obscene, false, menacing, or offensive in character, with intent to annoy, abuse, threaten or harass any person.

THE IMPERSONATOR

Creating realistic fake identities to deceive others resembles an impersonator or forger who mimics others for fraudulent purposes.

