[Feature Article] The Star Newspaper: AI, Tenders, and the Trust Deficit

Published by The Star on 26 Sep 2025

by Thulasy Suppiah, Managing Partner

Around the world, the conversation about Artificial Intelligence in public procurement is dominated by the promise of efficiency. The focus is on streamlining processes, automating tasks, and achieving significant cost savings. Studies, such as a recent one by Boston Consulting Group, project remarkable outcomes like up to 15% in savings and a significant reduction in human workload. Yet, in our Malaysian context, to focus solely on these benefits would be to miss a far more critical opportunity: leveraging AI as a frontline tool in the battle against corruption.

The timing could not be more urgent. The recent MACC revelation that Malaysia lost RM277 billion over six years, much of it through collusion in public tenders, is a stark reminder of the deep-seated challenge we face. As we grapple with this reality, the small nation of Albania has embarked on a controversial experiment. Faced with its own entrenched corruption, its government has appointed an AI digital assistant to oversee its entire public procurement process, hoping to create a system free of human bias and graft—a move now facing intense scrutiny from technical and legal experts.

The potential benefits of deploying such technology in Malaysia are immense. Imagine an AI system as an incorruptible digital auditor, capable of analyzing thousands of bids simultaneously. It could flag suspicious patterns invisible to the human eye—interconnected companies winning contracts repeatedly or bids that are consistently just below the threshold for extra scrutiny. By ensuring every decision is data-driven and transparent, we could theoretically restore fairness, save billions in public funds, and begin to rebuild the deep deficit of public trust.
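As an illustration only, the kind of rule-based screening described above can be sketched in a few lines of Python. The scrutiny threshold, "just below" margin, repeat-win limit and company names are all invented for this example:

```python
from collections import Counter

# Hypothetical values, for illustration only.
SCRUTINY_THRESHOLD = 500_000   # tenders at or above this trigger extra review
NEAR_MARGIN = 0.05             # "just below" means within 5% under the threshold
REPEAT_WIN_LIMIT = 3           # flag companies winning this many contracts

def flag_suspicious(awards):
    """awards: list of (company, winning_bid) pairs; returns flags for human review."""
    flags = []
    wins = Counter(company for company, _ in awards)
    for company, count in wins.items():
        if count >= REPEAT_WIN_LIMIT:
            flags.append((company, f"won {count} contracts"))
    for company, amount in awards:
        if SCRUTINY_THRESHOLD * (1 - NEAR_MARGIN) <= amount < SCRUTINY_THRESHOLD:
            flags.append((company, f"bid of {amount} sits just below the threshold"))
    return flags

awards = [("Alpha Sdn Bhd", 495_000), ("Alpha Sdn Bhd", 480_000),
          ("Alpha Sdn Bhd", 310_000), ("Beta Sdn Bhd", 620_000)]
for company, reason in flag_suspicious(awards):
    print(company, "-", reason)
```

A real system would learn patterns from data rather than rely on fixed rules, but even this toy version flags the repeat winner and the bids clustered just under the review threshold.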

However, recent developments show we must proceed with extreme caution. Experts are now questioning the entire premise of an “incorruptible” AI, pointing out that any system is only as good as the data it is fed. As one political scientist warned, if a corrupt system provides manipulated data, the AI will merely “legitimise old corruption with new software.” This also raises a critical question of accountability—an issue so serious it is being challenged in Albania’s Constitutional Court. If a machine makes a flawed decision, who is responsible?

The most prudent path for Malaysia, therefore, is likely not the appointment of a full “AI minister.” Instead, we should explore a more pragmatic, hybrid model. Let us envision AI not as a replacement for human decision-makers, but as a powerful, mandatory tool to support them. Our MACC, government auditors, and procurement boards could be equipped with AI systems designed to act as a first line of defense. This “digital watchdog” could flag high-risk tenders for stringent human review, catching cases that might otherwise be missed due to simple human oversight or inherent bias. Furthermore, its data-driven recommendations would serve as objective evidence of impartiality, making it much harder for legitimate cases to be dismissed due to personal or political agendas.

The unfolding experiment in Albania, with all its emerging challenges, has opened a vital, global conversation. For a nation like ours, which has lost so much to this long-standing problem, ignoring the potential of technology to enforce integrity is no longer an option. It is time to seriously innovate our way towards better governance.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.

Key Trends in Medicine: AI Powered Healthcare Innovations

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction

A shortage of 11 million healthcare workers is expected by 2030, the World Economic Forum reports, but it is hopeful that advances made by artificial intelligence (AI) in healthcare will help bridge that gap. With its ability to ease tasks, summarise large data sets, save time and achieve higher accuracy than humans, it is indeed a wonder that adoption of AI by the healthcare sector remained “below average” for so long. However, as AI gets smarter and learns better, more and more spaces in healthcare are bowing to automation. Here are some areas in healthcare that are benefitting from the latest AI and deep learning (DL) applications.

Precision Diagnosis

For strokes caused by a blood clot, time is of the essence. Doctors need to know the time of onset to determine the right treatment.


Researchers from Imperial College London, the University of Edinburgh, and Technical University of Munich have enhanced stroke timing estimation using AI. They trained the algorithm they developed on a dataset of 800 brain scans with known stroke times, allowing the model to independently identify affected regions in CT scans and estimate stroke timing.


The team then tested the algorithm on data from almost 2,000 other patients. The software proved to be twice as accurate as using a standard visual method. The algorithm also excelled in estimating the “biological age” of brain damage, indicating how much the damage has progressed and its potential reversibility.


Dr. Paul Bentley of Imperial College London, who led the study, said the accuracy of this data will help doctors make emergency decisions and administer the best treatment to stroke patients.

Higher Accuracy

Healthcare powered by data and smart automation is also helping to reduce misdiagnosis.
Missed fractures are among the most common mistakes made at accident and emergency (A&E) units in the UK: as many as 10 per cent of fracture cases are either overlooked or diagnosed late by medical professionals.

This could lead to further injury or harm to the patient, worsening their condition, delaying treatment, and making it harder for hospitals to quickly treat and turnover patients.
The National Health Service (NHS) in the UK has now been given the green light by the National Institute for Health and Care Excellence (Nice) to use AI as a way of improving fracture detection when examining X-rays.
Clinical evidence suggests that using AI may improve detection in scans, compared with a medical professional reviewing on their own, “without increasing the risk of incorrect diagnoses”, Nice reportedly told The Guardian.

Nice says the technology is safe, reliable and could reduce the need for follow-up appointments.

AI-powered Assistance

Imagine if you could avoid long hours of waiting in crowded rooms just to have your healthcare questions answered by a doctor. How helpful would it be to minimise the number of times you had to pay ever-increasing clinical consultation fees?

AI virtual assistants are the saviour that overworked clinicians and hospital staff, as well as anxious patients, have been waiting for. They are AI-powered apps that chat with patients, clinicians, and staff by voice or text.

Digital assistants speed up triage, answer patient questions, schedule appointments, and automate repetitive tasks – work that traditionally required many hands and great effort. They can even help explain lab results. This frees staff to focus on care, cuts down waiting times, and keeps costs in check.

Virtual assistants can appear as chatbots on hospital websites, voice hubs at nursing stations, or prompts on tablets in waiting rooms. In an AI-powered chatbot, a patient with an inflamed toe might type in their symptoms, and the assistant flags any danger signs (like a high fever) before suggesting home care or a quick clinic visit. On the admin side, digital assistants sort schedules, handle billing questions, and coordinate referrals.

That the global AI virtual assistant market in healthcare reached USD677.93 million (RM2,869 million) in 2023 and is estimated to hit USD9,295.63 million (RM39,339.11 million) by 2030 is testament to the need and demand for these tools.

Machine Learning Applications

For many chronic diseases, by the time symptoms present and the individual goes to the doctor because of an ailment or visible signs, it is often too late.

A new AI machine learning (ML) model can detect the presence of certain diseases before the patient is even aware of any symptoms, according to its maker AstraZeneca.

Using medical data from 500,000 people who are part of a UK health data repository, the model could predict, with high confidence, a disease diagnosis many years later.

Slavé Petrovski, who led the research, told Sky News: “We can pick up signatures in an individual that are highly predictive of developing diseases like Alzheimer’s, chronic obstructive pulmonary disease, kidney disease and many others.”

Another example where machine learning has made great strides is a technology developed by IBM Watson Health and Medtronic to continually analyse how an individual’s glucose level responds to their food intake, insulin dosages, daily routines, and other factors, such as information provided by the app user.

For example, are certain foods worsening the patient’s glucose control? Are there particular days or times where a person’s glucose goes high or low? The Sugar.IQ diabetes management application (App) leverages AI and analytic technologies to help people with diabetes uncover patterns that affect their glucose levels. This allows them to make small adjustments throughout the day to help stay on track.

Sugar.IQ provides information that shows how lifestyle choices, medications, and multiple daily injections impact diabetes management and the time spent with glucose in the target range. It provides individualised guidance for understanding and managing daily diabetes decisions, so that people on multiple daily insulin injections have more freedom to enjoy life.

Idiopathic Pulmonary Fibrosis (IPF) is a severe, chronic lung disease that progressively impairs lung function. It affects approximately five million people worldwide with a median survival of only three to four years. Available treatments can only slow its progression, and are unable to halt or reverse the disease.

AI significantly accelerated the drug discovery process for IPF and reduced the timeline from target identification to preclinical candidate selection to just 18 months – a major advancement in the efficiency of pharmaceutical research.

Insilico Medicine used AI-driven algorithms to design Rentosertib to treat IPF. It is the first AI-designed drug – where both the biological target and the therapeutic compound were discovered using generative AI.

Insilico Medicine is now engaging with global regulatory authorities to proceed with further trials aimed to evaluate Rentosertib’s efficacy and expedite its path to regulatory approval. If successful, Rentosertib could become the first AI-discovered therapy to reach patients, potentially transforming the treatment landscape for IPF.

AI is transforming drug discovery, delivery and administration. AI-designed drugs show 80-90 per cent success rates in Phase I trials compared to 40-65 per cent for traditional drugs. AI-based tools such as ML and DL reduce development timelines from more than 10 years to potentially 3-6 years and cut costs by up to 70 per cent through better compound selection.

Assisting in Surgical and Clinical Procedures

It may be too soon to speak of robots performing all the procedures in a surgery, but in operating theatres, AI and robotics are already assisting surgeons to handle surgical instruments, enhance precision, reduce invasiveness, and improve patient recovery.

The emergence of deep neural networks, combined with modern computational power, has produced reliable automation of certain tasks in medical imaging, including time-consuming and tedious workflows such as organ segmentation. Segmentation produces measurements and automatic extraction of quantitative features that would otherwise be impractical to perform in everyday clinical practice.

In aortic and vascular surgery clinics, for instance, challenges existed during routine clinical follow-up for abdominal aortic aneurysms (AAAs). Longitudinal comparison of diameter measurements across consecutive computed tomography angiography (CTA) exams was cumbersome. It required recalling multiple prior exams from the hospital’s picture archiving and communication system, measuring them, and comparing the measurements.

Augmented radiology for vascular aneurysm (ARVA) was designed to include automatic fetching of prior CTAs for separate analysis and automatic longitudinal comparison of each aortic segment. The use of cloud-based computing services enables processing of the multiple CTA data sets and the secure return of the report back to the hospital network within minutes. In the hospital, these reports are then automatically identified and placed into the patient’s hospital file or in any review workstation. This saves substantial time in everyday aortic clinic processes.

Early detection of epidemics and their spread

AI and ML technologies can also forecast the onset of certain epidemics and track their global distribution using historical data that is available online, satellite data, current social media posts, and other sources. ProMED-mail, an online reporting tool that keeps track of epidemic reports from around the world, is perhaps the best example of a monitor that can help check an epidemic before it causes significant harm.

Operational Optimisation of Healthcare Systems

According to the National Library of Medicine, a typical nurse in the US devotes 25 per cent of their working hours to administrative and regulatory tasks. Technology can readily take over these tedious operations. Today, hospitals are using AI to predict peak times, improve bed management, and enhance staff scheduling for optimised resource allocation. For example, one hospital used AI-driven predictive models to adjust staffing based on patient volume, reducing wait times and improving patient throughput.

AI models are also being used in emergency departments to predict patient admission rates, reducing bottlenecks and improving care delivery. By forecasting the number of patients arriving at the ED, hospitals can optimise their staff allocation, reduce patient wait times, and provide faster care.
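A minimal sketch of this kind of arrival forecasting, using invented figures and a simple moving average in place of any hospital's actual model:

```python
# Forecast tomorrow's emergency-department arrivals from recent history,
# then convert the forecast into a staffing estimate. All figures invented.

def forecast_arrivals(history, window=7):
    """Predict the next day's arrivals as the mean of the last `window` days."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def nurses_needed(arrivals, patients_per_nurse=8):
    """Round up to whole nurses for the forecast patient load."""
    return -(-int(arrivals) // patients_per_nurse)  # ceiling division

daily_arrivals = [112, 98, 120, 130, 105, 117, 124]  # last seven days
forecast = forecast_arrivals(daily_arrivals)
print(f"forecast: {forecast:.0f} patients, nurses needed: {nurses_needed(forecast)}")
```

Production systems use far richer inputs (seasonality, weather, local events), but the principle is the same: forecast demand, then size the roster to meet it.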

It’s not tech vs. human

While AI is making great inroads in healthcare, the complete replacement of medical professionals in medicine is still a long way off. The need for human interaction in healthcare is likely to keep AI on the sidelines as a complement, rather than a substitute, for doctors.

The Medical Futurist put forward five fundamental reasons why AI won’t replace doctors – and never will.

  • Empathy – A doctor-patient relationship is built on empathy and trust; and listening and responding in a way that helps the patient feel understood. Very few people are likely to trust an algorithm with life-altering decisions. These are qualities that cannot be fully replicated by artificial intelligence.

  • Physicians have a non-linear working method to arrive at a diagnosis – no algorithm or robot can match the creativity and problem-solving skills required to arrive at a diagnosis.

  • Complex digital technologies require competent professionals – It is more worthwhile to programme AI with those repetitive, data-based tasks, and leave the complex analysis/decision to the complex human brain.

  • There will always be tasks robots and algorithms cannot perform – like the Heimlich maneuver.

  • It has never been tech vs. human – the goal has always been to use tech to help humans.

Ethical and Regulatory Considerations

Regulating AI in the healthcare sector is proving to be a complex and sensitive challenge. While the benefits of software as a medical device (SaMD) are great, patients still need protection from defective diagnosis, unacceptable use of personal data and bias built into algorithms.

The growing integration of AI and ML in drug development demands proactive management of ethical and regulatory challenges to ensure safe applications.

In response, regulatory bodies like the United States Food and Drug Administration and the European Medicines Agency are actively developing AI safety parameters and promoting diverse population validation, informed by detailed regulatory guidelines for robust, ethical AI technologies.

The FDA’s AI/ML SaMD Action Plan focuses on regulating software as a medical device:

  • Predetermined Change Control Plan (PCCP): Allows for modifications to AI/ML software over time, ensuring continuous monitoring and updates while maintaining safety and effectiveness. The basic idea is that as long as the AI continues to develop in the manner predicted by the manufacturer it will remain compliant. Only if it deviates from that path will it need re-authorization.

  • Good Machine Learning Practices (GMLP): Guidelines to evaluate and improve machine learning algorithms for medical devices.

  • Transparency: Efforts to ensure clear communication about AI-enabled devices to patients and users.

In the United Kingdom, the Regulatory Horizons Council, which provides expert advice to the UK government on technological innovation, published “The Regulation of AI as a Medical Device” in November 2022. This document considers the whole product lifecycle of AI-MDs and aims to increase the involvement of patients and the public, thereby improving the clarity of communication between regulators, manufacturers, and users.

The National Medical Products Administration (NMPA) of China, which provides regulatory oversight on medical products, published the “Technical Guideline on AI-aided Software” in June 2019. This guideline highlighted the characteristics of deep learning technology, controls for software data quality, valid algorithm generation, and methods to assess clinical risks.

Then in July 2021, the NMPA released the “Guidelines for the Classification and Definition of Artificial Intelligence-Based Software as a Medical Device”, which includes information on the classification and terminology of AI-MDs, the safety and effectiveness of AI algorithms, and whether AI-MDs provide assistance in decision making such as clinical diagnosis and the formulation of patient treatment plans.

Later, in 2022, the Centre for Medical Device Evaluation under the NMPA published the “Guidelines for Registration and Review of Artificial Intelligence-Based Medical Devices”. These guidelines provide standards for the quality management of software and cybersecurity of medical devices taking into consideration the entire product’s lifecycle.

Perhaps the European Union’s AI Act has provided the most stringent standards for regulating SaMDs.

Under the Act, AI systems such as those in AI/ML-enabled medical devices are classified as “high-risk”. This is the highest risk classification for permitted uses of AI and triggers a cascade of compliance requirements. Risk management is the focal point, and is intertwined with the EU MDR risk-management system to identify, evaluate, and mitigate the ‘reasonably foreseeable risks’ that high-risk AI systems can pose to health, safety, or fundamental rights such as privacy and data protection.

The EU AI Act’s extra-territorial reach is akin to that of the EU General Data Protection Regulation (GDPR), transcending European borders and impacting international AI system providers and deployers. It applies to ‘providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country’, as well as to providers and deployers established outside the EU where ‘the output produced by the system is used in the EU’.

Whether any of these regulatory frameworks will actually ensure public trust and compliance while still fostering innovation will depend very much on continuous monitoring and engagement with feedback from all stakeholders including scientists, doctors and patients.

Regulations should be robust yet allow for continuous improvement, to ensure they achieve their intended purpose.

[Feature Article] NST & The Star Newspaper: AI’s New Watchdog Role: A Necessary Evil or a Step Too Far?

Published by New Straits Times and The Star on 11 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent disclosure by OpenAI that it is scanning user conversations and reporting certain individuals to law enforcement is a watershed moment. This is not merely a single company’s policy update; it is the opening of a Pandora’s box of ethical, legal, and societal questions that will define our future relationship with artificial intelligence.

On the one hand, the impulse behind this move is tragically understandable. These powerful AI tools, for all their potential, have demonstrated a capacity to cause profound real-world harm. Consider the devastating case of Adam Raine, the teenager who died by suicide after his anxieties were reportedly validated and encouraged by ChatGPT. In the face of such genuine, actual harm, the argument for intervention by AI operators is compelling. A platform that can be used to plan violence cannot feign neutrality.

On the other hand, the solution now being pioneered by an industry leader is deeply unsettling. While OpenAI has clarified it will not report instances of self-harm, citing user privacy, the fundamental act of systematically scanning all private conversations to preemptively identify other threats sets a chilling, Orwellian precedent. It inches us perilously close to a world of pre-crime, where individuals are flagged not for their actions, but for their thoughts and words. This raises a fundamental question: where do we draw the line? Should a user who morbidly asks any AI “how to commit the perfect murder” be arrested and interrogated? If this becomes the industry standard, we risk crossing over into a genuine dystopia.

This move is made all the more problematic by the central contradiction it exposes. OpenAI justifies this immense privacy encroachment as a necessary safety measure, yet it simultaneously presents itself as a staunch defender of user privacy in its high-stakes legal battle with the New York Times. It cannot have it both ways. This reveals the untenable position of a company caught between the catastrophic consequences of its own technology and a heavy-handed response that flies in the face of its public promises—a dilemma that any AI developer adopting a similar watchdog role will inevitably face.

We are at a critical juncture. The danger of AI-facilitated harm is real, but so is the danger of ubiquitous, automated surveillance becoming the norm. This conversation, sparked by OpenAI, cannot remain confined to the tech industry and its regulators; it is now a matter for society at large. We urgently need a broad public debate to establish clear and transparent protocols for how such situations are handled by the entire industry, and how they are treated by law enforcement and the judiciary. Without them, we risk normalizing a future governed by algorithmic suspicion. This is a line that, once crossed, may be impossible to uncross.

[Feature Article] The Star Newspaper: Charting a Sustainable Course for Johor’s Data Centre Boom

Published by The Star on 9 Sep 2025

by Thulasy Suppiah, Managing Partner

The recent stop-work order issued to a data centre project in Iskandar Puteri marks an important inflection point for Johor. Rather than viewing it as a setback, we should see it as a natural consequence of success—a sign that Johor’s ambition to become a regional digital powerhouse is rapidly becoming a reality, and a prompt for us to thoughtfully consider the path ahead.

The state government’s efforts in attracting these high-value investments are commendable, and the scale of development is truly significant. With 13 data centres already operational and another 15 currently under construction in Johor, it is clear these facilities are a cornerstone of the Digital Johor agenda and the Johor-Singapore Special Economic Zone. They promise to create thousands of skilled jobs, spur technological innovation, and solidify Malaysia’s position on the global stage. This economic momentum is vital and should be nurtured.

However, this commendable success naturally brings with it new responsibilities. The concerns raised by the local community in Iskandar Puteri—from environmental disruption to late-night construction—highlight the critical need to create a symbiotic relationship between these large-scale developments and the communities they inhabit. The challenge, therefore, is not one of ambition, but of integration and balance.

In navigating this, we can learn from the diverse experiences of other nations. Ireland, for example, demonstrates the potential pitfalls when infrastructure development and energy planning do not keep pace with the industry’s rapid growth. Its data centres now place significant strain on the national power grid, raising public concerns about energy security and climate goals. On the other end of the spectrum, Amsterdam faced hard physical limits on its land and power grid, forcing a difficult choice to pause new development to prioritize other urban needs.

A more strategic benchmark might be Singapore. After its own moratorium, Singapore re-engaged the data centre market with a clear focus on quality over quantity. By implementing stringent energy efficiency standards, it has strategically positioned itself as a premium destination for best-in-class operators who are aligned with sustainability goals. This approach proves that strong environmental governance can be a powerful competitive advantage, attracting responsible, long-term investment.

For Johor and Malaysia, this moment presents an opportunity to architect a sustainable roadmap for our digital future. The goal should not be to slow down growth, but to steer it in a direction that is both economically prosperous and socially responsible. The government can lead the way by proactively engaging with the developers of all current and future projects, ensuring that clear guidelines for sustainable and community-centric development are understood and implemented from the outset.

By doing so, we can build confidence among both investors and the public. Let us use this opportunity to pioneer a balanced model for data centre development—one that harnesses their immense economic potential while safeguarding our environmental heritage and enhancing the well-being of our communities. This is how we can secure our position not just as a digital hub, but as a model for sustainable digital transformation.

You Are the Product: How Targeted Ads Became the Most Powerful Tool of Influence in the Digital Age

“It’s not just what you buy — it’s what you think, fear, and believe. And someone paid to shape it.”

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

Introduction: The Hidden Power of the Ad Box

Ads used to sell shoes. Now they sell narratives. The practice of highlighting the features and benefits of products and services to a mass audience has evolved. Enter the age of programmatic marketing, where big data, and particularly our data, has reshaped how and why advertisers target you and me. Today’s stories are curated to reach us based on our emotional vulnerabilities and individual interests. They reach us through our personal devices and social media platforms the moment we click on something online. These narratives can be overt or covert, but they are highly personalised, based on analyses of our personal demographics and online footprint, making today’s advertisements a precise and potent tool of influence or exploitation.

In Malaysia, numerous charlatans have used Artificial Intelligence (AI) and deepfakes to manipulate the image of Datuk Siti Nurhaliza, a local artist with a massive following, to market fraudulent investments. They also misused the brand identities of trusted online media portals (like The Star and Free Malaysia Today) to scam her followers. One fraudster was even able to imitate her voice and generate fake video calls to tug at the heartstrings of fans, inviting them to invest in the same platform as her.

While the use of big data, visual media and social media platforms to sell narratives has revolutionised branding, there is a dark side to how personal data is being used to psychologically tune and manipulate consumers’ vulnerabilities. On one side, organisations are under pressure to acquire increasingly detailed information about their consumers; on the other, ad fraudsters are stealing this information for their own unethical gain.

The New Advertising Industrial Complex

Unlike traditional marketing, programmatic advertising relies on real-time insights into consumers’ online behaviour and interests to automate precise advertisement space buying on a large scale. Using consumers’ personal information, advertisers are able to get the right brand in front of the right audience at the right time, within seconds. Such software, known as ad-tech (advertising technology) or a supply-side platform (SSP), can apparently access thousands upon thousands of publishers’ sites (publishers being the owners or managers of websites with ad space to sell) at once to sell advertising space to the highest bidder.

Here’s what’s happening at the blink of an eye, behind the scenes during each programmatic advertising auction:

Targeting

  • When I visit a website, the publisher’s platform puts the ad space up for grabs. At the same time, the ad-tech software leverages my activity data to match the most suitable ads.

Bidding

  • In milliseconds, the software automatically calculates and places real-time bidding (RTB) for that ad spot based on all the data-surveillance they have derived about me.

Ad Serving

  • The advertiser with the highest bid wins! Their ad instantly appears on my screen.

Optimisation

  • With every impression, advertisers gather performance data to optimise future bids and improve targeting.

All of the above happens within seconds. While advertisers were initially enchanted, the increased dominance of ad space by just a few ad-tech companies raised concerns. Alphabet (Google’s parent company), Amazon and Meta control more than half (55 per cent) of global advertising spend outside China this year, according to Warc’s latest Q2 2025 Global Ad Spend Forecast.
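The four auction steps above can be sketched as a toy script. The bidder names, interest data and bid values are all invented; real RTB systems run far richer models over the same flow:

```python
# Toy programmatic auction: targeting -> bidding -> ad serving.

def run_auction(user_profile, bidders):
    """Collect a bid from every advertiser and serve the highest bidder's ad."""
    bids = {name: strategy(user_profile) for name, strategy in bidders.items()}
    winner = max(bids, key=bids.get)  # the highest bid wins the slot
    return winner, bids[winner]

# Targeting: each advertiser values this user differently based on profile data.
bidders = {
    "TravelCo": lambda p: 1.20 if "travel" in p["interests"] else 0.10,
    "ShoeShop": lambda p: 0.80 if "running" in p["interests"] else 0.05,
}

user_profile = {"interests": ["travel", "photography"]}
winner, price = run_auction(user_profile, bidders)
print(f"{winner} wins the ad slot at {price}")  # ad serving
```

The optimisation step would then feed each impression's performance back into the bidders' strategies for the next auction.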

This over-dominance allows Big Tech companies to raise prices, control transparency and what we see online, and limit opportunities for ad-space bidders. But companies are fighting back. Ad buyers are now looking for SSPs or ad-tech companies that can benefit them in a positive way. Before they sign with a programmatic marketplace operator, they ask a critical question: how much access will their company have to quality ad inventory – and how much exposure will they have to the junk? SSPs are now under pressure to provide more transparency and accountability, all detailed through structured contracts.

Data Extraction as Default

Every single moment, apps, social media platforms, our devices and the websites we visit gather data about our online visits, how much time we spend there, and the type of device or browser we use. They save our preferences and personal information, note our location and what we’ve left in our online shopping cart, then show us personalised content based on all this data.

Our online activity is usually tracked with a cookie or pixel which identifies us even after we leave the site. Our activity can also be tracked across different internet-connected devices, such as our laptop and smartphone.

According to a 2022 study by cybersecurity company NordVPN, the average website has 48 trackers. Some sites sell this data to third parties (such as Google). The information collected is used to serve more targeted and intrusive ads, some of which follow us from website to website.

When a website we visit tracks us, that’s first-party tracking. When a website we visit lets another company track us, that’s third-party tracking.

Third-party tracking companies can track us across most websites visited. For example, if I visited a website about a country I wish to travel to, I might almost immediately see ads suggesting hotel accommodation options while visiting other websites.
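
A minimal sketch of how such cross-site tracking works, assuming a simplified browser cookie jar and an invented `ThirdPartyTracker` class (real trackers use pixels, redirects and device graphs, not this toy API):

```python
import uuid

class ThirdPartyTracker:
    """Invented stand-in for one tracking company embedded on many sites."""
    def __init__(self):
        self.profiles = {}  # cookie_id -> list of (site, page) visits

    def record_visit(self, cookies, site, page):
        # Reuse our cookie if the browser already has one, else set it.
        cookie_id = cookies.setdefault("tracker_id", str(uuid.uuid4()))
        self.profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id

tracker = ThirdPartyTracker()
browser_cookies = {}  # one browser's cookie jar, shared across sites

tracker.record_visit(browser_cookies, "travel-blog.example", "/bali")
tracker.record_visit(browser_cookies, "news-site.example", "/home")
# Both visits land in one profile: the tracker now knows this visitor
# reads about travel, so hotel ads can follow them to the news site.
```

The key point is that the two sites never share anything with each other; the shared third-party cookie does the linking.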

Tracking our online footprint has become the default setting, and our consent is often buried deep in fine print. In 2022, NordVPN found that around 30 per cent of third-party trackers belonged to Google, 11 per cent to Facebook, and 7 per cent to Adobe. As of 2025, Google still has the biggest share of trackers. Thankfully, several browsers are actively combating third-party cookies: Brave, Firefox and Safari have blocked them by default for years, making our online lives more private.

Brave is also the only browser that offers to randomise fingerprint information. Digital fingerprinting is a method of building a profile of me or you based on our system configuration. It can include information about our browser type and version, operating system, plug-ins, time zone, language, screen resolution, installed fonts, and other data. Even when third-party cookies are turned off, sites can still identify us through fingerprinting. This is a greater concern because fingerprinting cannot simply be switched off: even if we delete our cookies, we can be recognised through our digital fingerprint.
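
The idea can be illustrated with a toy fingerprint function: hashing a handful of stable attributes yields the same identifier on every visit, with no cookie involved. The attribute names and values below are illustrative only:

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Hash stable browser/system attributes into one short identifier.
    Nothing is stored on the device, yet the same setup yields the same ID."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_1 = {
    "user_agent": "Mozilla/5.0 ... Chrome/126",  # illustrative values
    "screen": "1920x1080",
    "timezone": "Asia/Kuala_Lumpur",
    "language": "en-MY",
    "fonts": "Arial,Calibri,Times",
}
visit_2 = dict(visit_1)  # same device later, cookies deleted in between
# Both visits hash to the identical fingerprint, so the site can still
# recognise the visitor without any cookie at all.
```

Brave’s defence of randomising fingerprint inputs works precisely because changing any one attribute changes the resulting hash.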

In 2024, Google announced that it would no longer phase out third-party cookies in Chrome, opting instead to let users make informed choices about their web browsing privacy. Overall, there is pressure on the tracking landscape to change, and hopefully this translates into safer online browsing for all.

Targeted Ads vs Targeted Harm

According to Forbes, 91 per cent of consumers are more likely to buy from brands that personalise their communications. So advertisers build their messaging around an audience’s demographics: who they are, what they like, where they are located and what they are most likely to purchase. There are clear benefits to this approach; it is effective to market products to those most likely to buy them.

For example, let’s say my dad has just retired and is keen to pick up diving. As he searches online to facilitate this new hobby, a retargeting campaign would suggest safety gear, resorts for the best diving experience, diving coaches or a local diving community – most of which turn out to be extremely helpful and provide value to my dad. He might also end up supporting a remote but extremely gifted maker of diving suits.

While targeted ads can be the smartest spend in marketing, they put consumers at risk when the targeting becomes predatory. Scammers can buy our personal data and use it for purposes far more devious than advertising campaigns.

  • Financial ads targeting the poor

In 2013, the US Senate Commerce Committee found that data brokers were targeting poor consumers by grouping them based on their financial vulnerability. Among terms used to categorise the poor into subsets were: “Zero Mobility”, “Burdened by Debt – Singles”, “Hard Times”, “Humble Beginnings”, “Very Elderly”, “Rural and Barely Making it”. This data was then used by unscrupulous parties to market risky financial products or illegal loans with high interest rates to those who could least afford them.

What began as personalisation becomes profiling — and often, exploitation.

While some data brokers prohibit customers from misusing personal information to sell debt-related products, there is a lack of industry oversight to enforce these contract terms.

  • Investment scams preying on individuals looking for high returns

In 2024, social media scams in Malaysia continued to be a significant issue. The Securities Commission Malaysia (SC) identified social media platforms such as Facebook and messaging apps like Telegram as primary channels for online investment scams. Victims were targeted with unlicensed products and services. In 2024 alone, the Royal Malaysia Police’s Commercial Crime Investigation Department recorded 35,368 online scam cases, resulting in RM1.6 billion in financial losses—accounting for 84.5 per cent of all commercial crimes reported during the year.

There has also been increased use of deepfake technology to impersonate influential figures such as Datuk Siti Nurhaliza, drawing fans into investment scams.

  • Discriminatory ads

A study released in 2019, entitled Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes, revealed how Facebook’s algorithm could skew the delivery of ads for employment and housing opportunities “along gender and racial lines”, in violation of antidiscrimination laws.

  • Predatory ads

Predatory programmatic advertising refers to the unethical or illegal use of automated ad buying and placement to exploit individuals’ vulnerabilities: for instance, weight-loss ads targeted at young users, or cosmetic-procedure ads targeted at women.

  • Filter bubbles

Every day, the content we see and engage with online is increasingly personalised to our interests, preferences and demographic information. Google, for instance, customises our search results based on our location or past search history. Facebook does the same for our News Feed, analysing which posts, friends and pages we interact with most to boost content it believes we will engage with. An example of content personalisation and targeted advertising taken to the extreme is the Cambridge Analytica (CA) scandal, in which millions of US-based voters were targeted with disinformation campaigns intended, to some extent, to influence the outcome of the 2016 US election.

When the Ad Becomes the Story

Ads now blur into content itself.

  • Social Media Influencers or Key Opinion Leaders

Indeed, the rise of content creation by social media influencers (SMIs) has transformed brand marketing. Influencers who generate attractive content, and who are themselves attractive, are highly sought after by brands for paid partnerships. In Malaysia, influencer marketing is particularly effective due to the country’s high social media usage (nearly 90 per cent of its total population of 31 million). According to Bernama, 75 per cent of Malaysians make purchases based on influencer recommendations.

One of the key strengths of Malaysian influencers is their ability to engage authentically with their followers. According to an article by Statista, Malaysian consumers, especially younger audiences, prefer influencers who present relatable and genuine content. This evolving landscape is reshaping brand partnerships, urging companies to focus on authenticity and meaningful interactions to resonate with their target demographics.

The influencer advertising market in Malaysia is projected to grow by 10.79 per cent from 2024 to 2028, reaching a market volume of USD102.30 million (RM431.8 million) in 2028.

Some social media influencers, while endorsing brands, also use their online platforms to promote good causes. Nandini Balakrishnan, a SAYS video producer, is known for promoting body positivity, while Deborah Henry (a Malaysian model, emcee and TV/podcast host) has been highlighting the plight of refugees for over 10 years. Through her influence, she co-founded Fugee.org, a non-profit that helps refugees living in Malaysia through education, advocacy and entrepreneurship.

Unfortunately, there are downsides to online influencers. A recent study by the University of Portsmouth examined the negative impacts some influencers have. It found that some SMIs endorse unhealthy or dangerous products such as diet pills, detox teas, and alcohol without full disclosure. Others spread misinformation, encourage unrealistic beauty standards, foster a comparison culture, promote deceptive consumption, and create privacy risks.

The study found that the use of filtered and curated images by SMIs added to body dissatisfaction, low self-esteem and harmful beauty practices. It also found that influencer-driven content fuelled lifestyle envy and social anxiety, leading to negative self-comparison and diminished wellbeing.

Dr Georgia Buckle, Research Fellow in the School of Accounting, Economics and Finance at the University of Portsmouth, said: “Social media influencers hold immense power over consumer decisions and cultural norms. While they provide entertainment, inspiration, and brand engagement, the unchecked influence of some SMIs can lead to serious ethical and psychological consequences. Our study highlights the urgency for both academic and industry stakeholders to address these challenges proactively.”

According to a study by Noémie Gelati and Jade Verplancke of Linköping University in Sweden, consumers identify and create links with influencers, driving them to follow influencers’ recommendations. This relationship affects young consumers more deeply due to their immaturity and limited understanding of marketing. The study noted that those aged around 19-24 are more prone to follow influencers: “Indeed, (young) followers tend to purchase what the persons they idealise use or wear… Clothing, make-up and even cosmetic surgery, followers aspire to look like their favourite influencers and the beauty ideal they diffuse.”

Finally, influencers themselves are often under immense pressure to produce captivating content that strikes a delicate balance between authenticity and market appeal. This demands considerable thought and skill, often leading to stress and burnout. The work also requires them to maintain a certain public image, adding strain on their mental well-being.

  • Brand memes

Corporate memes are another powerful tool for brands looking to connect with younger demographics in a more casual and relatable way. According to Twitter, tweets with images receive 150 per cent more retweets than text-only posts, while meme content specifically tends to generate 60 per cent higher engagement rates compared to standard branded content.

  • Algorithms prioritise ‘engaging’ content

Social media algorithms use engagement, relevance, and user behaviour to determine which posts appear in our feeds. High engagement signals that content is valuable, increasing its visibility. While these systems are designed to enhance the user’s experience and engagement, they often unintentionally create an echo chamber. Users who follow unethical influencers can end up seeing more unethical or misleading content. Some algorithms can amplify extremist propaganda and polarising narratives. These amplifications can lead to societal divisions, promote disinformation, and bolster the influence of extremist groups. Often these types of content use emotionally provocative or controversial material and by focusing on metrics such as “likes” and “shares”, algorithms create feedback loops that take users down a rabbit hole.
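
The feedback loop described above can be sketched as a toy ranking function. The engagement weights and posts are invented for illustration; real feed algorithms use far more signals:

```python
def rank_feed(posts, weights):
    """Order posts by a weighted engagement score, highest first."""
    def score(post):
        return sum(weights[k] * post[k] for k in weights)
    return sorted(posts, key=score, reverse=True)

# Invented weights: shares and comments count more than likes.
weights = {"likes": 1.0, "shares": 3.0, "comments": 2.0}

posts = [
    {"id": "calm_news",    "likes": 120, "shares": 5,  "comments": 10},
    {"id": "outrage_bait", "likes": 80,  "shares": 60, "comments": 90},
]

feed = rank_feed(posts, weights)
# The provocative post outranks the calmer one (score 440 vs 155);
# shown first, it earns still more engagement next round: the loop.
```

Nothing in this scoring function asks whether a post is true or healthy, only whether people react to it, which is exactly how the rabbit hole forms.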

AI: The Engine Behind the Curtain

AI is no longer just a backend efficiency tool — it is the central nervous system of modern advertising.

  • Machine learning determines which ad you see and when.
  • Reinforcement learning constantly tests variations to see what you click, skip, or share.
  • Generative AI personalises ad copy, images, and tone in real-time based on your digital behaviour.
  • Platforms use AI-driven predictive models to infer your mood, political leanings, spending habits — even when you’re most likely to be impulsive.
  • Instead of marketers carving out segments they think are best for an ad campaign, the AI discovers these optimal audiences automatically.
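
The “constantly tests variations” idea in the list above is often described as a bandit problem. Below is a toy epsilon-greedy sketch under invented click rates; real ad systems use much richer models:

```python
import random

def epsilon_greedy_ads(true_ctr, epsilon=0.1, rounds=10000, seed=0):
    """Epsilon-greedy ad selection: mostly show the variant with the best
    observed click-through rate (CTR); occasionally explore the others."""
    rng = random.Random(seed)
    shows = {v: 0 for v in true_ctr}
    clicks = {v: 0 for v in true_ctr}
    for _ in range(rounds):
        if rng.random() < epsilon:
            ad = rng.choice(list(true_ctr))  # explore a random variant
        else:                                # exploit best observed CTR
            ad = max(true_ctr, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)
        shows[ad] += 1
        clicks[ad] += rng.random() < true_ctr[ad]  # simulated click
    return shows

# Hypothetical true click rates the learner does not know in advance.
result = epsilon_greedy_ads({"ad_A": 0.02, "ad_B": 0.05})
# Over many rounds, most impressions shift to ad_B, the variant
# users actually click more, with no human re-targeting involved.
```

This is the essence of “automated persuasion”: the system discovers what works on you, variant by variant, without any marketer choosing the winner.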

Through AI advertising tools like Google’s Performance Max and Meta’s Advantage+, tech giants such as Google, Meta and LinkedIn remove much of the detailed work involved in manually matching a brand’s target persona, generating personalised ads with every click. So it isn’t just targeting. It’s automated persuasion at scale: invisible, relentless, and largely unregulated.

With nearly 3.4 billion people using Meta’s apps (Facebook, Instagram and WhatsApp) each day, the company has massive amounts of data on the human population.

According to MarketBeat, an Inc. 5000 financial media company, Meta’s Advantage+ Shopping saw rapid adoption out of the gate. In initial testing, Meta said Advantage+ users saw a 32 per cent increase in return on advertising spend (ROAS) compared with non-automated campaigns. By April 2023, nine months after its release, daily revenue from Advantage+ Shopping campaigns had grown by 600 per cent over the preceding six months.

By the third quarter of 2023, Advantage+ Shopping was generating USD10 billion (RM42.21 billion) in annual run-rate revenue, and by the fourth quarter of 2024 it had scaled past USD20 billion (RM84.42 billion), up 70 per cent from Q4 2023.

Meanwhile, Google has seen a 93 per cent adoption rate of Performance Max among retailers running Google shopping ads.

Who Regulates the Algorithm?

There is as yet no clear legal framework internationally, much less in Malaysia, to oversee how ads are targeted or how profiling works.

However, Alex C. Engler, a Fellow in Governance Studies at The Brookings Institution, says this does not mean regulators should sit idly by. Instead they should actively study algorithmic systems in their regulatory domain and evaluate them for compliance under existing legislation.

He notes that some regulatory agencies have started this work, including the U.S. Federal Trade Commission’s (FTC) Office of Technology, the Consumer Financial Protection Bureau (CFPB), new algorithmic regulators in the Netherlands and Spain, and online platform regulators such as the UK’s Office of Communications (Ofcom) and the European Centre for Algorithmic Transparency.

Engler further suggests that as oversight agencies gather information about algorithmic systems, their societal impact, harms, and legal compliance, they should also develop a broad AI regulatory toolbox for evaluating algorithmic systems, particularly those with greater risk of harm.

This toolbox, he says, should include means to expand algorithmic transparency requirements, perform algorithmic investigations and audits, develop regulatory AI sandboxes, and welcome complaints and whistle-blowers.

Malaysia: Reclaiming Digital Autonomy

Although Malaysia has the Personal Data Protection Act (PDPA) 2010, the PDPA does not explicitly define any minimum standard for consent. It does not regulate online privacy and has no provisions on e-marketing, cookies, or newer tracking and surveillance technologies such as geotagging. It also does not apply to personal data processed outside Malaysia. It is nevertheless considered best practice for organisations operating in Malaysia to obtain informed consent from users for the use of cookies on websites, especially where personal data is collected. Companies that do not provide a cookie consent mechanism run the risk of non-compliance with the PDPA.

Fortunately, the Personal Data Protection Commissioner (PDPC) is considering issuing a data protection guideline that covers digital marketing.

The PDPC can learn from the EU’s General Data Protection Regulation (GDPR), which regulates targeted advertising more stringently, mandating less intrusive advertising that uses less consumer data. It requires consumers to take positive action to give consent, for example by signing a form or clicking ‘I consent’ or ‘I agree’. The GDPR defines consent as “freely given, specific, informed, and unambiguous and given by a clear affirmative action”. Malaysia can definitely start by taking a leaf out of these EU rules.
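
In code terms, a GDPR-style consent test might look like the toy check below. The field names are illustrative, not drawn from any statute or real API:

```python
def is_valid_consent(record: dict, purpose: str) -> bool:
    """Consent counts only if it is specific, informed, and given by a
    clear affirmative action, with no pre-ticked boxes."""
    return (
        record.get("purpose") == purpose              # specific
        and record.get("informed") is True            # clear notice shown
        and record.get("action") == "clicked_agree"   # affirmative act
        and not record.get("pre_ticked", False)       # no default opt-in
    )

ok = {"purpose": "ad_personalisation", "informed": True,
      "action": "clicked_agree", "pre_ticked": False}
silent = {"purpose": "ad_personalisation", "informed": True,
          "action": "continued_browsing"}  # inactivity is not consent
# Only the explicit click passes; merely continuing to browse fails.
```

The contrast is the whole point: under a GDPR-style rule, silence or a buried default never counts as consent.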

By filling these gaps, Malaysia has an opportunity to lead the region by adopting clearer consent rules or stronger transparency standards.

Meanwhile, as consumers we should not be content to be sitting ducks. We need to understand and limit how we are profiled, and how much permission we surrender through our Apps and social media settings. We should proactively review and adjust our privacy settings to control who can view our posts, profile information, and activity.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.

AI, Deepfakes, and the Right to Your Digital Selves

by Thulasy Suppiah, Managing Partner

As societies globally grapple with the disturbing rise of AI-generated deepfakes, a challenge highlighted by recent incidents abroad and here in Malaysia, Denmark has just proposed a groundbreaking solution that demands our attention. The Danish government plans to amend its copyright law to give every individual the right to their own body, facial features, and voice. This is a profound and necessary step in protecting human identity in the digital age.

For too long, the debate around deepfakes has been framed primarily as an issue of privacy or harassment, often placing a heavy burden on victims to prove harm after their likeness has been violated and spread across the internet. This new approach fundamentally shifts the paradigm. By treating a person’s identity—their face, their voice—as a form of personal intellectual property, it grants them a clear right of ownership.

This is not merely a subtle legal change; it is a game-changer. It means a victim would no longer need to prove reputational damage or malicious intent, which can be difficult and retraumatising. Instead, the case becomes a simpler one of unauthorised use of their “property.” This empowers the individual with a powerful legal shield and a direct path to demand removal of content and seek compensation.

Crucially, such a framework also establishes clear accountability for the tech platforms where this content proliferates. By outlining significant consequences for non-compliance, it sets clear legal and financial expectations for social media and messaging companies. This effectively transitions the responsibility from a reactive content moderation process to a proactive legal obligation, creating a clear imperative for them to prioritise the swift handling of non-consensual deepfakes.

While our authorities are rightly using existing laws like the Communications and Multimedia Act to prosecute perpetrators, these are often reactive measures. The kind of proactive governance being proposed in Denmark anticipates the inevitable misuse of rapidly advancing AI and creates a robust defence before the next wave of more realistic and accessible deepfake tools becomes available. It’s an attempt to legislate for the world we are entering, not the one we are leaving behind.

Of course, any such law must include exceptions for satire and parody to protect free expression. But the core principle remains: your digital likeness belongs to you.

As Malaysia continues its journey into the digital economy, we must consider if our own legal frameworks are truly fit for the AI era. The Danish model offers a compelling vision for how to restore digital autonomy and protect the dignity of our citizens. It sends an unequivocal message that a person cannot simply be run through a digital copy machine for any purpose, malicious or otherwise, without their consent. It is a thought-provoking and essential conversation we need to have now.


The World's Rebalancing Act: Malaysia's Moment to Shine

Published by The Star on 6 Mar 2025

by Thulasy Suppiah, Managing Partner

The global economic landscape is undergoing a profound transformation, driven by geopolitical realignments, most notably the US-China tech rivalry, and a widespread corporate imperative to ‘de-risk’ and ‘decouple’ supply chains. In this shifting terrain, Malaysia has admirably positioned itself as a stable and attractive hub for foreign direct investment (FDI). Microsoft’s recent reaffirmation of its substantial RM10.5 billion investment in cloud and AI infrastructure here, despite global pullbacks elsewhere, is a powerful testament to this trend and a vote of confidence in our nation’s potential.

This ‘flight to safety’ or search for strategic alternatives by multinational corporations (MNCs) presents a golden opportunity for Malaysia. We are currently benefiting as companies seek to diversify their operations and mitigate risks associated with over-concentration in any single market, particularly in light of ongoing trade disputes, semiconductor export controls, and vulnerabilities exposed by past global disruptions.

But this favourable tide is not self-sustaining. The very forces that benefit us today – trade tensions, potential tariffs, and shifting alliances – create an inherently volatile environment. To ensure Malaysia not only attracts but also retains high-quality FDI and solidifies its position as a key player in the global economy for years to come, we must adopt proactive and far-sighted strategies, rather than merely reacting to external pressures.

Firstly, strengthening our domestic fundamentals is non-negotiable. This means aggressive investment in a future-ready workforce through upskilling and reskilling initiatives, particularly in high-tech sectors like AI and advanced manufacturing. We need to cultivate a generation that are not just consumers of technology but creators and innovators. Continuous upgrades to our digital and physical infrastructure, including sustainable energy solutions for power-hungry data centres, are also paramount.

Secondly, our policy and regulatory environment must be a hallmark of stability, clarity, and adaptive agility. Predictable long-term policies, a streamlined bureaucracy that champions ease of doing business, and transparent enforcement are critical. Our regulatory frameworks must be robust enough to ensure good governance but flexible enough to accommodate and encourage innovation, being responsive to the needs of a rapidly evolving global economy.

Thirdly, a concerted effort to move Malaysia up the global value chain is essential. This involves strategically fostering indigenous innovation and attracting investments that bring not just capital, but also cutting-edge technology, R&D activities, and opportunities for local SMEs to integrate into sophisticated global supply chains. Focusing on niche specialisations where Malaysia can build a distinct competitive advantage will be key.

Finally, our international engagement and trade diplomacy must be astute and proactive. We need to continuously champion Malaysia as a reliable, neutral, and pro-business partner on the global stage, strengthening beneficial trade agreements and maintaining open dialogues with MNCs to understand their long-term strategies and concerns.

Malaysia currently finds itself in an enviable position, benefiting from global economic restructuring. However, this is not a moment for complacency but for concerted, strategic action. By building on our current strengths and proactively addressing future challenges, we can ensure Malaysia is not merely a beneficiary of transient global shifts, but a resilient and proactive architect of its own enduring economic prosperity.


Why AI’s Next Leap in Southeast Asia May Begin in Malaysia

Scaling in ASEAN isn’t plug-and-play. Here’s what works

By Thulasy Suppiah, Managing Partner of Suppiah & Partners

ASEAN is the next AI frontier: young digital natives are driving explosive growth in data consumption, business demand for cloud computing is surging, and governments are actively promoting digitalisation, with 40 per cent already implementing national cloud adoption strategies. These trends are driving the region, with its population of 680 million, through rapid digital transformation. With demand rising for digital and data infrastructure, ASEAN has become a critical hub for technology, connectivity and data-driven growth. Although underdeveloped in terms of AI penetration, the World Economic Forum notes that the region is more interested in AI’s potential benefits and less concerned with its risks, reflecting a culture of acceptance and exploration. It also notes that strong government support and investment are fostering AI adoption and deployment, with several countries already developing national AI frameworks. At the same time, funding for research and development has increased, and regulatory sandboxes allow for experimentation.

What AI Firms Need to Scale in ASEAN — and Where They Struggle

In the first half of 2024 alone, more than USD30 billion (almost RM128 billion) was committed to building AI-ready data centers across Singapore, Thailand and Malaysia, laying the foundation for accelerated computing, AI services and data growth, and positioning the region for long-term success. Opportunities abound in language AI, logistics, fintech and public-sector applications. Many ASEAN countries are still in the “greenfield” stage of AI adoption, so early movers will have an advantage.

However, it would be wrong to assume that one playbook works across all borders. AI firms looking to expand in ASEAN should understand that each country has different laws, diverse data-sovereignty baselines and no common AI standard. Expansion in this region requires planning, not brute force.

- Start by understanding unique local industry objectives

While ASEAN’s diverse user base offers unique product-market-fit opportunities, it also means that ICT providers must co-develop industry-specific solutions, strengthen ecosystem collaborations, and drive data-led sales and commercial strategies before businesses can unlock the full potential of their digital transformation efforts. Companies should not simply push technology adoption but align AI initiatives with the core objectives of local businesses. By identifying specific challenges and deploying AI solutions strategically, businesses can unlock new levels of efficiency, productivity and innovation. Success hinges on a strategic approach focused on creating tangible value.

- Pivoting and Adding Value Where There are Constraints

There is high friction in entering certain ASEAN countries. Firstly, the AI expertise pool is still limited and unevenly distributed. There is a shortage of skilled personnel across the AI spectrum, from machine learning engineers and data scientists to professionals in algorithm development and those able to critically evaluate AI solutions.

Secondly, the investment cost of implementing AI solutions remains high, especially when factoring in transitions from legacy applications. Current cost structures often reflect benchmarks from the US, UK and Japan, while local AI companies tend to serve clients in developed regions, where margins are higher. This limits their resources for domestic projects, meaning some ASEAN nations will require subsidies to see a return on investment (ROI).

Thirdly, typical initial setup costs for AI projects, especially large-scale implementations, involve substantial outlays, anywhere from USD5,000 to USD500,000 (RM21,274 to RM2.12 million) depending on tools, machine learning models, data security and hardware requirements. This leads some companies to favour smaller-scale data analytics solutions.

Fourth is the ethical concern of job displacement. While AI could increase productivity, it will also disrupt the workforce significantly. McKinsey estimates that 23 million jobs could be lost to automation by 2030. New jobs will replace old ones, but how will the job market shape up, and how will governments cope with unemployment? These considerations may delay decisions on AI projects.

Finally, legal uncertainty in some markets causes deployment paralysis. AI-specific regulations are either absent or still under discussion in many ASEAN countries. Current rules mostly centre on existing Personal Data Protection Acts (PDPA) modelled on the European General Data Protection Regulation (GDPR). Localising these could take more than three years, with further delays during election cycles.

Malaysia vs. Singapore: Overflow Strategy in Action

EY ASEAN’s Joongshik Wang remarked: “While enterprises (in Singapore) are keen to invest in emerging technologies, many businesses struggle to bridge the gap between pilot and full-scale implementation. This is often due to integration challenges, lack of clear return on investment (ROI), and the need for stronger ecosystem support to drive business value.” While Singapore boasts the most robust legal framework, it lags in other areas.

Higher utility, labour and land costs also make Singapore less attractive for AI infrastructure buildout. On the other side of the straits, Malaysia offers high-quality, ubiquitous connectivity, larger tracts of available land, a cheaper workforce and the necessary infrastructure. It also has the physical space for data centers and manufacturing parks to scale.

While Singapore is a great environment for firms to set up their headquarters, Malaysia is an excellent, cost-effective complement to Singapore’s HQ functions.

Why Malaysia? It’s ASEAN’s Shortcut for Getting Things Right

As ASEAN Chair, Malaysia is in an enviable position to lead the region’s digital transformation. Benefiting from its proximity to Singapore, along with ample power and water resources, Johor (the state at Malaysia’s southern tip) has in recent years attracted major hyperscalers including Microsoft, Equinix and NTT, with Stack’s 220-megawatt facility being the latest. Other hyperscalers such as Amazon Web Services, Google, Alibaba and Huawei also have a firm footing here.

Their presence is evidence of the trust they have in Malaysia’s resources and connectivity, high quality ports and cloud zones, and dependable infrastructure and logistics.

- Malaysia Actively Promotes and Welcomes AI Collaboration Initiatives

At a recent “Strengthening ASEAN-China Cooperation” forum, Lee Chean Chung, Chairman of the Centre of Regional Strategic Studies (CROSS), said that Malaysia is well positioned to lead the region’s digital transformation: “Malaysia’s strategic location, diverse and multilingual talent pool, robust infrastructure and collaborative mind-set make it a natural hub for AI development in the region.”

Through CROSS, Malaysia is actively promoting forward-looking AI policy development and facilitating regional cooperation, encouraging the establishment of joint ASEAN-China AI research centers, cross-border innovation hubs and regional talent development programmes. By supporting policy frameworks for responsible AI development and deployment, Malaysia is helping to shape a future where AI drives economic growth and fosters shared prosperity and equity.

Malaysia is also visionary and focused, able to articulate its need for on-the-ground support to ensure a long-term, globally ready workforce. To this end, Lee hoped ASEAN and China would collaborate to invest in science, technology, engineering and mathematics (STEM) education, launch AI fellowship programmes, and expand youth exchange initiatives.

- Malaysia’s Digital Government Is Laying Serious Groundwork

Beyond just policy and ideas, the Malaysian Government has set an example by driving its own AI adoption and readiness. Malaysia’s Digital Ministry has harnessed generative AI under its five-year AI technology plan. Recently, 445,000 public officers were given access to Google Workspace’s latest generative AI capabilities to scale up AI adoption across the civil service and enhance government service delivery. The first phase of the programme, AI at Work, was introduced in December 2024 alongside the launch of Malaysia’s National AI Office (NAIO). As the central authority championing Malaysia’s AI agenda, the NAIO is a further demonstration of Malaysia’s commitment to positioning the country as a regional leader in AI technology and applications.

- Malaysia’s Existing Industries Create Real AI Demand

Malaysia’s manufacturing sector (with an expected GDP contribution of RM587.5 billion by 2030) has been actively integrating AI in automation, logistics and quality control.

For instance, SMART Modular Technologies (SMART), a global leader in specialty memory and storage solutions, uses AI-powered high-speed precision industrial robots at its Malaysian facility to identify and isolate manufacturing defects.

Another example is KVC, a leading B2B distributor of electrical products, solutions, and related services. It has leveraged IBM Robotic Process Automation (RPA) to streamline the finance department’s Procure-to-Pay processes. By automating key tasks such as invoice extraction, matching and payment processing, it has improved operational efficiency, reduced errors, lowered costs, accelerated workflows and strengthened financial management.
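To illustrate the kind of rule-based check that underpins such automation (a generic sketch only, not KVC’s or IBM’s actual implementation; the `PurchaseOrder`, `Invoice` and `match_invoice` names are hypothetical), an incoming invoice can be matched against its purchase order and flagged for manual review whenever the PO number, vendor or amount does not line up:

```python
from dataclasses import dataclass


@dataclass
class PurchaseOrder:
    po_number: str
    vendor: str
    amount: float


@dataclass
class Invoice:
    po_number: str
    vendor: str
    amount: float


def match_invoice(invoice, orders, tolerance=0.01):
    """Return the matching purchase order, or None if the invoice
    should be routed to a human reviewer (unknown PO, wrong vendor,
    or amount mismatch beyond a small rounding tolerance)."""
    po = next((o for o in orders if o.po_number == invoice.po_number), None)
    if po is None or po.vendor != invoice.vendor:
        return None
    if abs(po.amount - invoice.amount) > tolerance:
        return None
    return po
```

A production Procure-to-Pay pipeline would add goods-receipt (three-way) matching, OCR-based invoice extraction and exception queues, but the core decision logic follows this shape.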

Similarly, the retail and food and beverage (F&B) sectors are also using AI, from marketing to inventory management. By analysing past sales to predict demand and seasonal trends more accurately, restaurants can now order just the right amount of stock, reducing waste and protecting profit margins.
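As a rough sketch of how past sales can drive ordering decisions (simplified, assumed logic; real forecasting systems use far richer statistical or machine-learning models), a naive seasonal forecast averages historical sales for each day-of-week slot to predict the coming week’s demand:

```python
def forecast_demand(daily_sales, season_length=7):
    """Naive seasonal forecast: predict next week's demand for each
    day-of-week slot as the average of past sales in that slot.

    daily_sales: list of daily sales figures, oldest first, whose
    length is a multiple of season_length (whole weeks of history).
    """
    if len(daily_sales) < season_length:
        raise ValueError("need at least one full season of history")
    forecast = []
    for slot in range(season_length):
        # Every season_length-th value belongs to the same weekday slot.
        history = daily_sales[slot::season_length]
        forecast.append(sum(history) / len(history))
    return forecast
```

For example, four weeks of history with a consistent weekend spike would yield a forecast that preserves that spike, letting a restaurant stock more for Saturday than for Tuesday.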

Some F&B businesses are even using AI to test new flavour combinations, cutting down on research and development (R&D) time.

Customer service in Malaysia has evolved too. Many online retailers, such as Zalora, use AI chatbots to answer FAQs and even suggest popular add-ons based on customer preferences.

Nevertheless, many businesses, especially small and medium enterprises (SMEs) in Malaysia, struggle to bridge the gap between pilot and full-scale implementation.

Malaysia’s Legal Framework Isn’t a Hurdle — It’s a Filter

Malaysia was one of the first countries in the region to put adequate legal frameworks in place to regulate the use and misuse of internet technology, and it continues to draw up new laws accordingly.

The recently passed Data Sharing Act 2025 establishes a legal framework for data sharing between public sector agencies, and between government agencies and businesses. It aims to improve government efficiency, enhance transparency and ensure data security. The Act also seeks to protect sensitive and confidential information, strengthening data security through structured and accountable data governance. It reflects Malaysia’s recognition that data continues to drive decision-making and digital transformation, and its commitment to navigating this new digital regulatory environment effectively.

Malaysia was also among the first ASEAN nations to establish the Personal Data Protection Act (2010) to regulate the processing of personal data in commercial transactions by Data Users and protect the interests of Data Subjects.

The government has also issued the National Guidelines on AI Governance & Ethics (AIGE), which outline the obligations of end users, policymakers and developers. Although not legally binding, the guidelines propose seven core principles: Fairness; Reliability, Safety, and Control; Privacy and Security; Inclusiveness; Transparency; Accountability; and Pursuit of Human Benefit and Happiness.

While there is no dedicated AI law yet, an AI Bill is in the works. In the meantime, Malaysia’s policy environment remains predictable and stable thanks to its existing local government, intellectual property, contract and employment laws.

Malaysia also has skilled local legal navigators present to help AI firms avoid missteps in interpretation and execution of Malaysia’s regulatory framework.

Conclusion: Malaysia Doesn’t Replace ASEAN — It Unlocks Its Potential

Firms entering Malaysia gain operational clarity and regional access. It’s where complexity becomes manageable — and strategy beats speed.

As a start, Malaysia is actively looking for firms able to deploy capable AI solutions with lower burn rates and faster iteration cycles. Those deploying affordable, efficient AI stacks (e.g. from China) can leapfrog into ASEAN from here. Chinese technology companies such as Baidu, Alibaba and Tencent have been active in developing open-source AI models for many years. Their strategy, supported by Chinese universities and the government, can be seen as an “open innovation” model aimed at accelerating research and development and leapfrogging past the US. The availability of high-quality open-weight Large Language Models (LLMs) means that Malaysia can now access them at far lower cost than before, and can run its own LLMs without having to transfer sensitive data to commercial third parties or foreign countries, giving it greater data autonomy.

- Success Requires a Local Partner and a Legal Firm to Navigate and Execute

Successfully setting up in Malaysia will also require a local partner, good legal counsel and a local IT firm for execution. As the regulatory environment is constantly in flux, engaging a legal firm with a full understanding of local laws is a crucial first step for AI firms entering Malaysia and making vital decisions. With AI already having an impact on everything from risk assessment and insurance underwriting to policy administration and claims processing, AI firms should look for attorneys who are also competent and current with the latest developments in AI and technology, to help them find firm footing in Malaysia and, beyond that, in ASEAN.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.


[Feature Article] The Star Newspaper: Malaysia’s New Data Act: High Hopes, High Stakes


Published by The Star on 8 May 2025

by Thulasy Suppiah, Managing Partner

The recent enactment of the Data Sharing Act 2025 marks a significant step in Malaysia’s digital journey. The potential benefits are clear: enhanced public services through better agency coordination, data-driven decision-making, and a vital boost to our burgeoning AI ecosystem, aligning with the MADANI government’s aspirations. Creating a legal framework for inter-agency data sharing is indeed necessary.

However, as this Act takes its first steps, its success hinges critically on more than just legislative intent. For the public, the promise of efficiency must be balanced with robust assurance of security. We cannot overlook the context of past incidents involving significant leaks of Malaysians’ personal data allegedly linked to government systems. This history naturally fuels public apprehension.

It’s crucial to remember that the Personal Data Protection Act (PDPA) 2010 does not apply to federal or state governments. Therefore, the safeguards, evaluation criteria, and oversight mechanisms embedded within this new Data Sharing Act carry immense weight – they are the primary line of defence governing how citizen data is handled between government bodies.

While the establishment of the National Data Sharing Committee is welcome, its effectiveness will depend entirely on rigorous implementation and strict adherence to protocols. Simply having an Act is insufficient; the underlying cybersecurity infrastructure across all participating agencies must be demonstrably strong and resilient against breaches. Public confidence needs to be earned, not assumed.

Therefore, alongside implementing this Act, there must be a transparent commitment to significantly upgrading government digital infrastructure and cybersecurity capabilities. Assurances must be backed by visible action.

The Data Sharing Act 2025 provides a foundation. Now, the hard work begins: building a secure, trustworthy system that delivers the promised benefits without compromising the personal data Malaysians entrust to the government. Its success will ultimately be measured not just by shared data points, but by the public’s confidence in its protection.



AI in the Lawmaker’s Seat: Progress or Peril?


Published on 03 May 2025

by Thulasy Suppiah, Managing Partner

The recent announcement that the United Arab Emirates intends to use artificial intelligence (AI) to help draft, review, and even suggest updates to its laws is a truly groundbreaking development. Presented as a world first, this move goes far beyond the global discussion about regulating AI; it steps into the territory of governing with AI, promising huge gains in legislative speed and efficiency.

While the allure of faster, more precise lawmaking is understandable, particularly given the UAE’s projections of boosting GDP and reducing costs, this pioneering approach warrants careful consideration and raises profound questions. The core concern isn’t just about technical accuracy – though experts rightly warn that current AI systems still suffer from reliability issues and can “hallucinate.” It cuts deeper, touching upon the very nature of lawmaking itself.

Firstly, the essential human element risks being sidelined. Lawmaking isn’t merely an exercise in processing data; it involves intricate negotiation, societal debate, compromise, and the embedding of cultural values. Can an algorithm truly replicate the nuances of human deliberation? Will laws significantly shaped by AI command the same legitimacy in the eyes of the public if the human process of debate and drafting is diminished?

Secondly, the risk of manipulation cannot be ignored. AI systems learn from the data they are fed and operate based on the parameters they are given. Whoever controls these inputs – the training datasets, the prioritised principles – could potentially steer legislative outcomes in subtle, perhaps undetectable ways, embedding hidden agendas into the legal fabric.

Furthermore, AI might strive for a level of logical consistency that clashes with the necessary flexibility of human society. Our laws often contain deliberate ambiguities, allowing for interpretation by courts based on evolving norms and specific circumstances. An AI optimising purely for consistency might produce rigid frameworks ill-suited to real-world complexities.

The security implications are also immense. A centralised AI system involved in drafting national laws would inevitably become a prime target for sophisticated cyberattacks. A successful breach could allow malicious actors to influence or corrupt foundational legal structures, potentially causing widespread disruption before being detected.

Finally, there are potential ethical framework conflicts. An AI trained on supposedly “global best practices” or diverse international datasets might inadvertently propose legal concepts or norms that conflict with a nation’s specific cultural identity, religious principles, or local traditions.

For nations like Malaysia, observing this bold Emirati experiment, the path forward requires careful thought. We should certainly embrace AI’s potential to assist governance and make processes more efficient. However, the UAE’s initiative underscores the urgent need for us to develop robust national frameworks before venturing down a similar path. Any integration of AI into critical functions like lawmaking must be governed by stringent ethical guidelines, transparency, and crucially, ensure that the human touch – deliberation, ethical judgment, and final approval – remains central and paramount. Balancing the power of AI with the wisdom of human oversight is key to ensuring technology serves society, not the other way around.

