AI in the Lawmaker's Seat: Progress or Peril?

Published on 03 May 2025

by Thulasy Suppiah, Managing Partner

The recent announcement that the United Arab Emirates intends to use artificial intelligence (AI) to help draft, review, and even suggest updates to its laws is a truly groundbreaking development. Presented as a world first, this move goes far beyond the global discussion about regulating AI; it steps into the territory of governing with AI, promising huge gains in legislative speed and efficiency.

While the allure of faster, more precise lawmaking is understandable, particularly given the UAE’s projections of boosting GDP and reducing costs, this pioneering approach warrants careful consideration and raises profound questions. The core concern isn’t just about technical accuracy – though experts rightly warn that current AI systems still suffer from reliability issues and can “hallucinate.” It cuts deeper, touching upon the very nature of lawmaking itself.

Firstly, the essential human element risks being sidelined. Lawmaking isn’t merely an exercise in processing data; it involves intricate negotiation, societal debate, compromise, and the embedding of cultural values. Can an algorithm truly replicate the nuances of human deliberation? Will laws significantly shaped by AI command the same legitimacy in the eyes of the public if the human process of debate and drafting is diminished?

Secondly, the risk of manipulation cannot be ignored. AI systems learn from the data they are fed and operate based on the parameters they are given. Whoever controls these inputs – the training datasets, the prioritised principles – could potentially steer legislative outcomes in subtle, perhaps undetectable ways, embedding hidden agendas into the legal fabric.

Furthermore, AI might strive for a level of logical consistency that clashes with the necessary flexibility of human society. Our laws often contain deliberate ambiguities, allowing for interpretation by courts based on evolving norms and specific circumstances. An AI optimising purely for consistency might produce rigid frameworks ill-suited to real-world complexities.

The security implications are also immense. A centralised AI system involved in drafting national laws would inevitably become a prime target for sophisticated cyberattacks. A successful breach could allow malicious actors to influence or corrupt foundational legal structures, potentially causing widespread disruption before being detected.

Finally, there are potential ethical framework conflicts. An AI trained on supposedly “global best practices” or diverse international datasets might inadvertently propose legal concepts or norms that conflict with a nation’s specific cultural identity, religious principles, or local traditions.

For nations like Malaysia, observing this bold Emirati experiment, the path forward requires careful thought. We should certainly embrace AI’s potential to assist governance and make processes more efficient. However, the UAE’s initiative underscores the urgent need for us to develop robust national frameworks before venturing down a similar path. Any integration of AI into critical functions like lawmaking must be governed by stringent ethical guidelines and transparency, and must, crucially, ensure that the human touch – deliberation, ethical judgment, and final approval – remains central and paramount. Balancing the power of AI with the wisdom of human oversight is key to ensuring technology serves society, not the other way around.

© 2025 Suppiah & Partners. All rights reserved. The contents of this newsletter are intended for informational purposes only and do not constitute legal advice.
