Tuesday, 18 November 2025

Synthetic Media: Why Malaysia Needs a Targeted Legal Response

The rise of synthetic media, defined as AI-generated video, audio, images, and text designed to appear authentic, has introduced a new class of digital threats that are immediate, invasive, and often devastating. Deepfakes, a particularly dangerous subset, use advanced algorithms to fabricate hyper-realistic content with the intent to deceive.

Malaysians have already encountered cloned voices used in scam calls, fabricated videos of public figures promoting fraudulent schemes, and AI-generated pornography used for blackmail.

These incidents undermine personal safety, social and institutional trust, and democratic discourse. The technology is also becoming dangerously accessible.

A recent case involving a student who created obscene synthetic images of a female peer illustrates how quickly young users can move from experimentation to exploitation. As AI tools become more widespread, the risk of casual misuse escalating into serious harm grows exponentially.

The Need for a Dedicated Legal Response Mechanism

Given the urgency and severity of these harms, regulators must adopt a comprehensive and proactive response. What is urgently needed is not just new laws, but a dedicated legal response mechanism within the Royal Malaysia Police or the Malaysian Communications and Multimedia Commission (MCMC). This specialised unit should operate on a 24-hour basis to assist victims, investigate synthetic media crimes, and coordinate rapid takedowns.

Such a unit must be equipped with technical expertise, legal authority, and public accessibility. Victims of deepfake abuse often face delays and confusion when reporting incidents through conventional channels. The longer harmful content remains online, the greater the harm it causes to the individual and the public. A rapid-response division would ensure timely intervention, prevent further dissemination, and restore public confidence in digital safety. In parallel, the response strategy must include public education and media literacy initiatives, beginning at the primary school level.

Laws Directly Focused on Harm

There is also a need to establish laws that directly address these new threats without inadvertently curbing the technology's legitimate potential. India’s proposed Draft Rules offer a useful model by defining synthetic media broadly, but distinguishing malicious deepfakes by their intent to mislead or defraud. This distinction is critical. A blanket regulation of all AI-generated content risks stifling innovation in education, journalism, medicine, and the arts. Targeted legislation, by contrast, must focus squarely on demonstrable harm and criminal intent.

The Online Safety Act 2025: An Opportunity for Correction

The Online Safety Act 2025 (the Act), set to take effect in January 2026, represents Malaysia’s most comprehensive attempt to regulate online harm. The Act’s main shortcoming is that it primarily shifts the responsibility for managing harm onto service providers without implementing an effective, user-centric safety mechanism.

While the Act’s wide definition of "harmful content" is intended to cover deepfakes, it neither explicitly defines nor criminalises their malicious creation; fabricating such media is not, in itself, an offence under the Act. Malaysia must seize this legislative opportunity to make three key corrections:

1. Criminalise Malicious Creation at the Source - Malaysia should follow the lead of countries like South Korea, which criminalise the creation and possession of deepfake non-consensual intimate imagery (NCII) from the moment of fabrication. This protects individuals’ digital likeness and consent, addressing gaps in existing laws such as Penal Code Section 292, which focuses narrowly on obscene material.

2. Establish a Dedicated Response Unit - Effective legislation must be supported by a specialised unit within the police or MCMC, tasked with providing 24-hour assistance to victims of synthetic media abuse. The current provisions of Section 16 and Part IV of the Act are document-heavy and procedurally complex, better suited to institutional complainants than to individual users reporting harmful content.

3. Guarantee Free Expression and Mandate Transparency - The Act must include an explicit, strong non-censorship clause, as the current Section 13(3) only vaguely prescribes protection for users' free expression.

In parallel, platforms should be required to implement transparency measures: mandatory user declarations and algorithmic labelling of synthetic content. If AI-generated media is clearly marked, say, with a non-removable, standardised digital watermark, the public will be able to assess the authenticity of the media. This shifts responsibility to content creators and platforms, reducing the risk of unnecessary government intervention.
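The verification logic behind such labelling can be illustrated with a minimal sketch. This is not any existing Malaysian or platform scheme: the field names, the shared key, and the `make_label`/`verify_label` helpers are hypothetical, and production provenance standards such as C2PA use public-key signatures embedded in the media file rather than a shared secret. The sketch only shows the core idea, that a standardised label binds a declaration ("this is synthetic") to the exact bytes of the media, so any alteration or mislabelling becomes detectable.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real schemes use asymmetric (public-key) signatures.
PLATFORM_KEY = b"demo-signing-key"

def make_label(media_bytes: bytes, generator: str) -> dict:
    """Build a signed manifest declaring the media as AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "synthetic": True,
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PLATFORM_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the label matches these exact bytes."""
    claimed = dict(manifest)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, "sha256").hexdigest()
    ok_sig = hmac.compare_digest(sig, expected)
    ok_hash = claimed.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()
    return ok_sig and ok_hash

media = b"...fake image bytes..."
label = make_label(media, "example-model")
print(verify_label(media, label))         # True: unaltered media verifies
print(verify_label(media + b"x", label))  # False: any alteration breaks the label
```

The design point is that verification requires no trust in the hosting platform's display layer: anyone holding the media and its manifest can independently confirm the synthetic-content declaration.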

Conclusion: Balancing Safety and Liberty

The threat posed by deepfakes undermines trust and corrodes the foundations of democratic discourse by making all media suspect. The threat is amplified for vulnerable individuals and communities, where lower digital literacy, the potential for severe psychological harm, and the deliberate exploitation of trust compound the damage.

Malaysia’s legal evolution must reflect a dual commitment: to protect citizens from digital harm and to preserve the constitutional right to free expression. Effective regulation must be precise, transparent, and supported by the institutional capacity of regulators.

By criminalising malicious creation, establishing a dedicated response unit, mandating clear labelling, and reaffirming the right to dissent, Malaysia can strike the right balance by ensuring public safety without sacrificing liberty.

Just as importantly, a legal framework that clearly defines what is prohibited will also create a safe and enabling environment for the legitimate development of synthetic media technologies in education, accessibility, journalism, and the creative arts.

Clarity in law not only deters abuse but empowers innovation.