
The use of Artificial Intelligence in Mediacorp

The emergence and evolution of Artificial Intelligence in all its forms offers opportunities to enhance creativity, drive process innovation and improve productivity in ways that will impact every aspect of our work.

We believe the careful and considered use of AI offers an opportunity to strengthen Mediacorp’s ability to fulfil its mission as the national media network. AI has the potential to help our teams work more effectively and efficiently; deepen our engagement with audiences; enhance content creation workflows; and give us new capabilities.

To this end, Mediacorp has deployed AI products and continues to work on a variety of projects to explore the use of AI across our portfolio.

However, we realise AI also poses new risks. These include professional and ethical issues, legal and copyright challenges, as well as risks around misinformation. These risks are real and should not be underestimated. The use of AI must be governed by vision and vigilance.

As such, the use of AI across Mediacorp is governed by an AI Council comprising senior leaders from across the company, guided by several broad principles. These include having effective and informed human oversight to ensure that we do not undermine the trust of audiences. We will be accountable for all content we create.

Our use of AI will always be consistent with Mediacorp’s professional and editorial values. In our newsrooms, the use of AI for ideation, creation, presentation, or distribution will have even more stringent guardrails. This is to ensure that the high level of trust in the accuracy and authenticity of our news reporting and current affairs programmes is never undermined.

To signal that the newsroom will always prioritise and prize authentic, human storytelling, the newsroom has also committed not to clone the voice or likeness of any of its staff, in particular its reporters, correspondents, producers, and presenters. Cloning could open the door to significant risks of disinformation by malicious actors seeking to undermine the trust and credibility that the newsroom has earned with its audiences.

For the rest of Mediacorp, however, for example in our entertainment and lifestyle content and marketing material, AI has the potential to enhance creativity and productivity.

In such use cases, the audience will be informed in a manner appropriate to the context, where practical, to signal that AI has been used to generate photorealistic or audio-realistic elements in our content. This may include the use of on-screen disclosure icons.

Where disclosure is not practical, Mediacorp will ensure that AI-generated elements, for example in trailers and promotional materials, are not misleading. The same will be expected of our advertisers.

On the broader AI landscape, Mediacorp is acutely aware that AI tools are openly available on the Internet and are increasingly being integrated into tools and software by third parties. Mediacorp is also studying how the inclusion of AI in search engines could impact traffic to our websites, or how the abuse of AI by others could lead to greater disinformation.

We will collaborate with technology companies, government, and other media companies to champion the safe and trustworthy use of AI. To this end, Mediacorp is a member of the Infocomm Media Development Authority (IMDA) AI Verify Foundation, which aims to harness the global AI community to promote best practices and standards for the use of AI.
