Regulating fake news: The urgency, challenges, and role of the individual

The why, what and how of internet regulations in Southeast Asia and Australia.

By: Andrea Chan

Figure 1: Masquerade scene from The Phantom of the Opera Movie. Image from: MovieWeb

 “Eye of gold. Thigh of blue. True is false. Who is who?”

So begins a stanza in Masquerade, one of the songs in Andrew Lloyd Webber’s musical The Phantom of the Opera, as the guests waltz around the Opera House wearing intricately designed masks and flamboyant gowns. It is an overwhelming scene: fast music, voices from all directions and dozens of guests dancing in the security of their anonymity. Despite the grandiosity and exuberance of the scene, it is sobering to realise that the masquerade is analogous to the online world we live in today. The internet has given individuals a voice under the protection of anonymity and fake identities, and, until recently, with little responsibility or accountability for the content being created online.

Figure 2: Rabies phobia sparked by a viral Facebook video leads to thousands queuing up for vaccinations in Phnom Penh, Cambodia. Photograph: Chor Sokunthea, Khmer Times

The need to regulate harmful online content

In the past year, we have seen the danger of online falsehoods and the spread of hate speech, particularly in Southeast Asia and Australasia.

In Cambodia this year, a viral Facebook video of a young girl who died after being bitten by a cat sparked a panicked demand for rabies vaccinations. Thousands were driven by fear to seek shots that were unnecessary in many cases, and the government had to step in to address the situation (Rinith, 2019).

In March, we saw social media platforms used to document a massacre in the New Zealand terror attacks. Because the livestream went unmoderated, the video remained on Facebook for 17 minutes before it was taken down; roughly 300,000 copies were shared on the platform, and Facebook blocked a further 1.2 million attempted uploads over the next 24 hours (Waller, 2019). Different versions of the terror attack video were also shared on other platforms such as YouTube and Twitter.

Most recently, violent protests broke out in the Indonesian capital after an official election count confirmed President Joko Widodo as the winner. Many of the protesters were supporters of his opponent, Prabowo Subianto. The riots were sparked by false information spread via WhatsApp and other social media channels, including anti-Chinese hoaxes and calls for violence (Chew & Barahamin, 2019). In March, Reuters found that the campaigns of both Widodo and Prabowo had deployed social media teams to spread propaganda and fake news; the misinformation played on ethnic and religious sentiments to create a divide (Diela & Potkin, 2019).

Figure 3: Mock gravestone during a protest against new online media regulations imposed by the government in Singapore. Photograph: VOA News/Reuters

The challenges of implementing Internet regulations

In light of the recent negative events triggered by unregulated online content, both governments and social media platforms like Facebook, Google and Twitter have stepped in with measures to regulate online content, including fake news, hate speech and violent media. However, these measures have been met with widespread criticism.


In response to the New Zealand terror attacks, Australia recently passed a new law that holds internet platforms, including social media companies, criminally liable for failing to remove violent videos and audio (Paris, 2019). The bill faced criticism over ambiguities in its key terms, such as how fast content must be removed to qualify as ‘expeditious’ and who would face prosecution, as well as the possibility that the law will incentivise companies to limit freedom of expression in order to avoid liability (Douek, 2019).

On 8 May 2019, Singapore’s parliament passed the Protection from Online Falsehoods and Manipulation Act (POFMA) by a vote of 72 to 9, with the large majority held by the People’s Action Party (PAP). The bill drew concerns about the stifling of free speech because it gives any minister in the government the power to force “corrections” to online content deemed to be “false”. Human Rights Watch deputy Asia director Phil Robertson also expressed concern that restrictive content policies might continue to spread across Southeast Asia, especially since similarly controversial laws were recently passed in Vietnam and Thailand (Russell, 2019).

Social Media Platforms

Governments are not alone in stepping in to regulate internet content. Social media companies whose platforms have been used to spread online falsehoods and hate speech have also contributed to efforts to manage sensitive content and misinformation online. Facebook, Google, and Twitter have started to suspend fake accounts, monitor sensitive content, and invest in research and third-party fact-checking services to understand and detect fake information (‘Explained: Fake news in Asia’, 2019).

In particular, Facebook’s founder Mark Zuckerberg acknowledged the need for new regulation in four areas: harmful content, election integrity, privacy, and data portability. He also called for a more standardized approach, suggesting that third-party organizations could set standards on harmful content while internet companies should be accountable for enforcing them (Zuckerberg, 2019). In an article on Facebook’s Newsroom, Guy Rosen announced that Facebook had implemented a ‘one-strike’ policy for Facebook Live, so that anyone who violates Facebook’s Community Standards will be restricted from using Facebook Live for a fixed period of time. Recognizing the need for partnership between industry and academia, Facebook has also invested in research on techniques to detect manipulated media as part of its efforts against fake news (Rosen, 2019).

During an interview with the Washington Post in March, Twitter’s Head of Legal, Policy and Trust, Vijaya Gadde, shared that Twitter is trying to find a way to keep newsworthy tweets up while flagging them so that users are aware that they violate the company’s abuse policies (‘Twitter’s Vijaya Gadde…’, 2019). The company is currently considering labeling such tweets and limiting the visibility of dehumanizing tweets, but has not announced how or when these changes would be implemented (Gold, 2019).

In February this year, Google published a white paper, ‘How Google Fights Disinformation’ (Google, 2019), to share its efforts to tackle the spread of misinformation. These include investing in systems and human review teams that determine whether content creators intend to manipulate or deceive users, so that Google can demote such ‘spam’ content, and giving users easy access to context and diverse perspectives, such as the ‘Full Coverage’ function in Google News and ‘Publisher Context’ on YouTube. In the white paper, Google also detailed its partnerships with various news organizations to support quality reporting, and its funding of research into disinformation and trust.

The role of individuals in regulating the Internet

Figure 4: How to Spot Fake News Infographic created by International Federation of Library Associations and Institutions (IFLA)

Regulation of the online world at the individual level can be done through simple steps: using third-party fact-checking websites like PolitiFact, reporting suspicious and sensitive content, and simply not sharing articles before checking their credibility.

A section on ‘How to Spot Fake News’ on Sewanee University’s website (How to Spot Fake News, n.d.) lists three simple methods: a visual check, a fact check, and a site check. The visual check includes considering whether the headline is intentionally phrased to provoke strong emotions or written in capital letters; these are usually signs that an article might be biased. A poorly made website layout may also suggest that the site does not belong to a reputable organization. The fact check involves verifying whether multiple reputable sources have reported on the same issue; if no other sources have covered the story, it may not be accurate. The site check includes taking note of the author and his or her potential bias, and being wary of URLs that are similar to those of reputable news sources ( instead of .com). Real examples of fake URLs include (Moore & Eribake, 2019) and @BBCBreaking (‘A fake BBC screenshot’, 2019), neither of which belongs to the actual BBC News organization.

The impact of one user might seem small, but all it takes is one report to alert internet platforms and social media companies to potentially harmful content and misinformation, preventing such posts from spreading further in the online community.

Will you be that one person today?

References


A fake BBC screenshot wrongly claims that Kenyans must leave the UAE. (2019, May 22). AFP Fact Check. Retrieved from

Chew, A., & Barahamin, A. (2019, May 22). Chinese Indonesians fear mob attacks as anti-China hoaxes spread online. South China Morning Post. Retrieved from

Diela, T., & Potkin, F. (2019, May 24). “We’re not Chinese officers”: Indonesia fights anti-China…  Reuters. Retrieved from

Douek, E. (2019, April 12). Australia’s New Social Media Law Is a Mess. Lawfare. Retrieved from

Explained: Fake news in Asia. (2019, March 06). South China Morning Post. Retrieved from

Gold, H. (2019, March 28). Twitter is considering labeling Trump tweets that violate its rules. CNN. Retrieved from

Google (2019, February). How Google Fights Disinformation. Retrieved from

How to Spot Fake News. (n.d.). Sewanee: The University of the South. Retrieved from

Moore, M., & Eribake, A. (2019, February 09). Fake news website peddles propaganda under BBC brand. The Times. Retrieved from

Paris, F. (2019, April 04). Australia To Criminalize Failure To Remove Violent Content From Internet Platforms. National Public Radio. Retrieved from

Rinith, T. (2019, March 01). Rabies phobia. Khmer Times. Retrieved from

Rosen, G. (2019, May 14). Protecting Facebook Live from Abuse and Investing in Manipulated Media Research. Facebook Newsroom. Retrieved from

Russell, J. (2019, May 09). Singapore passes controversial ‘fake news’ law which critics fear will stifle free speech. TechCrunch. Retrieved from

Twitter’s Vijaya Gadde says Twitter is working on a way to label tweets that violate terms. Washington Post. (2019, March 28). Retrieved from

Waller, H. (2019, March 18). Live Streaming Delays May Discourage More Viral Massacre Videos. Bloomberg. Retrieved from

Zuckerberg, M. (2019, March 30). Mark Zuckerberg: The Internet needs new rules. Let’s start in these four areas. Washington Post. Retrieved from
