The growing need for content moderation

By the time you finish reading this blog, about 1,200 hours of video will have been uploaded to YouTube. That’s right – enough professional news reports, amateur dance routines and cats that make you laugh out loud to keep you occupied for 50 days. Some will go viral, some will sit dormant and almost all will invite mostly anonymous viewers to publicly post whatever they want about the content or any other topic they wish to comment on.

In an ideal world, every one of those comments would be reviewed before they appeared online. Misogynistic, racist and homophobic posts would be rejected. Embedded links to child pornography and terrorist recruitment sites would never see the light of day. The internet would be what its founders likely dreamed it would become – an open platform that promotes positive community building.

But this is not an ideal world. It is one where the world wide web contains at least 3.5 billion pages, 300 hours of video are uploaded to YouTube every minute and almost 5 billion videos are watched on the platform every day. It is simply impossible to monitor every piece of user-generated content that is uploaded to the internet, and that creates an enormous headache for the countless companies with an online presence.

The headache remedy? Content moderation.

What is content moderation?

Defined as “the practice of monitoring submissions and applying a set of rules that define what is acceptable and what is not”, content moderation is an essential tool for businesses operating in the online space. From encouraging customers to share product reviews on external websites to inviting material to be posted on one’s own social media pages, user-generated content can be hugely beneficial if it is constructive, engaging and ideally positive.

Of course it is not always so, as evidenced by the fact that more than 30 million videos were removed from YouTube in 2019 for violating its community guidelines, while in the same year Google Maps detected and removed more than 75 million policy-violating reviews and 4 million fake business profiles. Even the smallest of brands risk negative fallout if they do not take adequate steps to moderate user-generated content, with specific areas of concern including:

  • Brand reputation: it is almost inevitable that a business will be the focus of negative or inappropriate user-generated content at some point. From forums and social media posts to product reviews and testimonials, it is vital brands do their best to monitor and filter content to promote material that enhances their reputation and prevent critics and trolls from doing the reverse.
  • Growth: attracting additional traffic to an organization’s website or social media channels can play a key role in increasing customer engagement and improving search engine rankings. That is why it is essential to foster an online community where people have a positive experience, as opposed to one where negative content fuels a negative environment.
  • Customer insights: along with filtering out harmful material, content moderation allows businesses to gain insights into users’ behaviors and opinions that can in turn drive strategy, influence decision-making and surface new opportunities.

Types of content moderation

For anyone still in doubt about the importance of content moderation, consider that management consulting firm Everest Group tips the sector to reach up to US$6 billion by 2022. That is a lot of content moderation, with one of five styles typically used to identify concerning material or posts:

  1. Pre-moderation: content requires a tick of approval before ‘going live’. This is the best safeguard against legal or reputational risk but a time-consuming and laborious process that can delay the appearance of content or comments and, in turn, prove frustrating for users and reduce traffic numbers.
  2. Post-moderation: content immediately goes live but is queued for review and any items that do not meet guidelines are removed. While suitable for sites with less traffic, it can be difficult when moderating large volumes of material and increases the risk of inappropriate content being missed.
  3. Reactive moderation: users are asked to ‘flag’ any offensive or questionable material for moderator review. While cost-effective and able to create a sense of site ownership among users, it relies on a proactive audience and can heighten the risk of inappropriate material going undetected.
  4. Distributed moderation: the site’s online community is invited to score or ‘vote’ on published content, with material that does not meet a certain standard removed. A risky strategy, this style should only be used by small organizations with a known and trustworthy user base.
  5. Automated moderation: digital tools detect and block inappropriate content before it goes live, and can also block the IP addresses of users known to be abusive (a minimal sketch of this approach follows the list). It has obvious cost benefits, but the absence of human analysis and interpretation can lead to worthy content being rejected and vice versa.
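
To make these workflows more concrete, here is a minimal sketch of how an automated pre-moderation check (style 5) might gate a submission before it goes live, alongside a post-moderation variant that publishes first and queues the item for human review. All keyword lists, IP addresses and function names are hypothetical placeholders; production systems rely on far richer rule sets and machine-learned classifiers.

```python
from dataclasses import dataclass

# Illustrative blocklists only; real systems use far richer rules and classifiers.
BLOCKED_KEYWORDS = {"examplespamlink", "buy-followers-now"}
BANNED_IPS = {"203.0.113.7"}  # documentation-range example address


@dataclass
class Submission:
    author_ip: str
    text: str


def pre_moderate(sub: Submission) -> bool:
    """Return True if the submission may go live, False if it should be rejected."""
    if sub.author_ip in BANNED_IPS:
        return False
    words = {w.strip(".,!?").lower() for w in sub.text.split()}
    return not (words & BLOCKED_KEYWORDS)


def publish(sub: Submission) -> None:
    print(f"Published: {sub.text[:40]}")


# Post-moderation, by contrast, publishes immediately and queues the item
# for later human review.
review_queue: list[Submission] = []


def post_moderate(sub: Submission) -> None:
    publish(sub)
    review_queue.append(sub)


if __name__ == "__main__":
    ok = Submission("198.51.100.4", "Great product, arrived on time!")
    spam = Submission("203.0.113.7", "examplespamlink get rich quick")
    print(pre_moderate(ok))    # True  -> safe to publish
    print(pre_moderate(spam))  # False -> rejected before going live
```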

The power of AI

Content moderation is increasingly turning to technology to tackle the scourge of inappropriate user-generated content, with robotic process automation and artificial intelligence able to provide companies with cost and time-effective measures to filter unacceptable material.

This is particularly so in the pre-moderation stage, where techniques such as ‘hash matching’ (comparing a digital fingerprint of an image against a database of known harmful images) and ‘keyword filtering’ (flagging posts that contain words associated with potentially harmful content) surface material for human review. With human moderators at significant risk of exposure to harmful content, AI is also proving useful in limiting the material they need to view directly by prioritizing content for review based on perceived harm or level of uncertainty.
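
As a rough illustration of how hash matching and review prioritization can fit together, the sketch below fingerprints an uploaded image, checks it against a set of known harmful hashes, and assigns a priority score so human moderators see the most concerning items first. The hash set, scoring rules and function names are assumptions for illustration only; real systems typically use perceptual hashes that tolerate resizing and re-encoding rather than the exact SHA-256 comparison shown here.

```python
import hashlib

# Placeholder hash set; a real deployment would draw on a vetted industry
# database and use perceptual hashing rather than exact SHA-256 digests.
KNOWN_HARMFUL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest acting as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()


def review_priority(image_bytes: bytes, keyword_hits: int) -> int:
    """Score an upload so the most concerning items reach human moderators first.

    A hash match is treated as near-certain harm; otherwise the score grows
    with the number of flagged keywords found in the accompanying text.
    """
    if fingerprint(image_bytes) in KNOWN_HARMFUL_HASHES:
        return 100  # escalate straight to the top of the review queue
    return min(keyword_hits * 10, 90)


# Example: an unknown image whose caption tripped two keyword filters
print(review_priority(b"\x89PNG...fake image bytes", keyword_hits=2))  # -> 20
```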

For all its benefits, experts acknowledge that AI is not a silver bullet and that the human factor remains a vital component of any content moderation strategy, which is why many companies are choosing to blend the best of both worlds when tackling this complex issue.

Outsourced content moderators

As more businesses drown under the weight of user-generated content, many are turning to skilled content moderators in countries such as the Philippines. The outsourcing sector has a ready-made workforce of senior, intermediate and junior moderators who are highly trained in responding to user comments, identifying inappropriate material, and removing spam, offensive posts and abusive language. Most importantly, social media content moderators have a vested interest in maintaining the reputation of their clients’ businesses and promoting strong public images of their brands.

Outsourced content moderators also have access to software programs that help them find and flag harmful content. From Microsoft Azure and WebPurify to CrowdSource and Inversoft, they are able to work hand-in-hand with automated moderation systems to deliver the best results for onshore businesses. Better still, low living costs in outsourcing hubs such as the Philippines mean companies can employ moderators for up to 70% less than they would pay in their local employment markets.

While user-generated content cannot be allowed to go unchecked, the online space undoubtedly presents endless opportunities for savvy businesses. Having gained a better understanding of the world of content moderation, why not also learn how to build a top-notch omnichannel experience for your customers?
