YouTube Combats Misinformation

Over the past five years, YouTube has invested heavily in a framework it calls the 4Rs of Responsibility as a core tenet of its commitment to protect its community. Using a combination of machine learning and people, its focus is on removing violative content quickly, raising up authoritative sources, and reducing the spread of problematic content. Working together, these tools have been pivotal in keeping views of bad content low while preserving free expression on the YouTube platform. And yet, as misinformation narratives emerge faster and spread more widely than ever, the company's approach needs to evolve to keep pace, according to its press release.

Right now, YouTube is focusing on the following three challenges:

  1. Catching new misinformation before it goes viral: For quite some time, the online misinformation landscape was dominated by a few main narratives, and the YouTube team was able to train its machine learning systems to reduce recommendations of those videos and similar ones based on patterns in that type of content. However, not every fast-moving narrative in the future will have expert guidance that can inform YouTube's policies, so the company needs ways to catch emerging narratives its main classifier has never seen.

    To address this, YouTube is continuously training its systems on new data and looking to leverage an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives its main classifier doesn't catch; a sketch of how such signals might be combined appears after this list. Over time, this should make YouTube faster and more accurate at catching these viral misinformation narratives.

  2. Addressing cross-platform sharing of misinformation: Another challenge is the spread of borderline videos outside of YouTube – videos that don't quite cross the line of its policies for removal, but that it doesn't necessarily want to recommend to people. Even if YouTube isn't recommending a certain borderline video, it may still get views through other websites that link to or embed it. One possible way to address this is to disable the share button or break the link on videos that are already limited in recommendations. Another option is to surface an interstitial that appears before a viewer can watch a borderline embedded or linked video, letting the viewer know the content may contain misinformation; a minimal sketch of such a gate appears after this list. YouTube will continue to carefully explore different options to make sure it limits the spread of harmful misinformation across the internet.

  3. Ramping up misinformation efforts around the world: Beyond growing its teams with even more people who understand the native languages and regional nuances entwined with misinformation, YouTube is exploring further investments in partnerships with experts and non-governmental organizations around the world. Similar to its approach with new viral topics, it is working on ways to update models more often to catch hyperlocal misinformation, with the capability to support local languages.
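
As a concrete illustration of the first challenge, the following Python sketch shows how a borderline score from a main classifier might be blended with per-language keyword lists and manual flags from regional analysts. It is a minimal, hypothetical sketch: the names, thresholds, and `needs_review` logic are invented for illustration and do not describe YouTube's actual systems.

```python
# Hypothetical sketch: none of these names, thresholds, or signals come
# from YouTube's actual systems; they are invented for illustration.
from dataclasses import dataclass

# Per-language keyword watchlists, maintained in this sketch by regional
# analysts tracking emerging local narratives.
WATCHLIST_KEYWORDS = {
    "en": {"miracle cure", "stolen ballots"},
    "de": {"wunderheilung"},
}

@dataclass
class VideoSignals:
    video_id: str
    main_score: float              # 0.0-1.0 confidence from the main classifier
    title: str
    language: str
    analyst_flagged: bool = False  # manual flag from a regional analyst

def keyword_hits(v: VideoSignals) -> int:
    """Count watchlist phrases for the video's language found in its title."""
    phrases = WATCHLIST_KEYWORDS.get(v.language, set())
    title = v.title.lower()
    return sum(1 for phrase in phrases if phrase in title)

def needs_review(v: VideoSignals) -> bool:
    """Escalate when one signal is strong, or when weak signals agree."""
    if v.main_score >= 0.9:        # main classifier alone is confident
        return True
    if v.analyst_flagged:          # human judgment escalates directly
        return True
    # A borderline score plus keyword evidence also escalates, catching
    # narratives the main classifier was never trained on.
    return v.main_score >= 0.5 and keyword_hits(v) > 0

video = VideoSignals("abc123", 0.62, "The Miracle Cure THEY hide", "en")
print(needs_review(video))  # True: borderline score plus a keyword match
```

The design point is simply that several weak signals in agreement can escalate a video that no single classifier would catch on its own.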
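
For the cross-platform challenge, this second sketch illustrates the options the post describes for embedded or linked borderline videos: breaking the link, disabling sharing, or showing a warning interstitial before playback. Again, the `Status` enum and `handle_embedded_request` function are assumed names for illustration, not part of any real YouTube API.

```python
# Hypothetical sketch: Status and handle_embedded_request are invented
# names, not part of any real YouTube API.
from enum import Enum, auto

class Status(Enum):
    OK = auto()          # freely recommendable
    BORDERLINE = auto()  # limited in recommendations, but not removed
    REMOVED = auto()     # violates policy

def handle_embedded_request(status: Status, acknowledged_warning: bool) -> str:
    """Decide what a viewer arriving via an embed or external link sees."""
    if status is Status.REMOVED:
        return "unavailable"         # removal already breaks the link
    if status is Status.BORDERLINE:
        if not acknowledged_warning:
            # Interstitial: warn that the content may contain misinformation
            # before any playback starts.
            return "show_interstitial"
        # After the warning, play but keep the share button disabled so the
        # video does not spread further from this page.
        return "play_without_share"
    return "play"

print(handle_embedded_request(Status.BORDERLINE, acknowledged_warning=False))
# -> show_interstitial
```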


Since 2018, YouTube has implemented forty-eight updates to its enforcement guidelines or policies, and it emphasized in its latest blog post that it will continue to build on its work to reduce harmful misinformation across all its products and policies while allowing a diverse range of voices to thrive.

“We recognize that we may not have all the answers, but we think it’s important to share the questions and issues we’re thinking through. There has never been a more urgent time to advance our work for the safety and wellbeing of our community,” the YouTube team concludes.

By MediaBUZZ