YouTube has spent years operating like many other tech platforms: Hear no evil, see no evil. That's no longer tenable.
The Google-owned online video giant in recent weeks has been thrust to the front of the broader discussion about the role tech platforms have in regulating what is put onto their networks—and who it's reaching.
Facebook, Twitter, and others have faced similar challenges, but none appear as daunting as YouTube's. The platform relies on an automated system to show ads against whatever videos are uploaded—and it has no good answers for the many questions now being asked about its ability to effectively moderate that system.
What is clear, however, is that YouTube has evolved. A YouTube spokesperson said the company's community guidelines have always been evolving; with recent stories detailing the varied types of questionable content on the platform, those guidelines seem to be changing on a near-daily basis.
Some of the most damaging revelations have involved content targeted at children, even in the supposed safe space of YouTube Kids. Mashable reporter Brian Koerber delved deep into the world of disturbing videos featuring children's characters—such as a video in which Elsa, Spider-Man, and other cartoon characters shoot people with automatic weapons. The Times of London reported on big-brand advertising running alongside videos of barely dressed children, accompanied by unmoderated comments that were sexual in nature.
Even more recently, BuzzFeed showed that the problem extends to YouTube's search autocomplete. Typing "how to have" returned suggestions such as "how to have s*x with your kids," apparently the result of a troll campaign to game the system.
Then there's the fake news. YouTube's role in hosting and spreading misinformation and propaganda has received less attention than Facebook's or Twitter's, but the platform's problem is on the same scale. YouTube plays a central role in far-right media, as shown by an analysis from Jonathan Albright, research director of the Tow Center for Digital Journalism. Albright also found plenty of auto-generated fake news on YouTube. The company has begun to take steps to address this, including removing Russia's RT from its preferred advertising program.
RT is widely considered an arm of the Russian government, tasked with pushing the country's perspective. YouTube itself had been shown to have helped build RT's channel into one of the biggest news-related operations on the platform.
And there's also the extremists. YouTube has been criticized for years over its willingness to host videos of extremist Muslim clerics attempting to recruit people to jihadist movements. Then, something changed. YouTube removed tens of thousands of videos of Anwar al-Awlaki, a well-known, English-speaking cleric who was killed in a drone strike six years ago.
The through line here is that none of these issues are terribly new. YouTube has been dealing with problematic content on its platform since just about its first days. Just what the company is supposed to do—and what it's capable of doing—have been up for debate for just as long. Sometimes it's proactive, as when it built Content ID to address the many hours of movie and TV content that violated copyright protections. Even that hasn't been without issues, though, as video creators have taken issue with how the system tends not to account for fair use.
But more generally, YouTube's stance has been similar to that of other platforms, including Facebook and Twitter: that its role was not to police what was uploaded unless it was so clearly over the line as to be illegal. Even then, YouTube is largely shielded from liability for illegal content by the Communications Decency Act, which broadly excuses digital platforms from responsibility for user-submitted content they don't review prior to publication.
That meant plenty of questionable content—and communities that formed around it. John Herrman at the New York Times has covered this extensively, in particular how elements of the alt-right have coalesced on YouTube. Meanwhile, YouTube was busy incubating its growing crop of "creators," usually lifestyle, gaming, or entertainment-focused personalities who made the platform a destination for youngsters.
Whatever YouTube's awareness of some of the extremely questionable content on its platform, it seems clear that the company did not seriously consider addressing it. As YouTube, Facebook, Twitter, Reddit, and a variety of other platforms turned into global phenomena, a narrative emerged that hosting a certain amount of particularly gross stuff was simply table stakes. Nearly every platform embraced some form of free speech advocacy.
Things started to shift about two years ago, and surprisingly enough, it was Reddit that first showed signs of change. Once a bastion of "anything goes" moderation, Reddit was home to some of the vilest stuff on the internet. Efforts to section it off weren't working. So one day, Reddit changed. It banned five subreddits—and has banned plenty more since.
Many other platforms have followed suit in their own way. Twitter has had a similar evolution, most recently putting new rules in place that allow it to ban users for affiliating with hate groups. Facebook has been slower, though it's been instituting some new rules to prohibit use of its advertising system to spread propaganda.
YouTube, however, was the slowest—and it's easy to understand why. YouTube's business is to put ads alongside videos. More videos generally mean more ads and more money. Putting serious limitations on what can be uploaded or what can be monetized runs counter to the platform's business. It's also difficult in practice: moderating the sheer volume of video uploaded to YouTube is an enormous challenge.
YouTube is banking hard on artificial intelligence (in particular machine learning) to help solve this problem. The company's efforts to find and remove extremist videos have reportedly seen some success, providing a way to surface content quickly and get it in front of human moderators for a final decision, a YouTube spokesperson said.
Those systems, however, require a lot of training and remain a ways from being able to parse why a video of a children's character killing people would be disturbing, leaving the company to rely on users flagging videos and human moderators. Meanwhile, the automated rules it has put in place routinely frustrate legitimate video creators who find themselves unable to monetize their videos because they accidentally ran afoul of YouTube's system.
This all leaves YouTube in a tough spot, maybe even tougher than Facebook, Twitter, Reddit, and the rest. The threat that YouTube faces is so core to its business that it's not readily apparent how the platform can adequately deal with these issues without alienating a large swath of the people who feed it video.
This is one main reason that YouTube has been pushing into original content and a TV service—there is no good fix for this existential problem. YouTube's current business necessitates that it host as much content as possible while playing whack-a-mole with the bad stuff that stokes the public's ire. The TV bundles and original digital content it has a hand in don't have that problem, but they are also monetized through subscriptions rather than ads. (YouTube doesn't run ads on YouTube Red or on its TV bundle itself, but expect to see commercials on the bundled channels, just like cable.)
This takes out the You and leaves the Tube. And maybe that's where this company should go.
Editor's note: This piece has been updated to clarify statements from a YouTube spokesperson.