Why Isn't Susan Wojcicki Getting Grilled By Congress?

YouTube is a major vector for election and other disinformation. But its CEO isn't with Mark Zuckerberg and Jack Dorsey on Capitol Hill today. 

There are many important questions that could be asked at the Senate Judiciary Committee hearing with tech CEOs today regarding their handling of the 2020 US election. Foremost among them should be “Where is Susan Wojcicki, YouTube’s CEO?” The election was billed as a major test for social media platforms, but it’s one that YouTube failed weeks before Election Day. The platform is playing host to, and is an important vector for the spread of, false claims of election victory and attempts to delegitimize Biden’s win. YouTube had to have seen it all coming, and it shrugged. That’s YouTube’s fault—but it’s also a result of the success of its broader strategy to keep its head down and let other platforms be the face of the content moderation wars. In general, the media, researchers, and lawmakers have let this strategy work.

YouTube is one of the biggest social media platforms in the country—indeed, a Pew Research Center survey last year found it was the most widely used. Over a quarter of Americans get news from the platform. Videos making false claims about election results or voter fraud have drawn millions of views on YouTube since Election Day, with nothing more than a small, uniform label about election results attached beneath them. And yet, the platform often escapes scrutiny. Judging by much of the press coverage and public outrage about the role that social media platforms play in the modern information ecosystem, one could be forgiven for thinking that Facebook and Twitter were the only major sources of online information. This disproportionate focus goes beyond public narratives: While Facebook and Twitter CEOs Mark Zuckerberg and Jack Dorsey have been repeatedly hauled before Congress in the past few years, YouTube CEO Susan Wojcicki has escaped summons. (Sundar Pichai, CEO of YouTube’s parent company, Google, has appeared, but there’s enough to scrutinize about Google that its subsidiary rarely gets much attention.)

YouTube’s general public relations and governance strategy over the years has been to stay opaque, keep its head down, keep quiet, and let the other platforms take the heat. It’s generally gotten away with it. In the aftermath of the election, there has been a wave of stories about how YouTube has failed to rise to the challenge that the moment posed for internet platforms. But this failure was foreseeable weeks ago, when YouTube refused to adequately prepare for what was coming, instead continuing its general strategy of just not drawing attention to itself.

This failure was foreseeable the week before the election, when YouTube told The New York Times that it expected “most decisions to keep or remove videos will be clear and that the usual processes for making those decisions will be sufficient”—appearing to be the only people in the country who thought this was just a very normal election in general, and for content moderation in particular.

It was foreseeable seven weeks ago, when researchers started to raise alarms that YouTube did not have policies in place to deal with the predictable outcome that its platform would be used to spread false claims of election victory and to delegitimize the results of the election.

It was foreseeable on May 20 when—weeks after the other major platforms and months after the start of the pandemic—YouTube finally released a specific Covid-19 Medical Misinformation Policy so that users and the public knew what content related to the pandemic it considered harmful and subject to removal. (YouTube did, in March, make an announcement about the coronavirus in which it said it was working to prevent misinformation and would continue to quickly remove videos that violated its existing policies on harmful content.)

It was foreseeable 17 months ago, when the platform found itself involved in a rare, high-profile controversy sparked by a Vox journalist’s complaints that he had been the subject of an extended harassment campaign by a high-profile right-wing commentator on its platform. YouTube responded with conclusory and inexplicably fluctuating statements.

If there is a singular moment that defines YouTube’s intentional opacity and the lack of accountability this facilitates, perhaps it was in 2018, when Google (and therefore YouTube) provided the most limited data set of the three companies to the independent researchers tasked by the Senate Select Committee on Intelligence with preparing reports analyzing the nature and extent of Russian interference in the 2016 US election. Our collective lack of insight into what is happening on the platform in the four years since has been an ongoing echo of that moment.

And yet, by and large, YouTube’s game plan of giving less to scrutinize has worked. Why?

In part, the problem is practical and technical. It is much harder—and more time-consuming—to search and analyze audio and video content than it is text. In part, it is an audience problem: The people who write about and research platforms tend to live on Twitter (and, to a lesser extent, Facebook). Perhaps the problem is also a product of unconscious bias, with academics and journalists over-indexing on the importance of the written word. It’s certainly a generational problem: Users of YouTube, and of other platforms that focus on video content, like TikTok or Twitch, tend to be younger. Fundamentally, it’s also a storytelling problem: It’s simply harder to write a captivating story about a platform’s failure to take action or release a policy than it is to write about a platform that releases one. That is, until the results of failing to have a policy become all too clear, as they have for YouTube since Election Day. I am guilty of all these biases, and they are evident in my work too. But to solve the challenges posed by content moderation and its governance, the focus must extend beyond the problems that are easier to write about. Opacity should not be so rewarded.

The YouTube problem is not just a problem with YouTube. It’s also indicative of a broader truth: In general, researchers, lawmakers, and journalists focus on the problems that are most visible and tractable, even if they are not necessarily the only important ones. As more content moves from the biggest “mainstream” platforms to smaller ones—perhaps precisely because they have more lax content moderation standards—this will be an increasingly common challenge. Likewise, as platforms and users create more “private” or “disappearing” content, it will be harder to track. This does not mean social media will not still have all the usual problems—hate speech, disinformation, misinformation, incitement to violence—that always exist where people create content online.

This is not a call for a swath of new policies banning any and all false political content (whatever that would mean). In general, I favor intermediate measures like aggressive labeling, de-amplification, and increased friction for users sharing it further. But most of all, I favor platforms taking responsibility for the role they play in our information ecosystem, thinking ahead, being transparent, explaining their content moderation choices, and showing how they have been enforced. Clear policies, announced in advance, are an important part of platform governance: Content moderation must not only be done, but it must be seen to be legitimate and understood.

YouTube did append a small label to videos about election results stating that “The AP has called the Presidential race for Joe Biden.” Whether or not this is adequate, the lack of transparency and advance specifics about its plans (compared to the other major platforms) is inexplicable. (The most detail YouTube provided was an announcement at the end of October saying that it would add an information panel to election-related searches and videos, with a note that election results may not be final and a link to Google’s information on election results.) This ad hoc approach creates an opening for speculation that its actions are influenced by political outcomes, rather than by objective criteria it laid out beforehand. YouTube’s role in modern public discourse is important enough that it needs to do better than complacent reassurances that “our systems are generally working as intended.”

About a week after the 2016 election, Mark Zuckerberg famously declared that the idea that fake news on Facebook influenced the election was “crazy.” In the run-up to 2020, by contrast, Facebook engaged in a sweeping public relations campaign to convince people of how very seriously it was taking its preparations for the 2020 election and the protection of election integrity. It’s possible that 2020 will be YouTube’s 2016, the moment it is finally forced to provide transparency and accountability for its role in the modern information ecosystem. There’s an obvious place to start: Wojcicki should be called to the Hill next time there is a hearing with tech CEOs. There is no question Jack Dorsey or Mark Zuckerberg should have to answer today that she shouldn’t.

Updated 11-20-2020, 6pm EST: This story was updated to clarify that YouTube made an announcement in March regarding the coronavirus, and one in October regarding election day plans.



