“We will continue to face scrutiny – some of it fair and some of it unfair,” he said in the memo. “But we should also continue to hold our heads up high.”

Here is Mr. Clegg’s memo in full:


You will have seen the series of articles about us published in the Wall Street Journal in recent days, and the public interest it has provoked. This Sunday night, the former employee who leaked internal company material to the Journal will appear in a segment on 60 Minutes on CBS. We understand the piece is likely to assert that we contribute to polarization in the United States, and suggest that the extraordinary steps we took for the 2020 elections were relaxed too soon and contributed to the horrific events of January 6th in the Capitol.

I know some of you – especially those of you in the US – are going to get questions from friends and family about these things, so I wanted to take a moment as we head into the weekend to provide what I hope is some useful context on our work in these crucial areas.

Facebook and Polarization

People are understandably anxious about the divisions in society and looking for answers and ways to fix the problems. Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out. So it’s natural for people to ask whether it is part of the problem. But the idea that Facebook is the chief cause of polarization isn’t supported by the facts – as Chris and Pratiti set out in their note on the issue earlier this year.

The rise of polarization has been the subject of swathes of serious academic research in recent years. In truth, there isn’t a great deal of consensus. But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.

The increase in political polarization in the US pre-dates social media by several decades. If it were true that Facebook is the chief cause of polarization, we would expect to see it going up wherever Facebook is popular. It isn’t. In fact, polarization has gone down in a number of countries with high social media use at the same time that it has risen in the US.

Specifically, we expect the reporting to suggest that a change to Facebook’s News Feed ranking algorithm was responsible for elevating polarizing content on the platform. In January 2018, we made ranking changes to promote Meaningful Social Interactions (MSI) – so that you would see more content from friends, family and groups you are part of in your News Feed. This change was heavily driven by internal and external research showing that meaningful engagement with friends and family on our platform was better for people’s wellbeing, and we have further refined and improved it over time, as we do with all ranking metrics. Of course, everyone has a rogue uncle or an old classmate who holds strong or extreme views we disagree with – that’s life – and the change meant you are more likely to come across their posts too. Even so, we have developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.

But the simple fact remains that changes to algorithmic ranking systems on one social media platform cannot explain wider societal polarization. Indeed, polarizing content and misinformation are also present on platforms that have no algorithmic ranking at all, including private messaging apps like iMessage and WhatsApp.

Elections and Democracy

There is perhaps no other topic we have been more vocal about as a company than our work to dramatically change the way we approach elections. Starting in 2017, we began building new defenses, bringing in new expertise, and strengthening our policies to prevent interference. Today, we have more than 40,000 people across the company working on safety and security.

Since 2017, we have disrupted and removed more than 150 covert influence operations, including ahead of major democratic elections. In 2020 alone, we removed more than 5 billion fake accounts – identifying almost all of them before anyone flagged them to us. And, from March to Election Day, we removed more than 265,000 pieces of Facebook and Instagram content in the US for violating our voter interference policies.

Given the extraordinary circumstances of holding a contentious election in a pandemic, we implemented so-called “break glass” measures – and spoke publicly about them – before and after Election Day to respond to specific and unusual signals we were seeing on our platform, and to keep potentially violating content from spreading before our content reviewers could assess it against our policies.

These measures were not without trade-offs – they are blunt instruments designed to deal with specific crisis scenarios. It’s like shutting down an entire town’s roads and highways in response to a temporary threat that may be lurking somewhere in a particular neighborhood. In implementing them, we know we impacted significant amounts of content that did not violate our rules, in order to prioritize people’s safety during a period of extreme uncertainty. For example, we restricted the distribution of live videos that our systems predicted might relate to the election. That was an extreme step that helped prevent potentially violating content from going viral, but it also impacted a lot of entirely normal and reasonable content, including some that had nothing to do with the election. We wouldn’t take this kind of crude, catch-all measure in normal circumstances, but these weren’t normal circumstances.

We only rolled back these emergency measures – based on careful data-driven analysis – when we saw a return to more normal conditions. We left some of them in place for a longer period, through February this year, and others, like not recommending civic, political or new Groups, we have decided to retain permanently.

Fighting Hate Groups and other Dangerous Organizations

I want to be absolutely clear: we work to limit, not expand, hate speech, and we have clear policies prohibiting content that incites violence. We do not profit from polarization – in fact, quite the opposite. We do not allow dangerous organizations, including militarized social movements or violence-inducing conspiracy networks, to organize on our platforms. And we remove content that praises or supports hate groups, terrorist organizations and criminal groups.

We have been more aggressive than any other internet company in combating harmful content, including content that sought to delegitimize the election. But our work to crack down on these hate groups was years in the making. We took down tens of thousands of QAnon pages, groups and accounts from our apps, removed the original #StopTheSteal Group, and removed references to Stop the Steal in the run-up to the inauguration. In 2020 alone, we removed more than 30 million pieces of content violating our policies regarding terrorism and more than 19 million pieces of content violating our policies around organized hate. We designated the Proud Boys as a hate group in 2018 and we continue to remove praise, support, and representation of them. Between August last year and January 12 this year, we identified nearly 900 militia organizations under our Dangerous Organizations and Individuals policy and removed thousands of Pages, groups, events, Facebook profiles and Instagram accounts associated with those groups.

This work will never be complete. There will always be new threats and new problems to address, in the US and around the world. That’s why we remain vigilant and alert – and always will have to.

That is also why the suggestion that is sometimes made – that the violent insurrection on January 6 would not have occurred if not for social media – is so misleading. To be clear, the responsibility for those events rests squarely with the perpetrators of the violence, and those in politics and elsewhere who actively encouraged them. Mature democracies in which social media use is widespread hold elections all the time – for instance Germany’s election last week – without the disfiguring presence of violence. We actively share with law enforcement material that we can find on our services related to these traumatic events. But reducing the complex causes of polarization in America – or of the insurrection specifically – to a technological explanation is woefully simplistic.

We will continue to face scrutiny – some of it fair and some of it unfair. We will continue to be asked difficult questions. And many people will continue to be skeptical of our motives. That’s what comes with being part of a company that has a significant impact on the world. We have to be humble enough to accept criticism when it is fair, and to make changes where they are justified. We aren’t perfect and we don’t have all the answers. That’s why we do the sort of research that has been the subject of these stories in the first place. And we will keep looking for ways to respond to the feedback we hear from our users, including testing ways to make sure political content doesn’t take over their News Feeds.

But we should also continue to hold our heads up high. You and your teams do incredible work. Our tools and products have a hugely positive impact on the world and on people’s lives. And you have every reason to be proud of that work.