Facebook whistleblower docs reveal it 'has known for YEARS' that it fails to stop hate speech and is unpopular among youth but 'lies to investors': Apple threatened to remove app over human trafficking and staff failed to see Jan 6 riot coming

  • The damning documents were supplied to the media by whistleblower Frances Haugen, a former FB manager
  • They comprise internal research that shows how Facebook ignored staff concerns about its practices
  • Among the complaints is the charge that it ignored warnings about hate speech and human trafficking
  • Apple threatened to remove Facebook and Instagram from its App Store over the trafficking of Filipina maids in the Middle East
  • The documents also show how Facebook staff failed to see the January 6 riot coming
  • Its staff were monitoring individual accounts and groups but didn't piece together the wider movement
  • Haugen testified to Congress about her concerns; on Monday, she spoke before the British parliament
  • Facebook has denied any wrongdoing, specifically the allegation that it prioritized profit over staff concerns
  • The beleaguered company is considering a rebrand after a disastrous period of endless scandals  


A trove of documents from Facebook whistleblower Frances Haugen details how the beleaguered tech firm ignored internal complaints from staff for years in order to put profits first, 'lie' to investors and shield CEO Mark Zuckerberg from public scrutiny.

The documents were reported on in depth on Monday morning under an agreement among a consortium of media organizations, as Haugen testified before the British Parliament about her concerns.

They are the latest and most devastating blow to the company, which has resisted calls to break up or disband for the last several years amid growing fears over its size and power.

The documents show how staff complained to Facebook executives about the company's collective failure to anticipate the January 6 riot, how staff worried about the lack of policing on hate speech, and how the product was becoming less popular among young people. 

They reinforce previous complaints that CEO Mark Zuckerberg values size and power above all else, even as he struggles to retain control of the gargantuan network.

Facebook says the documents have been taken out of context and are part of a 'game of gotcha' by the media.   

As the documents emerged on Monday, Haugen told British lawmakers that she is 'extremely concerned' about how Facebook ranks content based on 'engagement', saying it fuels hate speech and extremism, particularly in non-English-speaking countries. 

Some of the most explosive claims in the papers include:

  • Facebook staff have reported for years that they are concerned about the company's failure to police hate speech 
  • That Facebook executives knew it was becoming less popular among young people but shielded the numbers from investors 
  • That staff failed to anticipate the disastrous January 6 Capitol riot despite monitoring a range of individual right-wing accounts
  • On an internal messaging board that day, staff said: 'We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control'
  • Apple threatened to remove the app from the App Store over how it failed to police the trafficking of Filipina maids in the Middle East
  • Mark Zuckerberg's public comments about the company are often at odds with internal messaging 

Some of the most damning comments were posted on January 6, the day of the Capitol riot, when staff told Zuckerberg and other executives on an internal messaging board that they blamed themselves for the violence. 

The documents are among a cache of disclosures made to the US Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, shown right, testifying in front of Congress on October 5

'One of the darkest days in the history of democracy and self-governance. History will not judge us kindly,' said one worker, while another said: 'We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.'

The mountain of crises that has buried the company over the last few years has prompted some to demand that it rebrand and change its name.

One of its most recent disasters was a technical failure that brought its entire network down around the world for several hours, costing businesses billions and throwing into stark relief just how much the world relies on the company to communicate.

Facebook has repeatedly resisted calls to break its products up and says it should be able to police itself.  

On Monday, tech experts said the revelations from the papers show Zuckerberg's relentless ambition.

'Ultimately, it rests with Mark and whatever his prerogative is - and it has always been to grow, to increase his power and his reach,' said Jennifer Grygiel, a Syracuse University communications professor who has followed Facebook closely for years.

FACEBOOK FAILED TO ANTICIPATE OR PREVENT CAPITOL RIOT BECAUSE OF 'PIECEMEAL APPROACH' 

An internal Facebook report following Jan. 6, previously reported by BuzzFeed, faulted the company for having a 'piecemeal' approach to the rapid growth of 'Stop the Steal' pages, related misinformation sources, and violent and inciteful comments.

'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control'

Facebook says the situation is more nuanced and that it carefully calibrates its controls to react quickly to spikes in hateful and violent content, as it did on Jan. 6.

The company said it's not responsible for the actions of the rioters and that having stricter controls in place prior to that day wouldn't have helped.

Facebook's decisions to phase certain safety measures in or out took into account signals from the Facebook platform as well as information from law enforcement, said spokeswoman Dani Lever. 'When those signals changed, so did the measures.'

Lever said some of the measures stayed in place well into February and others remain active today.

Some employees were unhappy with Facebook's handling of problematic content even before the Jan. 6 riot.

One employee who departed the company in 2020 left a long note charging that promising new tools, backed by strong research, were being constrained by Facebook over 'fears of public and policy stakeholder responses'.

Facebook says the documents have been taken out of context and are part of an 'orchestrated "gotcha" campaign'

'Haven't we had enough time to figure out how to manage discourse without enabling violence?' one employee wrote on an internal message board at the height of the Jan. 6 turmoil.

'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control.'

What Facebook called 'Break the Glass' emergency measures put in place on Jan. 6 were essentially a toolkit of options designed to stem the spread of dangerous or violent content that the social network had first used in the run-up to the bitter 2020 election. As many as 22 of those measures were rolled back at some point after the election, according to an internal spreadsheet analyzing the company's response.

'As soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety,' Haugen said in an interview with '60 Minutes.'
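How such a rollback works can be pictured schematically. The sketch below is a hypothetical feature-flag toggle - the flag names are invented and this is not Facebook's actual tooling - illustrating how a bundle of emergency measures can be switched on together and then switched off again:

```python
# Hypothetical illustration of a 'Break the Glass'-style toolkit:
# a bundle of emergency safety measures managed as on/off flags.
# Flag names are invented for this sketch and are not Facebook's.

BREAK_THE_GLASS = {
    "demote_reshared_misinfo": False,
    "limit_group_invites": False,
    "freeze_civic_group_recommendations": False,
}

def activate(flags: dict) -> None:
    """Switch every emergency measure on, e.g. in the run-up to an election."""
    for name in flags:
        flags[name] = True

def roll_back(flags: dict, keep: frozenset = frozenset()) -> None:
    """Switch measures off again, except any explicitly kept in place."""
    for name in flags:
        if name not in keep:
            flags[name] = False

activate(BREAK_THE_GLASS)    # run-up to the 2020 vote
roll_back(BREAK_THE_GLASS)   # settings return to growth-oriented defaults
print(BREAK_THE_GLASS)       # all measures now off
```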

Research conducted by Facebook well before the 2020 campaign left little doubt that its algorithm could pose a serious danger of spreading misinformation and potentially radicalizing users.

One 2019 study, entitled 'Carol's Journey to QAnon - A Test User Study of Misinfo & Polarization Risks Encountered through Recommendation Systems,' described the results of an experiment conducted with a test account set up to reflect the views of a prototypical 'strong conservative' - but not extremist - 41-year-old North Carolina woman.

'MAKING HATE WORSE': FLAWED AI LEADS PEOPLE INTO CONSPIRACY THEORIES AND EXTREMIST CONTENT

One of the most urgent complaints is that Facebook drives hate with algorithms that direct people to content that they are most likely to engage with, often spurring extremism or hate speech. 
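The mechanism at issue can be sketched in a few lines. The example below is a simplified, hypothetical model with invented weights - it is not Facebook's actual code - showing why a feed sorted purely by predicted engagement tends to push the most provocative material to the top:

```python
# Simplified, hypothetical sketch of engagement-based ranking.
# The weights and fields are invented; this is not Facebook's algorithm,
# only an illustration of why engagement-sorted feeds favor provocative posts.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float
    predicted_comments: float
    predicted_reshares: float

def engagement_score(post: Post) -> float:
    # Comments and reshares weighted above likes, so posts that provoke
    # arguments and sharing score higher than calm, informative ones.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_reshares)

def rank_feed(posts: list) -> list:
    # Highest predicted engagement is shown first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm local news update", 120, 4, 2),
    Post("Outrage-bait conspiracy claim", 60, 80, 40),
])
print([p.text for p in feed])  # the provocative post ranks first
```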

Haugen testified about it on Monday before the British Parliament, saying the company would rather hold on to profits greedily than sacrifice even 'a sliver' for the greater good. 

One example in the papers that highlights her concerns is a study of three test accounts that Facebook ran to examine how people were exposed to content through the News Feed.

Facebook whistleblower Frances Haugen testifying before British lawmakers on Monday about her concerns over the tech giant's power in the tech and telecoms space

The document is titled 'Carol's Journey to QAnon'.

This test account, using the fake name Carol Smith, indicated a preference for mainstream news sources like Fox News, followed humor groups that mocked liberals, embraced Christianity and was a fan of Melania Trump.

Within a single day, page recommendations for this account generated by Facebook itself had evolved to a 'quite troubling, polarizing state,' the study found. By day two, the algorithm was recommending more extremist content, including a QAnon-linked group, which the fake user didn't join because she wasn't innately drawn to conspiracy theories.

In India, engineers carried out the same experiment and were shown photos of dead bodies and extreme violence. 

Facebook had cracked down on politically-driven hate speech or content before the November election, but it stopped monitoring it as closely afterwards, even as staff complained.

The documents say the reason was that Zuckerberg did not want to interfere with content that was being widely shared or interacted with because that is the most valuable to investors and advertisers. 

In a 2020 memo, one staffer described feedback from Zuckerberg that he did not want to start cutting content - even if it contained misinformation - if there was a 'material trade-off' with engagement.

Experts say it is a classic example of Facebook putting profits before moral responsibility. 

LANGUAGE GAPS MEAN FACEBOOK IS UNMONITORED IN LARGE PARTS OF THE WORLD

The failures to block hate speech in volatile regions such as Myanmar, the Middle East, Ethiopia and Vietnam could contribute to real-world violence, according to the documents. 

Zuckerberg 'personally decided' the company would agree to demands by the Vietnamese government to increase censorship of 'anti-state' posts

Mark Zuckerberg personally agreed to requests from Vietnam's ruling Communist Party to censor anti-government dissidents, insiders say.

Facebook was threatened with being kicked out of the country, where it earns $1billion in revenue annually, if it did not agree.

Zuckerberg, seen as a champion of free speech in the West for steadfastly refusing to remove dangerous content, agreed to Hanoi's demands.

Ahead of the Communist party congress in January, the Vietnamese government was given effective control of the social media platform as activists were silenced online, sources claim.

'Anti-state' posts were removed as Facebook allowed for the crackdown on dissidents of the regime. 

Facebook told the Washington Post the decision was justified 'to ensure our services remain available for millions of people who rely on them every day'. 

Meanwhile in Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya's persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. 

But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation's many dialects they covered.

Despite Facebook's public promises and many internal reports on the problems, the rights group Global Witness said the company's recommendation algorithm continued to amplify army propaganda and other content that breaches the company's Myanmar policies following a military coup in February. 


In a review posted to Facebook's internal message board last year regarding ways the company identifies abuses, one employee reported 'significant gaps' in certain at-risk countries. 

Among the weaknesses cited were a lack of screening algorithms for languages used in some of the countries Facebook has deemed most 'at-risk' for potential real-world harm and violence stemming from abuses on its site. 

In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar's Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. 

That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. 

Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook's former head of policy for the Middle East and North Africa, who left in 2017, said the company's approach to global growth has been 'colonial,' focused on monetization without safety measures.

More than 90 per cent of Facebook's monthly active users are outside the United States or Canada.

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. 

Machine-learning systems can detect such content with varying levels of accuracy. 

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook's automated content moderation, the documents provided to the government by Haugen show.  

In 2020, for example, the company did not have screening algorithms known as 'classifiers' to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.
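The effect of those missing 'classifiers' is easy to picture. The sketch below is a hypothetical illustration - the toy model and language codes are stand-ins, not Facebook's systems - of how a per-language screening setup silently skips any language it has no model for:

```python
# Hypothetical sketch of per-language screening with gaps.
# The toy model and language codes are stand-ins, not Facebook's systems;
# the point is that posts in uncovered languages are never scored at all.

from typing import Callable, Dict, Optional

def english_hate_speech_model(text: str) -> float:
    """Toy stand-in: returns a probability that the post is hate speech."""
    return 0.9 if "hate" in text.lower() else 0.1

CLASSIFIERS: Dict[str, Callable[[str], float]] = {
    "en": english_hate_speech_model,
    # No entries for e.g. Burmese, Oromo or Amharic, mirroring the gaps
    # described in the documents.
}

def screen_post(text: str, language: str, threshold: float = 0.8) -> str:
    model: Optional[Callable[[str], float]] = CLASSIFIERS.get(language)
    if model is None:
        # No classifier for this language: the post is never scored,
        # so automated moderation cannot flag it.
        return "unscreened"
    return "flagged" if model(text) >= threshold else "allowed"

print(screen_post("I hate this group", "en"))        # flagged
print(screen_post("equivalent abusive text", "my"))  # unscreened
```

Facebook's real models are far more sophisticated, but the structural point in the documents is the same: where no model exists for a language, the automated layer simply never sees the abuse.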

In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of 'fear-mongering, anti-Muslim narratives' spread on the site in India, including calls to oust the large minority Muslim population there. 

'Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,' the document said.  

Internal posts and comments by employees this year also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Facebook spokesperson Mavis Jones said the company added hate speech classifiers for Hindi in 2018 and Bengali in 2020, and classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

Facebook's human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages, the documents show. 

An undated document laid out how its content moderation operation struggled with Arabic-language dialects of multiple 'at-risk' countries, leaving it constantly 'playing catch up.' 

The document acknowledged that, even within its Arabic-speaking reviewers, 'Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.'

Jones said in a statement that the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues.

She said these teams are working to stop abuse on Facebook's platform in places where there is a heightened risk of conflict and violence.

'We know these challenges are real and we are proud of the work we've done to date,' Jones said. 

'LYING TO INVESTORS' ABOUT POPULARITY AMONG TEENS AND TOXICITY TO YOUNG GIRLS

In March, a group of researchers produced a report for Chief Product Officer Chris Cox which revealed that usage among teenagers in the US had slipped by 16 percent between 2020 and 2021 and that young adults (aged 18-29) were spending 5 percent less time on the app.

It also found that people were joining Facebook later - in their mid to late 20s - rather than in their teens, as they did when it first came out. 

The report told Cox why: young adults engage with Facebook far less often than their older cohorts, seeing it as an 'outdated network' with 'irrelevant content' that provides limited value for them, according to a November 2020 internal document.

It is 'boring, misleading and negative,' the report said. 

But Facebook didn't disclose that research to investors or the SEC, according to Haugen. 

She filed a complaint with the SEC, alleging that the company 'has misrepresented core metrics to investors and advertisers.' 

The above chart shows a number of trends highlighting Facebook's decrease in popularity among young users compared to older ones. One trend shows that the time spent on Facebook by U.S. teenagers was down 16% from 2020 to 2021 and young adults, between 18 and 29, were spending 5% less time on the app

The above chart, created in 2017, reveals that Facebook researchers knew for at least four years that the social network was losing steam among young people

The tech giant could be in violation of SEC rules as advertisers were allegedly duped by the lack of disclosure about Facebook's influence on teens. The above chart shows the decline in teen engagement since 2012/2013

This is some of the research Facebook was shown last March about how Instagram is harming young people

It was previously revealed in the papers that the company was warned of the negative effects Instagram was having on young people's mental health, but did nothing about it. 

One message posted on an internal message board in March 2020 said research into the app showed that 32 percent of girls said Instagram made them feel worse about their bodies when they were already feeling insecure about them.

Another slide, from a 2019 presentation, said: 'We make body image issues worse for one in three teen girls.  

'Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups.' 

Another presentation found that among teens who felt suicidal, 13% of British users and 6% of American users traced their suicidal feelings to Instagram.  

The research not only reaffirms what has been publicly acknowledged for years - that Instagram can harm a person's body image, especially if that person is young - but it confirms that Facebook management knew as much and was actively researching it. 

APPLE THREATENED TO REMOVE FACEBOOK AND INSTAGRAM OVER FILIPINA MAID TRAFFICKING IN MIDDLE EAST

The dramatic threat was revealed in internal documents disclosed by Facebook whistleblower Frances Haugen, which detail the misery both Facebook and its sister site Instagram are used to inflict on vulnerable women hired as live-in help.

The domestic workers are concentrated in Saudi Arabia, Egypt and Kuwait and mostly come from poorer countries in South Asia - mainly the Philippines - and Africa, according to reports and internal Facebook documents.

Apple backed down after Facebook disabled about 1,000 accounts advertising the women, often alongside videos and written biographies. Facebook knew about the issue a year before Apple's threat and even had a codename for it: 'HEx,' or human exploitation.

Maids have complained of being beaten, abused and having their passports confiscated while working in oil-rich Middle Eastern nations. In 2018, an abused Filipina was found dead in a refrigerator, prompting the Philippine government to ban prospective housekeepers from traveling there for work.

Apple said the proposed ban was over posts that traded and sold poor maids in the Middle East, which a UN official likened to an 'online slave market' in a 2019 BBC article. The above photo was included in the BBC article

Money sent back home by Filipino maids working abroad makes up around 10% of the Philippines' GDP, and that ban has since been lifted.

In a 2019 analysis, the social media giant said that being taken off the App Store would have had 'potentially severe consequences to the business'.

Three-quarters of posts selling maids were on Instagram, which is owned by Facebook, while Facebook itself was primarily used to link to outside websites, the company found.

Instagram workers who accessed maids' inboxes also unearthed troves of worrying messages, with some fearful of physical and sexual abuse, and others complaining of being locked in the home where they were working, and having their passports removed.

Facebook acknowledged that it was 'under-enforcing on confirmed abusive activity' that saw Filipina maids complaining on the social media site of being beaten and having their passports stolen.

But Facebook's crackdown seems to have had a limited effect.

Even today, a quick search for 'khadima,' or 'maids' in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images.

That's even as the Philippine government has a team of workers whose sole task is to scour Facebook posts each day to try to protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.

While the Mideast remains a crucial source of work for women in Asia and Africa hoping to provide for their families back home, Facebook acknowledged some countries across the region have 'especially egregious' human rights issues when it comes to laborers' protection.

'In our investigation, domestic workers frequently complained to their recruitment agencies of being locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent,' one Facebook document read. 'In response, agencies commonly told them to be more agreeable.'

The report added: 'We also found recruitment agencies dismissing more serious crimes, such as physical or sexual assault, rather than helping domestic workers.'  
