Tech giants beef up body to fight extremist content
Their consortium will be focused on preventing, responding to, and learning from “extremist” and “terrorist” content on digital platforms
Social media companies led by Facebook said Monday they were ramping up an industry body that aims to weed out extremist content, seeking to put procedures in place globally on how to handle crises. Facebook announced the additional efforts at the United Nations during a meeting with New Zealand's Prime Minister Jacinda Ardern, who has taken up the cause of fighting online extremism after a March massacre by a white supremacist at two mosques in Christchurch.
"We are trying to create a civil defense-style mechanism. The same way we respond to natural emergencies like fires and floods, we need to be prepared and ready to respond to a crisis like the one we experienced," she told reporters.
Facebook, Twitter, Microsoft and YouTube in 2017 formed the Global Internet Forum to Counter Terrorism (GIFCT), a vaguely conceived alliance tasked with tackling the most dangerous material on social media. Under Monday's announcement, the GIFCT will become an independent body with a dedicated staff led by an executive director. Amazon, LinkedIn, and WhatsApp will also be joining the forum.
In addition, while the industry will lead the forum's operating board, non-governmental groups will head an advisory board. The governments of the United States, France, Britain, Canada, New Zealand and Japan will also play an advisory role, along with UN and European Union experts.
Facebook said that the forum would fund research on how best to prevent incitements to violence online and how to reduce the effects on social media when attacks occur. The forum will still amount to a voluntary effort by tech companies to police themselves.
Government regulation is anathema to major US tech companies and their libertarian-minded philosophy, although a growing number of countries outside the West have sought to force social media platforms to censor unwanted content. Ardern indicated she had no intention of seeking new regulations, which she said made little sense when pursuing ideas such as steering social media users away from extremist material toward alternative, curated content.
"If we want the greatest gains, we actually need to collaborate," she said.
"There is nothing we had seen, even at this point several months on, that has ever suggested to me that any of these tech companies had an interest in providing a platform for hatred and violence," she said.
The news comes a week after digital rights group the Electronic Frontier Foundation (EFF) warned that the frantic push to crush “extremist” speech online will hurt innocent users the most. The EFF argued that because tech giants have consistently failed to explain how they define “extremist” or “terrorist” content, their approach creates the unintended consequence of censoring innocent users, such as those documenting human rights abuses.
The EFF’s warning about the lack of clarity when big tech companies attempt to moderate “extremist” or “terrorist” content is one of many recent concerns that have been raised about how the companies involved in GIFCT moderate content on their platforms.
Many of these concerns revolve around Facebook and the way bias and subjectivity often seep into its content moderation decisions. For example, Facebook recently admitted in court filings that it subjectively labels users as “dangerous” and then uses this opinion to ban users and their content from its platforms. Last week, Facebook CEO Mark Zuckerberg also admitted that there “clearly was bias” in a recent high profile “fact-check” on the platform.