By Imanol Ramírez*
This is a summary of the full article.
I. Introduction
There is increasing public pressure on internet companies to intervene aggressively through content moderation, particularly to tackle disinformation, harmful speech, copyright infringement, sexual abuse, automation and bias, and terrorism and violent extremism.[1] Events like Russian meddling in the 2016 United States presidential election,[2] the genocide of Rohingya Muslims in Myanmar in 2017,[3] and the livestreamed footage of the Christchurch terrorist attack in 2019,[4] among others, have fueled the push for more effective online content moderation across the world.
Governments are now requiring large digital platforms, such as Facebook, Twitter and YouTube, to act further and faster, with demands for increased use of technology in combating perceived harms.[5] The massive “infodemic” arising from the Covid-19 pandemic[6] and possible interference with the 2020 United States presidential election have intensified questions about the accountability of large tech platforms. This has made public actors more willing to interfere with the moderation policies of internet companies.[7]
While there are undeniably legitimate policy objectives for state intervention in online content moderation, there is a risk of an adverse effect on competition. Government regulation could end up benefiting large incumbents by raising the industry’s cost of doing business, since these companies are better positioned to bear the added costs. Legislators and policymakers need to be aware of the impact that increased regulation of private content moderation policies could have on competition, particularly because digital platform markets are characterized by dominant firms and natural forms of market concentration. In the long run, stringent and divergent regulation across jurisdictions could help large incumbents cement their market position to the detriment of consumer welfare.
II. Content Regulation
The Internet brought a substantial improvement in global connectivity, with increased access to information and forums where ideas can be freely expressed and content can be published. Nonetheless, this connectivity also brought easier ways to access and spread illegal and harmful content.[8] Online actions are contributing to violence, influencing elections, spreading hateful ideologies, and threatening people’s health and lives.[9] Quick and aggressive responses are therefore necessary, given the sheer volume of communications flowing online.
Large online platforms have responded by developing advanced tools to moderate content and reduce online abuse. They have built technology that automatically analyzes and removes content that violates their policies. These tools are complemented by trained internal teams and crowd workers, who review flagged content when a more nuanced understanding of a post is necessary to make a decision. Platforms have also formed alliances with independent fact-checking organizations to help combat disinformation. Facebook even created an adjudicative body to hear some user appeals against its moderation decisions.
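In rough outline, these systems are tiered: an automated classifier scores each post, high-confidence violations are removed outright, and borderline cases are escalated to human reviewers. The following is a minimal sketch of that tiering only; the blocklist scorer is a toy stand-in for a trained classifier, and the thresholds and action names are assumptions for illustration, not any platform’s actual system.

```python
# Illustrative tiered moderation pipeline. The blocklist scorer is a toy
# stand-in for a trained classifier; thresholds and action names are
# assumptions for this sketch, not any platform's actual system.

BLOCKLIST = {"scamlink", "gore"}  # toy signal set; real systems use trained models


def violation_score(text: str) -> float:
    """Toy scorer: fraction of tokens that match the blocklist."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)


def moderate(text: str, remove_at: float = 0.5, review_at: float = 0.2) -> str:
    """Route a post: auto-remove, queue for human review, or allow."""
    score = violation_score(text)
    if score >= remove_at:
        return "remove"        # high-confidence violation: removed automatically
    if score >= review_at:
        return "human_review"  # nuanced case: escalated to trained reviewers
    return "allow"


print(moderate("check this scamlink now"))  # -> human_review (1 of 4 tokens flagged)
```

The point of the tiering is the division of labor the paragraph describes: automation handles clear-cut volume, while ambiguous cases reach humans who can weigh context.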
At the other end of the spectrum, however, a large body of literature and public sentiment holds that private digital platforms are removing content based on incoherent and obscure guidelines, and are thus over-censoring speech. Just as there is a critical need for moderation, the danger of censorship is amplified online because online intermediaries control a vast share of communications while maintaining the power to mediate them.[10] As societies rely more and more on the Internet, the companies that control what can be said or published online face growing questions about the decisions they make and how accountable they should be for what is said.
This tension between freedom of speech and reduced online harm is the central conflict and quandary of online content moderation. Unfortunately, there is no perfect answer, as people’s expectations and views about free expression vary enormously across the spectrum of internet users, tech company leadership, cultures and societies as a whole.
III. Competition Barriers
Competition barriers are impediments that make market entry and expansion more difficult for firms.[11] High competition barriers can retard, diminish or entirely prevent the attraction, entry and expansion of rivals, which is the market-based mechanism for checking market power.[12] Barriers to competition can arise from market structure or industry conditions, such as costs and demand, and from the behavior of incumbent firms, such as entering into exclusive dealing agreements.[13]
Laws and regulations can also create significant competition barriers. Compliance with rules may substantially increase the cost of participating in a market.[14] Some scholars consider regulatory or legal restrictions to be among the most substantial barriers to competition.[15] Incumbent firms may even lobby governments to create legal and regulatory barriers to protect their businesses.[16]
The pressure for effective removal of illegal and harmful content online has already materialized into law in several jurisdictions. However, these regulations are creating a stringent and divergent legal environment with increased liability for firms operating in the digital landscape, including the mandatory use of technology, substantial fines, and even criminal responsibility. The immediate consequence is an increase in the cost of operating in the market, as companies must deploy the technology and resources needed to comply with the legal requirements and avoid liability.
To some extent, major platforms share their moderation tools with smaller companies and cooperate in industry-wide efforts to tackle illegal and harmful content. However, not all technology can be easily shared or made commercially available. Companies face the risk of being gamed by bad actors who use the information and technology to circumvent the enforcement of moderation policies. Hence, companies do not publicly reveal the details of exactly how their automated moderation systems work, so that they are not bypassed.[17] Furthermore, content analysis tools cannot be applied with the same reliability across different contexts, since they perform best when trained and applied within a single domain: language varies considerably across platforms, demographic groups and conversation topics, as the sketch below illustrates.[18] There is also a substantial learning curve for effective content moderation, as technology companies learn by trying, failing and iterating.[19]
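To make the domain problem concrete, the sketch below shows how the same token can be a risk signal in one community and ordinary jargon in another. The vocabularies and domain names are invented for illustration and do not reflect any real platform’s data or rules.

```python
# Illustrative reason why single-domain tools travel poorly: the same
# token can be a risk signal in one community and ordinary jargon in
# another. Vocabularies below are invented, not real platform data.

RISKY_TOKENS = {"kill", "shoot", "overdose"}

DOMAIN_BENIGN = {
    "gaming": {"kill", "shoot"},  # routine gameplay vocabulary
    "medicine": {"overdose"},     # clinical usage in health forums
}


def flag_tokens(text: str, domain: str) -> set:
    """Flag risky tokens, excusing those that are benign in this domain."""
    benign_here = DOMAIN_BENIGN.get(domain, set())
    return {t for t in text.lower().split() if t in RISKY_TOKENS - benign_here}


post = "great match, nice kill"
print(flag_tokens(post, "gaming"))  # set(): benign gaming slang
print(flag_tokens(post, "news"))    # {'kill'}: flagged outside its home domain
```

A tool tuned to one domain thus over-flags or under-flags in another, which is why these systems cannot simply be handed to a new entrant and applied elsewhere.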
It is important to consider that government interventions imposing increased liability for companies’ content moderation decisions could significantly raise entry and expansion barriers. Regulation based on immunities and incentives, by contrast, is more protective of innovative industries; this was a factor the United States Congress took into consideration when passing the Communications Decency Act, and it has been vital to the expansion of the Internet and online intermediaries.[20] Under this market-based approach, intense competition should lead to more effective moderation practices, since companies will compete for consumers who demand higher quality.
Nonetheless, as digital markets mature, the combination of strong network effects, increasing returns to scale and scope, and the incumbency advantage arising from data could in some cases make digital platform markets prone to tipping.[21] When markets tip, they create winner-takes-most or winner-takes-all environments, giving rise to oligopolistic or monopolistic market structures.[22] The consequence is the emergence of dominant firms and natural forms of market concentration. In these cases, markets are unlikely to self-correct rapidly, and mere reliance on market forces and competition policy to achieve effective content moderation may be insufficient.
Regulation like Germany’s Network Enforcement Act (NetzDG) addresses this issue by exempting companies with fewer than 2 million registered users while establishing significant fines for companies above that threshold that fail to remove obviously illegal speech. In this way, it draws on both increased regulation of content moderation and competition policy, tackling online harms while allowing new entrants to face minimal government intervention and therefore lower competition barriers. It remains to be seen, however, whether the German threshold allows a new rival to reach enough scale to challenge a major incumbent that enjoys positive network effects, increasing returns to scale and scope, and a data-driven incumbency advantage at a global scale, with users in the tens of millions per country.[23]
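The tiered design reduces, in essence, to a size-based threshold rule. The sketch below encodes that structure, assuming the 2 million registered-user figure from the text; the function name and return strings are illustrative, not the statute’s language.

```python
# Sketch of a NetzDG-style size threshold: platforms below the user
# threshold are exempt; larger ones face removal duties and fines.
# The number mirrors the 2-million-user threshold discussed above;
# the function and return strings are illustrative, not statutory text.

USER_THRESHOLD = 2_000_000


def netzdg_obligations(registered_users_in_germany: int) -> str:
    if registered_users_in_germany < USER_THRESHOLD:
        return "exempt"  # new entrants face minimal compliance costs
    return "must remove obviously illegal content or face significant fines"


print(netzdg_obligations(1_500_000))   # -> exempt
print(netzdg_obligations(28_000_000))  # -> obligations apply (cf. Facebook's ~28M German users)
```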
IV. Conclusion
As we remain physically distanced amidst the pandemic, internet services have become crucial to daily life, including social interaction, education and work, increasing the volume of communications and the time spent online.[24] This increase in time spent online has unfortunately come with an upsurge in online risks.[25] Pressure for improved content moderation remains, and the push for more stringent regulation will grow more urgent among legislators and policymakers across the world, who will want a say in how decisions are made and will pursue regulation of online speech.
Nonetheless, legislators and policymakers need to consider the effects on competition of any regulatory effort on content moderation, as entry and expansion barriers can be created in already highly concentrated market structures. Policymakers in each country face a collective action problem: increased liability imposed in each jurisdiction may strengthen the market position of large incumbents operating globally by raising the cost of market participation for new rivals. Legislators and policymakers can work hand in hand with competition authorities and experts to strike the right balance, combining state regulation and market forces into an adequate mix of strategies and incentives to effectively tackle illegal and harmful content online. Exempting new entrants from liability on the basis of size may be a way forward.
*Lawyer authorized to practice in Mexico and Senior Associate at SAI Law & Economics. Email: iramirez@llm20.law.harvard.edu
[1] See Daphne Keller, Internet Platforms: Observations on Speech, Danger, and Money, Hoover Institution, at 1, 8 (2018) https://www.hoover.org/research/internet-platforms-observations-speech-danger-and-money
[2] Abigail Abrams, Here’s What We Know So Far About Russia’s 2016 Meddling, Time (2019) https://time.com/5565991/russia-influence-2016-election/
[3] See Paul Mozur, A Genocide Incited on Facebook, With Posts from Myanmar’s Military, The New York Times (2018) https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
[4] Sasha Ingber, Global Effort Begins To Stop Social Media From Spreading Terrorism, NPR (2019) https://www.npr.org/2019/04/24/716712161/global-effort-begins-to-stop-social-media-from-spreading-terrorism. See also Scott Higham and Ellen Nakashima, Why the Islamic State Leaves Tech Companies Torn Between Free Speech and Security, The Washington Post (2015) https://www.washingtonpost.com/world/national-security/islamic-states-embrace-of-social-media-puts-tech-companies-in-a-bind/2015/07/15/0e5624c4-169c-11e5-89f3-61410da94eb1_story.html; Eric Posner, ISIS Gives Us No Choice but to Consider Limits on Speech, Slate (2015) https://slate.com/news-and-politics/2015/12/isiss-online-radicalization-efforts-present-an-unprecedented-danger.html; and Drew Harwell, Three mass shootings this year began with a hateful screed on 8chan. Its founder calls it a terrorist refuge in plain sight, The Washington Post (2019) https://www.washingtonpost.com/technology/2019/08/04/three-mass-shootings-this-year-began-with-hateful-screed-chan-its-founder-calls-it-terrorist-refuge-plain-sight/
[5] Daphne Keller, Internet Platforms: Observations on Speech, Danger, and Money, Hoover Institution, at 1 (2018) https://www.hoover.org/research/internet-platforms-observations-speech-danger-and-money
[6] World Health Organization, Novel Coronavirus (2019-nCoV) Situation Report – 13 (2020) https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf
[7] For instance, in the Congressional antitrust hearing of July 29, 2020, where the CEOs of Alphabet, Amazon, Apple and Facebook testified, questions about political bias in content moderation policies arose. See Tony Romm, Amazon, Apple, Facebook and Google grilled on Capitol Hill over their market power, The Washington Post (2020) https://www.washingtonpost.com/technology/2020/07/29/apple-google-facebook-amazon-congress-hearing/; and Lauren Feiner, Big Tech testifies: Bezos promises action if investigation reveals misuse of seller data, Zuckerberg defends Instagram acquisition, CNBC (2020) https://www.cnbc.com/2020/07/29/tech-ceo-antitrust-hearing-live-updates.html
[8] See Paul M. Barrett, Who Moderates the Social Media Giants? A Call to End Outsourcing, NYU STERN, Center for Business and Human Rights, at 10, 11 (2020) https://static1.squarespace.com/static/5b6df958f8370af3217d4178/t/5ed9854bf618c710cb55be98/1591313740497/NYU+Content+Moderation+Report_June+8+2020.pdf; and Kyle Langvardt, Regulating Online Content Moderation, Georgetown Law Journal, Vol. 106, Issue 5, at 1359 (2018) https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2018/07/Regulating-Online-Content-Moderation.pdf
[9] Adam Satariano, Britain Proposes Broad New Powers to Regulate Internet Content, The New York Times (2019) https://www.nytimes.com/2019/04/07/business/britain-internet-regulations.html
[10] See Kyle Langvardt, Regulating Online Content Moderation, Georgetown Law Journal, Vol. 106, Issue 5, at 1360 (2018) https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2018/07/Regulating-Online-Content-Moderation.pdf
[11] See OECD, Competition and Barriers to Entry, Policy Brief (2007) https://www.oecd.org/competition/mergers/37921908.pdf
[12] See OECD, Competition and Barriers to Entry, Policy Brief (2007) https://www.oecd.org/competition/mergers/37921908.pdf
[13] OECD, Competition and Barriers to Entry, Policy Brief, at 4-5 (2007) https://www.oecd.org/competition/mergers/37921908.pdf
[14] See Panayotis Kotsios, Regulatory Barriers to Entry in Industrial Sectors, Munich Personal RePEc Archive, at 2 (2010) https://mpra.ub.uni-muenchen.de/27976/2/MPRA_paper_27976.pdf
[15] See Panayotis Kotsios, Regulatory Barriers to Entry in Industrial Sectors, Munich Personal RePEc Archive, at 10, 11 (2010) https://mpra.ub.uni-muenchen.de/27976/2/MPRA_paper_27976.pdf
[16] OECD, Competition and Barriers to Entry, Policy Brief, at 3-4 (2007) https://www.oecd.org/competition/mergers/37921908.pdf
[17] Shagun Jhaver et al., Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator, Association for Computing Machinery, Digital Library (2019) https://dl.acm.org/doi/10.1145/3338243
[18] Natasha Duarte and Emma Llansó, Mixed Messages? The Limits of Automated Social Media Content Analysis, Center for Democracy & Technology (2017) at 3, 4 https://cdt.org/insights/mixed-messages-the-limits-of-automated-social-media-content-analysis/
[19] Monika Bickert, Defining the Boundaries of Free Speech on Social Media, in The Free Speech Century (Geoffrey R. Stone and Lee C. Bollinger eds.) at 265.
[20] The legislature was concerned about cases creating disincentives for online intermediaries to expand business. Findings stated that the Internet has flourished for the benefit of all Americans with a minimum of government regulation. See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, Harvard Law Review (2018) at 1604-09 https://harvardlawreview.org/wp-content/uploads/2018/04/1598-1670_Online.pdf
[21] When two incompatible systems compete, there is a tendency for one system to pull away from its rivals in popularity once it has gained an initial edge. This process is known as tipping in the economic literature and results in everyone using the same system. See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, The Journal of Economic Perspectives, Vol. 8, No. 2 (Spring 1994) at 93, 105-06 https://www.jstor.org/stable/2138538. See also Luigi Zingales et al., Stigler Committee on Digital Platforms, Final Report, George J. Stigler Center for the Study of the Economy and the State, The University of Chicago Booth School of Business (2019) at 35 https://www.publicknowledge.org/wp-content/uploads/2019/09/Stigler-Committee-on-Digital-Platforms-Final-Report.pdf
[22] Id. at 39; and Jason Furman et al., Unlocking Digital Competition, Report of the Digital Competition Expert Panel (2019) at 38 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.
[23] For instance, in 2020 Facebook had 28 million users in Germany. See J. Clement, Leading countries based on Facebook audience size as of July 2020, Statista https://www.statista.com/statistics/268136/top-15-countries-based-on-number-of-facebook-users/
[24] Alex Schultz and Jay Parikh, Keeping Our Services Stable and Reliable During the COVID-19 Outbreak, Facebook (2020) https://about.fb.com/news/2020/03/keeping-our-apps-stable-during-covid-19/
[25] Internet activity involving child abuse material has, for instance, increased, and offenders are expected to become more active as fewer moderation resources are available. See Jamie Grierson, Coronavirus lockdown raises risk of online child abuse, charity says, The Guardian (2020) https://www.theguardian.com/world/2020/apr/02/coronavirus-lockdown-raises-risk-of-online-child-abuse-charity-says