Canadian Parliament. Photo by Jesse Hirsh.

Part seven in an ongoing series

Media has traditionally been subject to regulation, in democratic and non-democratic societies alike. There is a range of reasons why media is regulated, but the primary justification is to ensure that the public interest is preserved amid the private interests that own and control the media (Feintuck et al., 2006; Lunt et al., 2011).

However, media as a concept and as a technology has rapidly evolved over the last few decades, and traditional regulators have struggled to keep up.

The regulation of the Internet has produced a diverse set of public policy approaches to digital issues. There are efforts to understand how China (aggressively) regulates its Internet (Endeshaw, 2004), how Europe balances the desire for global trade with the need for democratic controls (Radu et al., 2015), how US-promoted self-regulation needs to include citizens/consumers (Marsden, 2008), how Brazil has adopted a civil rights framework for the Internet (Medeiros, 2015), and how a global regulatory framework might be fostered (Von Bernstorff, 2003).

Frameworks for Regulating the Internet

Lyombe Eko (2001), in an effort to map the regulatory landscape of the Internet, created a typology of five models that describe different government approaches:

  • Internationalist, which seeks to foster international cooperation and governance. This multilateral approach has had a home in the International Telecommunication Union (ITU); however, it remains an approach countries rarely employ, and the ITU has often served as a forum for countries opposed to US dominance of the Internet.
  • Neo-mercantilist, which focuses on commerce and is otherwise laissez-faire. This is generally regarded as the US model, which emphasizes copyright and the protection of intellectual property, and sees the Internet largely in economic terms, at the expense of political and cultural concerns.
  • Culturist, which focuses on protecting and promoting national culture in a global media environment. The French government is regarded as one of the primary proponents of a culturist approach to Internet regulation; Canada has traditionally dabbled in this approach as well, though less so when it comes to the Internet.
  • Gateway, which seeks to control the national connection to global networks. This is the model employed by authoritarian countries like China, Iran, Egypt, and Turkey, which impose restrictions on Internet access and content via control of national and local Internet gateways.
  • Developmentalist, which seeks to use technology within a traditional focus on national development. This model involves subsidizing Internet access and infrastructure so that the technology can increase the quality of life and the opportunities available to citizens.

There are also attempts to integrate crowdsourcing and large-scale consultations to enable a participatory approach to regulation (Radu et al., 2014).

Certainly in the wake of the Edward Snowden revelations, which gave glimpses into widespread US surveillance of the Internet, there have been renewed efforts by non-US countries to revisit what regulation of the Internet entails (West, 2014). One of the phrases that has emerged is “data sovereignty”, which expresses a desire among countries to have regulatory authority and the ability to resist US surveillance. The concern, however, is that this desire for data sovereignty is a direct threat to a free and open global Internet (Polatin-Reuben et al., 2014). Yet what if this concern is not only overblown, but an actual obstacle to proper Internet regulation? There is considerable opportunity and need for individual nations to take regulatory responsibility for emerging technology, in particular the Internet (Goldsmith, 2000).

Against Cyberanarchy

Jack Goldsmith, a Harvard Law School professor, has written extensively on the need to regulate new technology and the Internet. He identifies three persistent fallacies (1997) that plague regulation of the Internet: treating it as a separate place; treating it as non-territorial and thus beyond the reach of territorial governments; and assuming that the Internet is cheap and plentiful, such that issues like access will not apply.

While his original paper on the subject was authored almost twenty years ago, these fallacies are only now starting to be properly dispelled. His paper “Against Cyberanarchy” (1998) became a rallying cry for proponents of public policy and regulatory authority, as more people recognized the need for laws that govern and regulate emerging media environments.

For example, data privacy laws are now starting to emerge around the world and play a role in the regulation of Internet-related media (Greenleaf, 2015). While many of these laws are new, and still subject to judicial scrutiny and court precedent, cases in Europe are certainly shaping how the rest of the world understands the role of privacy in Internet regulation.

What’s missing, however, is a broader effort around automated and algorithmic media, and the anticipation and incorporation of these emerging kinds of media into a wider regulatory framework. A growing number of voices and research studies are attempting to address this issue, including Edith Ramirez, the US Federal Trade Commission Chair, who has indicated the agency’s interest in both algorithmic transparency and how algorithms can be manipulated (Quinn, 2015).

For example, Ben Wagner (2016) examines the question of how the algorithms embedded in software are governed. Specifically, he cites the case of Volkswagen and the scandal that erupted when it was discovered that its algorithms were manipulating emissions results (Burki, 2015). Automotive regulatory authorities did not, and still do not, have access to the algorithms that control automotive computers and software. How, then, can they be effective regulators? Wagner (2013) has previously looked at how media content regulation emerged as one accepted form of Internet regulation, and he anticipates how it can be applied to algorithms.
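To make the Volkswagen case concrete, consider a minimal sketch of how a “defeat device” might work. This is an invented illustration, not Volkswagen’s actual code (which has never been published); the heuristic and mode names are assumptions for the example:

```python
# Hypothetical sketch of a "defeat device" conditional.
# NOT Volkswagen's actual code; it only illustrates why regulators
# need to inspect the software, not just observe test-bench results.

def looks_like_emissions_test(speed_kmh: float, steering_angle: float,
                              duration_s: float) -> bool:
    """Heuristic: on a dyno test the wheels spin at speed while the
    steering wheel never moves -- a pattern rare in real driving."""
    return speed_kmh > 20 and abs(steering_angle) < 1.0 and duration_s > 60

def emission_control_mode(speed_kmh: float, steering_angle: float,
                          duration_s: float) -> str:
    if looks_like_emissions_test(speed_kmh, steering_angle, duration_s):
        return "full_nox_treatment"   # clean mode: passes the test
    return "reduced_nox_treatment"    # road mode: emits more, performs better

print(emission_control_mode(60, 0.0, 300))   # test bench -> full_nox_treatment
print(emission_control_mode(60, 15.0, 300))  # real road  -> reduced_nox_treatment
```

A regulator who only measures tailpipe output under standard test conditions will never exercise the second branch, which is precisely why behavioural testing alone is insufficient.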

Laura DeNardis (2012) writes about infrastructure-mediated governance and the hidden levers of Internet control. This can certainly be applied to algorithms, which are often considered “trade secrets” and protected by intellectual property laws, yet remain in a position to control or hinder free expression. Left to their own devices, these algorithms can have a regulatory effect on the population, rather than the reverse.

In 2018, the European Union’s new General Data Protection Regulation will take effect, restricting algorithms from making decisions that significantly affect users. In particular, any kind of discrimination will be prohibited. Further, users will be entitled to a “right to explanation” when an algorithm does make a decision about them (Goodman et al., 2016). It is not clear, however, whether these measures will be effective, which suggests an important follow-up study once the new laws take effect.
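What a “right to explanation” might demand of software is easier to see with a toy example. The sketch below is purely my own assumption, not anything the regulation prescribes: a linear scoring model that reports how much each input contributed to an automated decision.

```python
# A minimal sketch of an "explainable" automated decision.
# The model, weights, and wording are illustrative assumptions,
# not anything mandated by the GDPR text itself.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_at_job": 0.2}
THRESHOLD = 0.5

def decide_with_explanation(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Rank the factors by how strongly they pushed the decision.
    explanation = [f"{k} contributed {v:+.2f} to the score"
                   for k, v in sorted(contributions.items(),
                                      key=lambda kv: -abs(kv[1]))]
    return approved, explanation

approved, why = decide_with_explanation(
    {"income": 1.2, "debt": 0.5, "years_at_job": 0.8})
print("approved" if approved else "denied")
for line in why:
    print(" -", line)
```

A linear model makes this easy; whether a comparably faithful explanation can be produced for a deep neural network is exactly the open question the regulation raises.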

Similarly, regulators on both sides of the Atlantic have been wrestling with “network neutrality”, the concept that Internet Service Providers (ISPs) should not discriminate among the traffic they carry. Network neutrality as a concept arose in response to attempts by carriers to employ algorithms that perform Deep Packet Inspection (DPI) to manage traffic and capacity as efficiently as possible (McKelvey, 2010). Critics, however, argued that such algorithmic traffic management amounted to a kind of discrimination, as the software decides which network traffic to allow unfettered and which to slow down or interfere with. While this regulatory debate remains ongoing, it reflects a general sentiment among the population: people do not want their access to the Internet impacted or manipulated by hidden algorithms.
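To see why critics read traffic management as discrimination, consider this caricature of DPI-based shaping. Real ISP systems classify traffic at line speed in dedicated hardware; the signatures and throttle policy below are invented for illustration.

```python
# A caricature of DPI-style traffic shaping -- an assumption for
# illustration, not the code of any actual ISP system.

THROTTLE_POLICY = {
    "bittorrent": 0.1,   # peer-to-peer gets 10% of line rate
    "video":      0.5,   # streaming gets half rate at peak times
    "web":        1.0,   # ordinary browsing passes unfettered
}

def classify(payload: bytes) -> str:
    """Deep packet inspection reduced to a toy signature match."""
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "bittorrent"
    if b"Content-Type: video/" in payload:
        return "video"
    return "web"

def allowed_rate(payload: bytes, line_rate_mbps: float) -> float:
    return line_rate_mbps * THROTTLE_POLICY[classify(payload)]

print(allowed_rate(b"\x13BitTorrent protocol...", 100.0))  # -> 10.0
print(allowed_rate(b"GET /index.html HTTP/1.1", 100.0))    # -> 100.0
```

The classification rule and the throttle policy look like neutral engineering, but together they decide whose traffic is slowed, which is exactly the regulatory question.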

An Agency for Algorithms

Because algorithms possess a kind of technical complexity that makes it difficult for traditional regulators to fully understand and govern them, scholars like Andrew Tutt (2016) argue that they require an entirely new and dedicated agency. Tutt argues that criminal law and tort regulatory systems are not capable of handling the challenges posed by the need to regulate algorithms. He uses the US Food and Drug Administration as a model for what a regulatory agency that governs algorithms might look like.

Tutt makes the case that such an agency should have three primary powers: the ability to organize and classify algorithms into regulatory categories; the ability to prevent algorithms from being introduced into the market until their safety and efficacy have been proven through evidence-based trials; and the ability to impose disclosure requirements and usage restrictions to prevent the harmful misuse of algorithms.

Ryan Calo (2014) makes a similar argument in proposing a larger agency he describes as a Federal Robotics Commission, whose remit would cover algorithms, artificial intelligence, and robotics in general. Like Tutt, Calo argues that the purpose of such an agency would be less about control and more about enabling awareness and protections.

How such a regulatory agency would operate, let alone be formed, is a worthy problem to address, and the answer partly lies in looking at related public policy and regulatory attempts around the world.

It is, however, worth noting that while the regulation of algorithms is of growing interest, so too is the rise of regulation by algorithms. As “open government” and “open data” become concepts embraced by public servants and public sector organizations, a natural extension of this logic is the use of algorithmic regulation (O’Reilly, 2013). Why depend upon politicians and public servants who can be corrupted, when algorithms could provide a regulatory role that is consistent and programmable?
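The appeal of regulation by algorithm is that a rule, once expressed as code, is applied identically every time. A deliberately simple sketch, with the rule and threshold invented for illustration:

```python
# A toy example of "algorithmic regulation": a compliance rule
# expressed as code rather than enforced by a human inspector.
# The rule and its threshold are invented for illustration.

NOX_LIMIT_G_PER_KM = 0.08  # hypothetical emissions ceiling

def check_compliance(readings_g_per_km: list[float]) -> dict:
    """Applies the same rule, the same way, to every submission."""
    worst = max(readings_g_per_km)
    return {
        "compliant": worst <= NOX_LIMIT_G_PER_KM,
        "worst_reading": worst,
        "limit": NOX_LIMIT_G_PER_KM,
    }

print(check_compliance([0.05, 0.07, 0.06]))  # compliant
print(check_compliance([0.05, 0.12, 0.06]))  # not compliant
```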

Is there a risk that before we can regulate algorithms, other algorithms will regulate us? Is the revolution in regulatory powers about taming algorithms, or being tamed by them?

This is why it is essential that further research be conducted into the public policy initiatives, policy regimes, and regulatory approaches necessary to address the growing power of algorithmic media. We need to gauge what democratic societies must prioritize to ensure that democracy and algorithmic media can co-exist.

Evaluating public policy involves looking at effects and implementation (John, 2013). This takes time, and algorithms are evolving faster than regulatory agencies or related research. Rather than wait for governments to pass laws, and then research the impact of those laws, we need non-governmental initiatives that address algorithmic power, as well as institutional and non-institutional research that addresses questions raised by the power of algorithmic media.

For example, is algorithmic transparency even possible? Do we need to conceive of an alternative in case access is either not forthcoming or, given the increasing complexity of algorithms, not even possible?
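One candidate alternative is black-box auditing: probing a system from the outside rather than reading its source. The sketch below assumes an auditor who can query the system freely; the hidden decision rule is invented to show what such an audit can surface.

```python
# A minimal sketch of black-box auditing: testing a decision system
# for disparate treatment without access to its internals.
# `decision_system` stands in for any opaque algorithm under audit.

import random

def decision_system(applicant: dict) -> bool:
    # Opaque system under audit: it quietly penalizes one postal
    # code -- the auditor cannot see this rule, only its outputs.
    return applicant["score"] > 0.5 and applicant["postcode"] != "K1A"

def paired_audit(n_trials: int = 1000) -> float:
    """Send matched pairs that differ only in one attribute and
    measure how often the outcomes diverge."""
    divergent = 0
    for _ in range(n_trials):
        base = {"score": random.random(), "postcode": "M5V"}
        probe = dict(base, postcode="K1A")
        if decision_system(base) != decision_system(probe):
            divergent += 1
    return divergent / n_trials

print(f"outcomes diverge on {paired_audit():.0%} of matched pairs")
```

An audit of this kind reveals that the system treats the two postal codes differently without ever opening the black box, though it cannot say why, which is the trade-off against full transparency.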

As more governments pursue broader policies and regulatory approaches to the Internet, we will at least have frameworks within which such regulation is possible. Clearly, however, more research needs to be done on responses to algorithmic media and the means by which they can be held accountable and made to reinforce democracy.

Until then, however, we are left with algorithmic transparency as the obvious, first, and necessary step towards better understanding the ongoing impact of algorithmic media.

Continued in part 8