
Part eight in an ongoing series

Algorithmic transparency is a necessary prerequisite for a democratic society. Democratic societies have traditionally been based upon the rule of law, and for this to be possible, the law had to be transparent. Any citizen had to be able to access the laws of the land, read them, and ideally understand them. While the legal profession exists to help people with this comprehension, there remains a general principle that any individual may, if they so choose, represent themselves in a court of law.

Algorithms similarly need to be accessible to every person. An industry will no doubt emerge to ensure that this access is coherent and reasonable, but just as with the law, algorithms need to serve the people, rather than the other way around.

Regulation of algorithms is inevitable. The questions are who the regulator will be, and how the regulation will be structured and governed. Will it take the form of self-regulation, peer-based regulation, or state-based regulation?

In a democratic society, the regulation of media provides a relevant example of how to preserve the liberties enabled by a medium while mitigating or preventing its abuse. In this context, regulation is less about control and more about ensuring that the public interest is preserved and that users are not abused or harmed.

Algorithms enable the rapid production and distribution of media content. This is particularly significant when it comes to how democratic societies seek to govern themselves.

While algorithmic transparency is a prerequisite for understanding the impact of algorithms, it also offers an ongoing method for preserving and reinforcing democratic institutions and processes.

State-Based Regulation

Democratic governments at all levels would benefit from creating a ministry or agency that specializes in understanding algorithms. The creation of such an entity could allow for the passage of laws and regulations that mandate companies and entities using algorithms to detail their operation and impact. As interest in and methods for researching algorithms grow (Kitchin, 2016), it is crucial that governments both contribute to and benefit from such insights.

Such an agency or ministry would benefit from consolidating expertise within the public service, as well as attracting relevant talent from the political class, i.e. elected officials. The goal of such a ministry would not be control of algorithms, but rather facilitating a general understanding of, and engagement with, them.

In this respect the goal is not just regulation, but also innovation. Once algorithms are deemed effective (and transparent), then it would be easier to see them adopted and implemented as widely as possible. Acting as a kind of clearinghouse, the ministry could serve as an advocate for businesses and citizens alike, helping them understand how to integrate algorithmic media and technology into their own practices.

Industry-Based Regulation

Another scenario for algorithmic transparency involves industry self-regulation or peer-based regulation. Rather than waiting for the public sector to lead, there are considerable advantages for industries that produce or use algorithmic media in forming their own regulatory structures voluntarily. Examples of this kind of regulation include law and medicine, fields that require and benefit from regulation, yet are largely regulated from within their respective professions.

Concerns over intellectual property and trade secrets can be addressed: algorithmic transparency does not have to involve the loss of competitive advantage or potential profit. Rather, it ensures long-term sustainability by helping to reinforce the trust and confidence of citizens and users. The regulation of traditional or legacy media had a substantial effect in reinforcing the credibility and authority of those media.

There are therefore benefits to industry taking the lead and voluntarily embracing the concept of algorithmic transparency and related regulation. The longer industry waits for external actors to discuss and address these issues, the less influence it will have on the final regulatory outcome.

Citizen-Based Regulation

In the era of the Internet there is also a third possibility: citizen-based regulation, or peer-to-peer regulation that embraces a hacker or crowdsourced ethic. Building upon the open source movement, algorithmic transparency could emerge as a kind of social movement that unites both developers and users of algorithms.

For example, an agency similar to the one described in the context of state-based regulation could be created without any input or funding from a state. Such an agency could be crowdfunded and supported by Internet users from around the world, with the stated goal of protecting users regardless of the state they live in. We’re already seeing such a social movement in efforts to protect personal privacy, and there’s no reason a similar approach could not also be applied to algorithmic transparency.

Methods used to reverse engineer algorithms, along with incentives for developers to create transparent algorithms using approaches such as Transparency by Design, could have a growing effect on our larger relationship with algorithms. Similarly, citizen science, a research method that enlists thousands of volunteers in scientific experiments, could be employed so that users can help understand and influence how algorithms are used.

However, where privacy has benefited from a number of sensational cases, such as the Snowden files, algorithmic transparency remains a marginal issue in the awareness and priorities of most Internet users. A substantial incident or shift in thinking would be required for it to become a priority.

Academic or Expert-Based Regulation

Finally, a fourth option for the regulation of algorithmic transparency could come from academic researchers and other experts, who are in a position both to understand what is happening and to connect with broader citizen or democratic concerns vis-à-vis regulation.

As it stands, academic researchers are currently the primary constituency raising concerns about algorithmic media, and making arguments for the need to regulate their use, if not also their creation. Academics, together with lawyers, could be in a position to litigate and regulate algorithmic media before the public, private, or social sector has the ability to do so itself.

Unfortunately, the risk of such an approach is the limited legitimacy such a group possesses, and the genuine need for both industry and citizen buy-in to such a process.

Nonetheless there is a real need for further research in this area, and academics and subject matter experts are certainly in a unique position to conduct it, whether in anticipation of, or as a demonstration of the need for, a larger regulatory framework.

For example, arm’s-length foundations, in addition to governments, need to sponsor interdisciplinary research into the technical, social, political, legal, and ethical impact of algorithms on society. An interdisciplinary research approach may produce results that go beyond disciplinary bias and potentially provide an evidence-based approach to regulation.

Similarly, Fenwick McKelvey (2014) calls for democratic methods as a means of identifying, exploring, and understanding algorithmic media, and there is much room to further develop and expand these methods. One means of doing so would be to employ methods from open source communities (Mulgan et al., 2005), treating the democratic regulation of algorithms as a transparent socio-technical process involving a diverse range of actors collaborating on a platform like GitHub. In this regard, algorithmic transparency is part of a broader democratic process, not an achievable end in and of itself.

Nicholas Diakopoulos and Michael Koliska (2016) conducted a focus group with 50 participants across the news media and academia, seeking to build guidelines for algorithmic transparency in the news media. While the results identified a number of steps in the production of news at which algorithmic transparency could be enabled and disclosed, the research also suggested that human end users could be overwhelmed by such requirements.

Interestingly, technical approaches to algorithmic transparency are starting to emerge. For example, researchers from Carnegie Mellon University have developed a method called “Quantitative Input Influence” (Datta et al., 2016) that tests an algorithm across a range of inputs. Essentially, the method tests an algorithm with the data equivalent of brute force, feeding it as many inputs as possible to deduce which inputs carry the greatest weight or causal effect. The result is a transparency report that identifies any algorithmic biases, or confirms the lack thereof. The researchers also include thoughts on how to produce such transparency reports while protecting the privacy of users.
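To make the intuition concrete, here is a minimal sketch of this style of influence testing, not the actual QII implementation: it estimates a single feature’s influence on a black-box classifier by randomly swapping that feature’s value with values drawn from elsewhere in the dataset and counting how often the model’s decision flips. The function names, and the flip-rate metric itself, are illustrative assumptions.

```python
import numpy as np

def feature_influence(model_fn, X, feature, n_samples=1000, rng=None):
    """Estimate one feature's influence on a black-box model.

    Intuition (after Datta et al., 2016): intervene on a single feature
    by replacing its value with values drawn from other rows of the
    dataset, then measure how often the model's decision changes.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    rows = rng.integers(0, len(X), size=n_samples)    # points to test
    donors = rng.integers(0, len(X), size=n_samples)  # rows to borrow values from

    originals = X[rows]
    perturbed = originals.copy()
    perturbed[:, feature] = X[donors, feature]        # intervene on one feature only

    # Influence = fraction of decisions that flip under the intervention.
    return np.mean(model_fn(originals) != model_fn(perturbed))

# Hypothetical usage: rank every feature to produce a crude transparency report.
# X = np.array(...)  # dataset of model inputs
# report = {f: feature_influence(model.predict, X, f) for f in range(X.shape[1])}
```

A feature whose randomization flips many decisions carries substantial weight in the model’s behaviour; running this over all features, and over protected attributes in particular, yields the kind of bias-revealing transparency report the paragraph above describes.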

There are critics, however, who argue that algorithmic transparency alone will not be effective (Filmar et al., 2016). The growing complexity of algorithms, and of the code that runs them, means that transparency has to be coupled with the expertise to understand and measure their impact. Humans need to be involved; there is no winning solution in which the machines run everything on their own.

This suggests that the longer governments wait to enter the policy debate around algorithmic transparency, the longer it will take, and the harder it will be, to actually understand and engage with algorithmic power.

There is also the issue that algorithms are dynamic and constantly evolving. How do you regulate something that constantly changes and is subject to upgrades? Certainly this has been the problem when it comes to Facebook and privacy regulation (Medzini, 2016). While the company’s privacy practices and policies have been subject to scrutiny and pressure from multiple jurisdictions, their constant change means that, so far, the company has escaped serious penalty or consequence without actually altering its business practices.

Therefore, regulation of the Internet, and in particular of algorithmic media, should be regarded as an ongoing process, comparable to a learning curve. Before regulation is even possible, governments and regulators need to get on the algorithmic learning curve and begin researching and understanding the impact of algorithmic media. Algorithmic transparency is a key ingredient in enabling this capacity among regulators and researchers alike.