Photo: “Mountain Lion Activity,” by Jesse Hirsh

Part six in an ongoing series

Authority plays a crucial role on the internet when it comes to deciding who to trust and what information is reliable or credible (Fritch et al., 2001; Rieh, 2002). Frank Pasquale (2015) goes even further, arguing in his book The Black Box Society that authority is increasingly expressed algorithmically (8).

Pasquale builds upon Clay Shirky (2009), who is credited as one of the first to use the phrase “algorithmic authority,” describing the trust that people place in aggregators like Google, Twitter, or even Wikipedia. However, Shirky described a visible or consensual form of algorithmic authority, and we’ve since moved to a more pervasive form, manifest in newsfeeds and algorithmically sorted media.

Do social media users even understand the trust that they place in these algorithms? Trust is salient when we know there is a risk. When the risk is invisible or misunderstood, people are blind to their own vulnerability (Luhmann, 1979).

Three examples that embody and employ algorithmic authority are Netflix, Uber, and Bitcoin. Each has a different function and operates in a different sector of the economy, yet all share algorithmic media as the basis of their authority.

Netflix

Netflix has rapidly risen from being a company that distributes media to a company that shapes the media industry as we know it (Greene, 2016). Their data-driven production model not only allows the company to strategically purchase and commission content that they know their audience will like, but also to provide content to users that will maintain and grow their subscriber base.

At the heart of Netflix’s success is their recommendation engine, and the algorithmic culture that it enables (Hallinan, 2016). It not only trains users to develop habits like “binge viewing,” in which an entire TV season or series is watched in one continuous session (Chmielewski, 2013), but also encourages us to trust and depend upon the recommendation engine (Amatriain, 2016).
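To make the idea concrete, here is a minimal sketch of item-based collaborative filtering, one family of techniques that recommendation engines like Netflix’s build upon. Netflix’s actual system is proprietary and far more elaborate; the titles and ratings below are invented for illustration.

```python
# A minimal, hypothetical sketch of item-based collaborative filtering.
# The titles and the toy ratings matrix are invented for illustration.
import numpy as np

# Rows are users, columns are titles; 0 means "not yet rated".
titles = ["Title A", "Title B", "Title C", "Title D"]
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def item_similarity(r):
    """Cosine similarity between title columns."""
    norms = np.linalg.norm(r, axis=0)
    return (r.T @ r) / np.outer(norms, norms)

def recommend(user_ratings, sim, top_n=2):
    """Score unrated titles by their similarity to titles the user rated."""
    scores = sim @ user_ratings          # weight each title by the user's ratings
    scores[user_ratings > 0] = -np.inf   # never re-recommend an already-rated title
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if np.isfinite(scores[i])][:top_n]

sim = item_similarity(ratings)
for idx in recommend(ratings[0], sim):
    print(titles[idx])  # what the engine would surface for user 0
```

Even this toy version shows the loop that builds habit and trust: the more you rate and watch, the more confidently the engine selects what you see next.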

The authority invested in the Netflix engine has not only given us the phrase “Netflix and chill” to describe how people can reliably spend an evening that will result in sex (Roose, 2015); it also places increasing power in the hands of the company when it comes to deciding what will be produced.

Netflix has successfully leveraged this algorithmic authority to lead their industry, shaping both audience habits of consumption and how content is produced and distributed across the global cultural industries.

Uber

Uber is often mistakenly described as a transportation company, but they are actually an algorithmic media company that focuses on gathering, analyzing, and using data for all sorts of business applications, of which transportation is but one (Hirson, 2015). Like Netflix, Uber is a media company: their algorithm is the basis by which people interact with them, and their mobile app is the primary interface.

From a media perspective, Uber uses their algorithm to control their drivers (Rosenblat et al., 2015), and this algorithm is the primary means by which drivers (and users) engage with the company. The power this algorithm wields over drivers is significant, influencing how they work and how they drive (Yadron et al., 2016). The company also uses additional algorithms, often without the knowledge or consent of users, to monitor such things as how fast the vehicle is moving (Hill, 2016).
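To illustrate how trivially such monitoring can be done from a phone, here is a hypothetical sketch that estimates speed from successive GPS pings and flags segments over a threshold. The trace and the threshold are invented, not drawn from Uber’s actual systems.

```python
# A hypothetical sketch of estimating vehicle speed from GPS pings.
# The data and thresholds are invented, not taken from Uber's systems.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_speeding(pings, limit_kmh=100.0):
    """Yield segments where the estimated speed exceeds the limit.

    pings: list of (timestamp_seconds, latitude, longitude).
    """
    for (t1, la1, lo1), (t2, la2, lo2) in zip(pings, pings[1:]):
        hours = (t2 - t1) / 3600.0
        if hours <= 0:
            continue
        kmh = haversine_m(la1, lo1, la2, lo2) / 1000.0 / hours
        if kmh > limit_kmh:
            yield (t1, t2, round(kmh, 1))

# Toy trace: the second segment covers roughly 1.1 km in 30 seconds,
# well over 100 km/h, so it gets flagged.
trace = [(0, 43.6500, -79.3800), (30, 43.6510, -79.3810), (60, 43.6610, -79.3830)]
print(list(flag_speeding(trace)))
```

The point is less the arithmetic than the asymmetry: the driver’s phone produces this data constantly, and the company alone decides what is computed from it.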

Certainly the relationship between algorithm and driver is not new: the phenomenon of “death by GPS” (Milner, 2016) offers the ridiculous yet tragic example of people blindly following directions from their navigation systems, to the extent that they are led into harm’s way.

The authority of the Uber algorithm is considerable, and should be part of the ongoing debate around regulating the rapidly growing company. Yet because people focus on Uber’s impact on the taxi industry, they miss this growing algorithmic authority.

Like Netflix, Uber has converted their algorithmic authority into a kind of industrial leadership that has them setting the pace for the transportation industry and, increasingly, the logistics industry. Uber has already influenced public policy with regard to the regulation of taxis and limousines. It will be interesting to see what happens if and when they use the data they possess to influence other elements of public policy.

Bitcoin

Bitcoin is a distributed peer-to-peer cryptocurrency (Nakamoto, 2008) that, while gaining global notoriety, remains a mystery to most people. For proponents of cryptocurrency, however, Bitcoin represents a kind of algorithmic authority that engenders greater trust than traditional financial institutions (Lustig et al., 2015).

Bitcoin is free and open source software that employs an open ledger called a “blockchain,” secured by strong cryptography. This blockchain is what enables people to trust the algorithm. As an algorithmic media system, this authority governs a currency and financial transaction platform without the backing of a major institution or central bank. It is the algorithm (as media) that gives the system authority, and its transparency is what leads people to trust and believe in it.
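The mechanism behind that trust can be made concrete. In the minimal sketch below, each block commits to the cryptographic hash of its predecessor, so tampering with any past transaction is immediately detectable. Real Bitcoin blocks add proof-of-work, Merkle trees, and digital signatures on top of this; the toy ledger here shows only the chaining.

```python
# A minimal sketch of the hash chaining behind a blockchain's tamper-evidence.
# Real Bitcoin blocks also carry proof-of-work, Merkle trees, and signatures;
# this toy ledger shows only why editing history is detectable.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Add a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """True only if every block still points at its predecessor's real hash."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev_hash"] != block_hash(prev):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(ledger))                         # True
ledger[0]["transactions"][0]["amount"] = 500  # tamper with history
print(verify(ledger))                         # False: the chain no longer checks out
```

Because anyone can run this verification, trust shifts from an institution’s reputation to a computation anyone can repeat.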

The success of Bitcoin’s form of algorithmic authority has led to increased interest in the blockchain software that drives it, with a range of similar applications seeking to translate this kind of algorithmic authority to areas beyond finance, including law, health, and general business (Swan, 2015). This also includes the desire to create a “DAO,” or decentralized autonomous organization: a corporation entirely controlled and staffed by algorithms (Pangburn, 2015).

Certainly one lasting impact of Bitcoin and blockchain technology is to foster an understanding of money as media, and a growing media literacy around financial and legal technology. This in itself challenges the traditional authority associated with the control and management of currencies.

What Netflix, Uber, and Bitcoin have in common is a growing capacity to leverage algorithmic media to gain an edge in their respective industries. The data they collect and employ (via algorithms) gives them a competitive advantage over entities that cannot wield the same capacity. Each of the three represents a different kind of algorithmic organization: Netflix as a company, Uber as a network of suppliers, and Bitcoin as a decentralized autonomous organization. All three wield a form of algorithmic authority via algorithmic media to manage and control their institutions.

The Ideology of Algorithms

These three examples of algorithmic authority demonstrate how this fledgling concept is starting to spread and apply to different sectors of our society. Their growing influence demonstrates a need to critically assess the power of algorithms and the authority we increasingly invest in them.

Algorithmic authority evokes the notion of algorithmic ideology, as we employ traditional political theory to understand what agenda or interests lie behind the exercise of power. Search engines, for example, have become an indispensable and authoritative part of our society: we turn to them to find all sorts of information, and trust the results they offer.

Astrid Mager (2012) argues that search engines have a capitalist ideology embedded in their fabric and rules of operation, in how they treat information and manage advertisements. Mager argues that we need to move beyond the impacts of search engines to understand the social practices and power relations involved. Are users autonomous in their use of search engines, or are they influenced by the way these engines co-display ads and results?
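Mager’s claim can be made concrete with a toy model. In the hypothetical ranking function below, relevance is blended with advertiser payment; the documents, scores, and weights are all invented, but they show how commercial interest can be baked into what users see first.

```python
# A hypothetical toy model of "embedded capitalist ideology" in ranking:
# results are ordered by a blend of relevance and advertiser payment.
# All documents, scores, and weights are invented for illustration.

def rank(results, bid_weight=0.5):
    """Order results by relevance plus a paid component.

    results: list of dicts with 'url', 'relevance' (0..1), 'bid' (dollars).
    With bid_weight=0 this is a purely relevance-driven engine;
    raising it lets money reorder what users see first.
    """
    def score(r):
        return (1 - bid_weight) * r["relevance"] + bid_weight * min(r["bid"], 1.0)
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "https://example.org/deep-analysis", "relevance": 0.9, "bid": 0.0},
    {"url": "https://example.com/sponsored-page", "relevance": 0.5, "bid": 0.9},
]

for r in rank(results, bid_weight=0.0):
    print("organic  :", r["url"])  # relevance alone puts the analysis first
for r in rank(results, bid_weight=0.6):
    print("monetised:", r["url"])  # the sponsored page now outranks it
```

The ideology lives in a single parameter, invisible to the user, which is precisely why Mager asks whether users of such engines are autonomous at all.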

Certainly there have been attempts to measure the economics of algorithmic selection (Latzer et al., 2015), the way in which algorithms select certain pieces of information and not others. If search engines do have an embedded capitalist ideology, for example, it would be reflected in the results or items that are selected. Latzer’s work is fairly thorough, both in mapping out the various roles that algorithms play in this emerging media marketplace and in documenting the tendency towards media concentration (12). Combined with the risks that users face as a consequence of algorithmic bias, this raises real and legitimate needs for legislation to ensure fairness and transparency (27).

This is why it is also important to consider governance issues, not just around larger internet economies, but also around the micro communities that comprise a huge part of the web itself (Rheingold, 2007). In 1996, Tamir Maltz began to map out the power dynamics that exist within internet communities, and the way in which these communities develop their own customary legal culture that allows for ongoing governance and stability. However, this type of research currently excludes the role of algorithms, suggesting that such scholarship needs to be updated to incorporate our growing understanding of them.

David Beer (2009) argues that the concept of algorithmic power can help us understand how communities and participatory web cultures are shaped and governed by algorithms. Beer is critical of the language used to promote and sell social media, and calls for greater discussion and debate around how that power should be regulated.

However, the issue here is that algorithms are playing a growing role in how we find information, and as a result we invest in them a kind of unaccountable authority that shapes how our communities are structured and operate. If social media is about sharing information with friends, it is the algorithms that stand between us and our social networks, acting as an authority that determines what we will and will not see.
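As a hypothetical illustration of that gatekeeping, consider a toy feed that ranks posts by predicted engagement. The posts and the scoring formula below are invented, and real newsfeeds use far richer signals, but the authority exercised is the same in kind: every post exists, yet the algorithm decides which ones are actually seen.

```python
# A hypothetical sketch of engagement-ranked feed gatekeeping.
# The posts and scoring formula are invented for illustration.
import math
import time

def feed(posts, slots=2, half_life_hours=6.0):
    """Return only the top-scoring posts; the rest are effectively invisible."""
    now = time.time()
    def score(p):
        age_h = (now - p["posted_at"]) / 3600.0
        decay = math.exp(-age_h * math.log(2) / half_life_hours)  # older fades
        return (p["likes"] + 2 * p["comments"]) * decay
    return sorted(posts, key=score, reverse=True)[:slots]

now = time.time()
posts = [
    {"author": "old friend", "likes": 1,  "comments": 0,  "posted_at": now - 7200},
    {"author": "viral page", "likes": 90, "comments": 30, "posted_at": now - 3600},
    {"author": "family",     "likes": 40, "comments": 5,  "posted_at": now - 600},
]
for p in feed(posts):
    print(p["author"])  # the low-engagement friend never appears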

For example, Philip Napoli, in a recent study of social media and the public interest (2015), argues that “meaningful discussions, for instance, regarding if and how the public interest should manifest in algorithmic construction and operation have yet to take place” (757). He joins a growing chorus of voices arguing that we need a broader discussion about algorithmic governance in the public interest.

An Ethics for Algorithms

Mike Ananny attempts this by beginning to articulate an ethics of algorithms (2015). He builds upon the idea that algorithms are political and institutional. He also acknowledges that, from an ethical standpoint, they are moving targets, as there can be a difference between how they are designed and their operational impact. Ananny does not insist on transparency, however, although he acknowledges its role; rather, he argues that transparency is just one piece of a better understanding of the ethical and broader impacts of algorithms.

Daniel Neyland (2016) is another scholar who brings ethics into the discussion of algorithms, arguing for greater accountability and the creation of ethical algorithmic systems. Neyland proposes that such systems are possible given a focus on openness, transparency, and accountability, achieved by challenging the narrative associated with an algorithm and ensuring that it is accurate and informs users of what it does and how it does it.
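What “informing the user” might look like can be sketched in code: a decision function that returns its verdict together with the rule and inputs that produced it, so its narrative can be audited and challenged. The rule, fields, and numbers below are entirely hypothetical.

```python
# A hypothetical sketch of a "transparent" algorithmic decision: the
# function returns its verdict alongside the rule and inputs that
# produced it, so the outcome is open to challenge. The rule is invented.

def approve_loan(income, debt, threshold=0.35):
    """Approve if the debt-to-income ratio is below the threshold."""
    ratio = debt / income
    return {
        "approved": ratio < threshold,
        "rule": f"debt / income < {threshold}",
        "inputs": {"income": income, "debt": debt},
        "computed_ratio": round(ratio, 3),
    }

print(approve_loan(income=50_000, debt=21_000))
# The refusal arrives with the exact rule and numbers behind it,
# rather than as an opaque verdict.
```

This is a deliberately simple rule; the harder problem Neyland points to is producing equally legible accounts of decisions made by systems far more complex than a single ratio.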

Certainly if we are to accept that algorithmic authority is actively playing a role in how people perceive their world, who they trust, and what they believe, then we need to look at issues of ideology, governance, and ethics (Wagner et al, 2015). If we are indeed governed by algorithmic authorities, should we not be able to hold them accountable if not actively challenge their authority?

As our authorities and systems of governance become automated, transparency becomes a requirement for ensuring that they operate and act as we would expect and as we have consented (Pasquale, 2011).

If we examine algorithmic authority outside of a democratic system, the notion of algocracy (Aneesh, 2009) becomes relevant. The term describes governance by algorithms that reduces or eliminates opportunities for human participation (Danaher, 2016): an automation of bureaucracy that begins by managing labour but expands to influence almost all aspects of society.

Perhaps a chilling example of this is the use of algorithms by US intelligence agencies to determine who should be killed by Unmanned Aerial Vehicles (UAVs). As details of these programs became public through documents leaked by Edward Snowden, a number of experts warned that, without algorithmic transparency to audit how these kill decisions were made, there are reasonable grounds to believe many people were killed in error and without cause (Grothoff et al., 2016). As a growing phenomenon, “algorithmic war” transforms all sources of data into potential intelligence to be used by militaries as part of airstrikes and other ongoing campaigns (Amoore, 2009).

There is also growing evidence that the existence of such algorithmic surveillance programs has a chilling effect on free speech and expression, as people aware of government surveillance choose to say less rather than be subject to suspicion (Stoycheff, 2016). Yet people who are not aware of such programs are still subject to this surveillance, illustrating how the “digital divide” affects citizens’ privacy vis-à-vis their knowledge of such systems (Clark, 2016).

Beyond national security, a growing number of algorithms are employed by government agencies to assist in their work, a large proportion of which focus on combatting fraud and measuring compliance with government regulations (Eubanks, 2016). However, without transparency it is not clear how effective these measures are, or how they may be unfairly targeting specific segments of the population.

However, perhaps the algocracy is not yet entrenched, and instead we live in a hybrid or transitional system in which humans are still actively involved in governing alongside algorithms. Ian Bogost (2015) refers to this as a computational theocracy, in which algorithms have a kind of religious or otherworldly power, in concert with the engineers and bureaucrats who together are crafting this brave new world, all without our scrutiny or general knowledge.

Therefore, resistance to computational theocracy or algocracy involves transparency that would enable increased human access and participation. Otherwise we abdicate our ability to govern ourselves, as well as our ability to regulate or control our institutions.

Continued in Part 7