Are Algorithms Institutions?
Part three in an ongoing series

To many, algorithms remain a mystery, a kind of hidden force that shapes the information they see on Facebook, or the results they get on Twitter (Hamilton et al, 2014). Many people are unaware that algorithms play any part in the information they receive: the better the algorithm, the less visible it is. Algorithms also play a hidden role in sorting the information used to judge and understand us, whether in the form of credit scores or insurance rates (Citron et al, 2014). And even those who use and access algorithms explicitly may not understand how they work, so the results and information that algorithms produce can appear magical (Bucher, 2016).
Unfortunately, if we assume that algorithms are invisible and capable of magic, then we have no hope of understanding them, let alone holding them democratically accountable.
Philip Napoli (2014) argues that institutional theory is a helpful analytical framework by which to examine and study the growing role of algorithms. Institutional theory examines how rules, structures, and systems shape behaviour and establish social norms. Napoli argues that algorithms are themselves institutional in nature (341), given the evolving roles and functions they serve in the dynamics of contemporary media systems, as well as in a growing number of other sectors.
To help establish this, Napoli delves into institutional theory and the various kinds of institutions, both formal and informal. Specifically, Napoli cites the work of Ronald Jepperson, who defines institutions to include informal routines, norms, rules, and behavioural guidelines.
It is a mistake to think of institutions as exclusively formal, or even material. Rather, an institution can be as informal or immaterial as any other social structure we see emerging in the internet era.
For example, Christian Katzenbach (2012) argues that algorithms are institutions because of the way they constrain and facilitate how we receive, share, and consume media. On Facebook we don’t see everything our friends post, only what the news feed algorithm has selected for us. Katzenbach goes further, arguing that technologies have embedded political ideologies that reflect how they were designed and deployed. The Facebook algorithm has a bias that reflects the beliefs of the engineers and designers who created it.
In this respect, algorithms are themselves institutions, serve institutions, and create institutions, in the form of the audience configurations, networks, and relationships they produce. If the “medium is the message” (McLuhan, 1994), then algorithms are both the path and the vehicle by which we engage with media. They shape both the content and the form by which we find and consume information.
A Response to or the Cause of Information Overload
Perhaps we take for granted the way in which the web and social media have created an information explosion. There is far too much data for our brains to process, so we have become reliant upon algorithms to sort through, categorize, and prioritize which information we will actually see, and maybe read.
Nicholas Diakopoulos (2015) identifies this algorithmic power as embodied in the decisions we ask them to make with regard to prioritization, classification, association, and filtering.
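To make these four decision types concrete, here is a deliberately toy pipeline in Python. The posts and rules are entirely invented, and this is not any platform’s actual code; it simply shows filtering, classification, association, and prioritization applied to a handful of items.
```python
# Toy pipeline (invented data, not any platform's real code) showing the
# four decision types Diakopoulos names.
posts = [
    {"text": "Election results are in", "likes": 920, "topic": "politics"},
    {"text": "My cat did a thing",      "likes": 15,  "topic": "pets"},
    {"text": "BUY NOW cheap pills",     "likes": 2,   "topic": "spam"},
]

# Filtering: decide what is excluded from view entirely.
visible = [p for p in posts if p["topic"] != "spam"]

# Classification: decide what category each item belongs to.
for p in visible:
    p["category"] = "news" if p["topic"] == "politics" else "social"

# Association: decide what counts as related to what (here, shared category).
related = {p["text"]: [q["text"] for q in visible
                       if q is not p and q["category"] == p["category"]]
           for p in visible}

# Prioritization: decide what the audience sees first.
feed = sorted(visible, key=lambda p: p["likes"], reverse=True)
for p in feed:
    print(p["likes"], p["category"], p["text"])
```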
Similarly, Philip Napoli (2014) notes that: “one of the key functions that algorithms perform in contemporary media consumption is to assist audiences in the process of navigating an increasingly complex and fragmented media environment.” (345)
This process of navigation results in a kind of institutional relationship. James Webster (2010) describes this in the way social media structure our consumption of media: he outlines user information regimes, and the co-operation that takes place between individuals and institutions over what is consumed and what is not, i.e. what gets attention (Webster, 2011).
For example, trending topics rise in visibility because people are talking about them, and once a topic is trending, it garners even more attention precisely because it is trending. This resembles a feedback loop in which users and algorithms dance together, creating a structure that directs attention. The concept of virality is more about catching and spreading the bug than about the value of the bug itself. We don’t share something with one person, but with everyone. And we do not necessarily share something because we like it, but often because it is easy to share and will evoke a desired response from our network.
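A tiny simulation makes the loop visible. Assume, purely for illustration, that a platform surfaces each topic in proportion to the square of its current mentions, so a trending list amplifies its leaders more than linearly:
```python
import random

random.seed(1)

# Toy simulation of the trending feedback loop. The squared weighting is
# an invented assumption standing in for "trending lists amplify leaders";
# visibility then begets visibility.
mentions = {"topic_a": 100, "topic_b": 95, "topic_c": 90}

for _ in range(2000):
    weights = [m ** 2 for m in mentions.values()]
    topic = random.choices(list(mentions), weights=weights)[0]
    mentions[topic] += 1

print(mentions)  # the early leader tends to run away with the attention
```
In a dynamic like this, small early differences compound: whichever topic happens to lead at the start tends to absorb most of the subsequent attention.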
Even journalists need to consume information as part of their process of production, and they face similar challenges with the volume and complexity of that information; thus they too turn to algorithms as tools to help in their analysis and reporting (Broussard, 2014). Once again, however, this relationship becomes a structure unto itself, as those same journalists abandon non-algorithmic modes of discovery in favour of faster and seemingly more convenient sourcing via algorithm.
The problem in such a situation is not knowing what is not there. While we may see algorithms as a response to information overload, they are also its cause. They reinforce their authority and institutional power as we use them, increasing the volume of information we consume and deepening our dependence on their ability to sort, filter, and prioritize it.
Algorithms That Socialize and Normalize Behaviour
Taina Bucher (2012) looks at algorithmic power and the threat of invisibility on Facebook: the news feed creates a culture of attention that encourages users to conform to the logic of the algorithm in order to win the likes and attention of their friends, should their friends end up seeing the post at all. It is a situation that rests on algorithmic filtering.
Social media users have a range of incentives to post and participate on these platforms, the largest being the desire for attention from friends and social contacts. The algorithms have a specific logic that facilitates or rewards this attention, and that logic directly influences what we post and how we interact (Bucher, 2016).
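Bucher’s analysis centred on Facebook’s then-publicized EdgeRank formula, which scored each story roughly as the product of affinity (how close you are to the poster), edge weight (the type of interaction), and time decay. A toy version, with invented numbers, shows how that logic decides whose posts surface:
```python
def edge_score(affinity, weight, age_hours, decay=0.1):
    """EdgeRank-style score: affinity x edge weight x time decay.
    The decay rule and every number here are illustrative inventions."""
    return affinity * weight / (1.0 + decay * age_hours)

# Hypothetical stories competing for a slot in one user's news feed.
stories = [
    ("close friend's photo, 8h old",  edge_score(0.9, 1.5, 8)),
    ("acquaintance's status, 1h old", edge_score(0.2, 1.0, 1)),
    ("close friend's status, 1h old", edge_score(0.9, 1.0, 1)),
]

for name, score in sorted(stories, key=lambda s: s[1], reverse=True):
    print(f"{score:.2f}  {name}")
```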
Algorithms are also employed as enforcement mechanisms, providing a first line of defence against abusive language and behaviour that sites have deemed inappropriate (Filmar et al, 2016). They literally possess, and exercise, the power to make offending content and users disappear.
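A deliberately naive sketch shows the shape of this first line of defence. Real platforms rely on trained classifiers rather than a simple blocklist, and every term below is invented; the institutional point is only that a rule set, not a person, decides what quietly disappears:
```python
from typing import Optional

# Naive first-line moderation sketch. The blocklist terms are invented
# placeholders, not any platform's real rules.
BLOCKLIST = {"slur1", "slur2", "threat-phrase"}

def moderate(comment: str) -> Optional[str]:
    """Return the comment if allowed, or None to make it disappear."""
    tokens = set(comment.lower().split())
    return None if tokens & BLOCKLIST else comment

queue = ["nice post!", "you absolute slur1"]
published = [c for c in queue if moderate(c) is not None]
print(published)  # the removed comment leaves no trace for other users
```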
This interaction between individuals and institutions, as facilitated by the algorithm, creates an emerging institutional structure.
Consider the example of Netflix’s recommendation engine: users rely on it to find content, and yet the engine depends upon users’ input to produce accurate recommendations (Keating, 2012). The interface for browsing everything Netflix makes available is deliberately opaque, while the recommendation engine is prominent and dominates the content discovery process. When it works we love it; when it fails and recommends something we don’t like, we take notice and wonder what went wrong.
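A minimal user-to-user collaborative filtering sketch, with invented viewers and ratings (and nothing like Netflix’s actual, far more elaborate system), illustrates that co-dependence: the engine can only recommend what other users’ input has taught it.
```python
# Minimal collaborative filtering sketch. Viewers, titles, and the 1-5
# ratings are all invented for illustration.
ratings = {
    "ana":  {"Dark": 5, "Okja": 4, "Mindhunter": 5},
    "ben":  {"Dark": 5, "Okja": 3, "Mindhunter": 4, "Narcos": 5},
    "cleo": {"Okja": 2, "Narcos": 4},
}

def similarity(a, b):
    """Average closeness on shared titles (ratings run 1-5, max gap 4)."""
    shared = ratings[a].keys() & ratings[b].keys()
    if not shared:
        return 0.0
    return sum(4 - abs(ratings[a][t] - ratings[b][t]) for t in shared) / (4 * len(shared))

def recommend(user):
    """Suggest unseen titles rated highly by the most similar user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return [t for t, r in ratings[nearest].items()
            if t not in ratings[user] and r >= 4]

print(recommend("ana"))  # ['Narcos'] -- driven entirely by ben's input
```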
Philip Napoli (2014) notes that “two of the primary functions that algorithms are performing in the media production realm at this point are: (a) serving as a demand predictor and (b) serving as content creator.” (348)
The demand-predictor element of algorithmic production is well documented by Thomas Davenport and Jeanne Harris in their work on what people want and how to predict it (2009). As more data becomes available, media companies face increasing pressure to rely upon algorithms to understand the preferences and desires of their audience. Those who do not shift their production model to incorporate data-driven decision making are left at a disadvantage compared to those who do. Netflix, for example, has a huge advantage over other media companies given its significant ability to collect data about the content it produces and distributes.
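In spirit, a demand predictor is a regression over historical audience data. The sketch below, with invented features and viewership figures, fits past performance to a few content attributes and scores a prospective commission; real systems draw on vastly richer data:
```python
import numpy as np

# Toy demand predictor: fit past viewership to simple content features.
# All numbers are invented. Features: [star_power, genre_popularity,
# franchise_flag]; targets are week-one viewers in millions.
X = np.array([
    [0.9, 0.7, 1.0],
    [0.3, 0.8, 0.0],
    [0.6, 0.4, 0.0],
    [0.8, 0.9, 1.0],
])
y = np.array([8.1, 4.2, 3.0, 9.0])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

candidate = np.array([0.7, 0.8, 1.0])  # the pitch being evaluated
print(f"predicted demand: {candidate @ coef:.1f}M viewers")
```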
Napoli (2014) argues that this is an example of “the institutionality of algorithms in that they facilitate and constrain the behaviors and cognitions of both media organizations and media users.” (353)
Napoli’s approach to institutional theory builds upon his earlier work on audience evolution with regard to new technologies (2011); together, these perspectives provide a lens for looking at how the industry is changing and how media audiences are changing along with it.
Perhaps this is best embodied by the way in which the practice of journalism is being impacted by this institutionality of algorithms (Napoli, 2014). Specifically, the algorithmic conception of the audience is changing the way in which journalists produce content, and how that content is received (Anderson, 2011).
Rather than chasing stories on their merit, news companies let trending topics and popularity, as measured by algorithms, influence what content they produce. The audience is no longer a silent entity: those who tweet and comment increasingly capture the attention of journalists and producers, exerting an outsized influence on what is deemed important and produced.
It’s not just professional journalists who are being shaped and driven by algorithmic media, but all journalists, paid or not, citizen journalists included (Goode, 2009). Then there is the rise of algorithmic journalism: articles written by software rather than humans, still relatively few in number (Dorr, 2015). We should nevertheless anticipate their growing influence and output by creating a sociology of algorithmic journalism to evaluate their work and impact (Anderson, 2013).
There is also room in this context to look at how algorithms impact other work, specifically the management of workers. While Uber is certainly considered a “big data” company (Hirson, 2015), its drivers interface with the company via an app, and the algorithm behind that app has considerable control over what they do, how, and when (Lee et al, 2015).
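The flavour of that control can be sketched in a few lines. The rules below are not Uber’s actual logic, and every threshold is invented; they simply show how an app can set the fare and pick the driver with no human manager in the loop:
```python
# Invented algorithmic-management rules, illustrating the kind of control
# Lee et al describe; not Uber's real pricing or dispatch logic.

def surge_multiplier(open_requests: int, available_drivers: int) -> float:
    """Raise prices as demand outstrips supply (illustrative rule)."""
    if available_drivers == 0:
        return 3.0
    ratio = open_requests / available_drivers
    return min(3.0, max(1.0, round(ratio, 1)))

def dispatch(rider_pos, drivers):
    """Assign the nearest driver; the driver never sees the wider pool."""
    return min(drivers, key=lambda d: abs(d["pos"] - rider_pos))

drivers = [{"id": "d1", "pos": 2.0}, {"id": "d2", "pos": 7.5}]
print(surge_multiplier(open_requests=18, available_drivers=10))  # 1.8
print(dispatch(rider_pos=6.0, drivers=drivers)["id"])            # d2
```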
Of course, in the era of social media, users themselves are workers, as they provide the content and labour that drive social media sites (Fuchs, 2010). The by-product of this unpaid labour is the personal information and data derived from users, the primary commodity that companies sell to advertisers and data brokers, and the source of the bulk of their revenue and profits (Kang et al, 2011). The convenience and ease of using these sites, thanks in large part to algorithms, largely distract users from understanding the value they provide, or their status as workers.
Reality Shaped by Algorithms
Media have traditionally shaped our perception of reality, and now that algorithms have become so pervasive and far-reaching, they play a growing role in the construction of that reality (Saurwein, 2015). Their influence increases individualization, commercialization, inequalities, and deterritorialization, while decreasing transparency, controllability, and predictability (Just et al, 2016).
The individualization comes in the form of customized, subjective interfaces created by algorithmically sorted news feeds and search results. The commercialization reflects the logic of the companies operating those algorithms: all are driven by advertising and the perpetual need to sell us more stuff. The inequalities reflect the inherent hierarchy of algorithmic media, which rank and sort users based on engagement, activity, and influence. The deterritorialization reflects an often agnostic approach to where a user is located; in spite of the personalization of location, real-time media is more about global reach than local relevance. The decreased transparency is a consequence of algorithmic opaqueness, which is also reflected in decreased control, and therefore decreased predictability.
Therefore, the algorithm as institution also impacts other institutions, and in particular we should be concerned about those charged with governing or managing democracy.
For example, Robert Epstein and Ronald Robertson have been conducting research on the Search Engine Manipulation Effect (SEME) and the way in which biased search engine results can dramatically alter the outcome of an election (Epstein et al, 2015). As more and more voters turn to social media and search engines to become informed about issues and candidates, they depend ever more upon algorithms, and the opportunity grows for those algorithms to shape their views, and thus the outcome of electoral contests.
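A toy model shows why ranking alone is enough to move exposure. Click-through rates fall steeply with position; the curve below is an invented approximation of that familiar drop-off. Reordering the very same ten results measurably shifts a candidate’s expected clicks:
```python
# Toy illustration of SEME's mechanism. The click-through-by-rank curve
# is an invented approximation; 'A' and 'B' stand for pages favouring
# rival candidates.
CTR_BY_RANK = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01]

def expected_clicks(results, candidate):
    """Expected clicks per search on pages favouring one candidate."""
    return sum(ctr for ctr, page in zip(CTR_BY_RANK, results)
               if page == candidate)

alternating = ["A", "B"] * 5          # balanced ordering
biased      = ["A"] * 5 + ["B"] * 5   # same pages, A stacked on top

print(f"{expected_clicks(alternating, 'A'):.2f}")  # 0.50
print(f"{expected_clicks(biased, 'A'):.2f}")       # 0.67
```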
A similar and rather chilling example of SEME is being piloted by Google in the UK as a measure to combat extremist ideology and terrorist activity. The company revealed to the British Parliament that it can identify an extremist or terrorist based on their search patterns, at which point the search engine replaces the content the subject was seeking with information designed to convince them to abandon terrorism (Barrett, 2016). All of this is done without the subject’s knowledge or consent. If this kind of manipulation becomes acceptable when dealing with some elements of society, what is to stop the technique from being used on others?
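Schematically, and with every term invented (this is not Google’s code), the mechanism described to Parliament reduces to flagging a user by their query history and silently substituting their results:
```python
# Schematic of the mechanism described to Parliament, not Google's code.
# The watchlist, queries, and replacement content are all invented.
WATCHLIST = {"how to join group-x", "bomb making"}
COUNTER_CONTENT = ["former-extremist testimony", "deradicalization helpline"]

def results_for(user_history, normal_results):
    """Swap results for users whose history matches the watchlist."""
    flagged = any(query in WATCHLIST for query in user_history)
    # The substitution happens without the subject's knowledge or consent.
    return COUNTER_CONTENT if flagged else normal_results

print(results_for(["weather", "bomb making"], ["ordinary search results"]))
```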
In another context, there is growing interest in the rise of high frequency trading and the role of algorithms in capital markets. While proponents argue the technology allows these markets to operate as efficiently and rapidly as possible, critics point to how these algorithms can be, and are being, used to manipulate other market participants (Arnoldi, 2016). Should we embrace the notion of “buyer beware” (even if the buyer is blind to the manipulation), or should there be greater controls to ensure capital markets are fair and transparent?
What about the new and emerging phenomenon of hack or flash crashes, which result when algorithms react to erroneous information and cause rapid volatility in capital markets? One known instance occurred when the Associated Press Twitter account was hacked and a fraudulent tweet wiped $136.5 billion off the S&P 500 index within seconds (Karppi et al, 2016). While this is not the only flash crash to have occurred, it is one of the few instances where we can understand the cause and effect. How many more will occur before regulators recognize the power that algorithms possess?
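A toy sketch suggests how such a cascade can unfold. The keywords, trading rule, and price-impact figures below are invented; only the tweet text is the real one from the AP hack:
```python
# Toy sketch of a news-reading trading algorithm turning one fraudulent
# tweet into a sell-off. The keyword list, trading rule, and price-impact
# model are invented for illustration.
PANIC_WORDS = {"explosion", "explosions", "injured", "attack"}

def sentiment_signal(headline: str) -> str:
    words = set(headline.lower().split())
    return "SELL" if words & PANIC_WORDS else "HOLD"

tweet = "Breaking: Two Explosions in the White House and Barack Obama is injured"
index_level = 1570.0

if sentiment_signal(tweet) == "SELL":
    # Many algorithms firing at once: each sale pushes the price down,
    # which triggers the next stop-loss, and so on, within seconds.
    for _ in range(5):
        index_level *= 0.998

print(f"{index_level:.1f}")  # roughly a 1% drop with no human in the loop
```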
Using the lens of institutional theory, we can begin to get a sense of the power of algorithms and the growing institutional role they play in our society. This suggests that algorithms require an institutional response, if not the kind of regulation that powerful institutions are subject to. What form that takes will be a significant political issue for us to address. Not doing so invites catastrophe: we ignore the rapid rise of powerful institutions at our peril.
However, regarding algorithms as institutions, specifically as media institutions, suggests we also look at the relationship between algorithms and audiences, in particular audience research.