Algorithms as Media
Part one in an ongoing series

Our media, tastes, politics, policies, and trends are increasingly influenced by, if not dependent upon, algorithms. An algorithm is a recipe for calculating, processing, or sorting data. As a technique, algorithms have grown complex enough to enable automated reasoning.
As media, algorithms enable automated media production and influence how media is produced even when it is not automated.
This automated production extends the ability to create and distribute media to all sectors and potential audiences. A growing majority of people now use algorithmic media as their primary source of news (Gottfried et al., 2016), on social media sites and via search engines. As a result, we live in a kind of Black Box Society (Pasquale, 2015), in which the opaqueness of algorithms reduces or eliminates media accountability.
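To make the idea of an algorithm as a recipe for sorting data concrete, the sketch below, written in Python purely for illustration, ranks news stories by an assumed engagement score. The weights, field names, and data are invented for this example and are not drawn from any real platform; actual ranking systems are vastly more complex and, as noted above, largely opaque.

```python
# A toy illustration of an algorithm as a "recipe" for sorting data:
# rank news stories by a simple, assumed engagement score.
# The weights and fields here are hypothetical, not taken from any real platform.

stories = [
    {"headline": "Local election results announced", "clicks": 120, "shares": 15},
    {"headline": "Celebrity spotted at cafe", "clicks": 900, "shares": 300},
    {"headline": "City council debates budget", "clicks": 60, "shares": 5},
]

def engagement_score(story):
    # An arbitrary weighting: shares are assumed to matter more than clicks.
    return story["clicks"] + 5 * story["shares"]

# The "feed" a reader sees is simply the stories sorted by that score.
feed = sorted(stories, key=engagement_score, reverse=True)

for story in feed:
    print(story["headline"], engagement_score(story))
```

Even in this toy form, the choice of weights is an editorial judgment embedded in code, which is the sense in which algorithmic ranking is never merely neutral math.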
Algorithmic development is occurring at such a rapid pace that researchers are struggling to keep up with its impact, effects, and growing role in our expanding media environment. Much as cigarettes were once seen as sexy and cool until we came to understand their catastrophic impact on health, algorithms are being embraced before we have had an opportunity to understand their effects.
For example, deep learning and machine learning now threaten to place algorithms beyond the realm of rational explanation and slip them instead into the world of magic and myth (Ziewitz, 2015). Google claims it cannot explain how some of its current algorithms operate, as they have been developed using machine learning techniques established without transparency (Dhar, 2016). The application of deep learning and machine learning is expanding at such a pace that little effort is being made to document their evolution or establish an audit trail.
Algorithmic transparency has therefore become an increasingly important public policy issue, one that affects researchers' ability to understand how media production is changing. Algorithmic opaqueness also hampers a democratic society's ability to properly regulate media and enable democratic participation. There is a false belief that algorithms, like data, are objective because they are grounded in mathematics; like all technology, however, they are creations of human beings and, as such, inherently subjective.
This series seeks first to establish the relationship between algorithms and democratic publics, and then to examine how institutional theory provides a frame for understanding the impact algorithms are having on our democratic society. The overall focus is on algorithms as media, and in particular on algorithms and audience research, where the concept of automodernity provides valuable insight. Automodernity is the argument that we seek automation as a means of achieving greater autonomy: that in embracing the automation of media via algorithms, we are granting ourselves greater autonomy. Yet is that really the case?
Institutional theory opens up the perspective that algorithms have biases that often serve the agendas of the platforms and companies operating them. This facilitates a kind of algorithmic authority; the examples of Netflix, Uber, and Bitcoin are presented as ways in which algorithmic media exercise new kinds of control and influence. Further, the example of algorithms in public safety, in the form of predictive policing, underscores the need for an appropriate regulatory response.
What kind of public policy response is required when it comes to the rise of algorithmic media? A range of options will be considered, with special emphasis on algorithmic transparency.