The Power of Platforms and the Biases of Algorithms
Part five in an ongoing series

Algorithmic media have a kind of power; however, that power remains largely invisible to the audiences and users who interact only with the information the algorithm sorts and delivers (Tufekci, 2015). This power manifests on platforms that make extensive use of algorithms to draw and consolidate the attention of audiences.
Using institutional theory (which regards algorithms as institutions), automodernity (which acknowledges the agency we bring to our use of algorithms), and the concept of algorithmic publics (the spaces created by the use of algorithmic media), we can begin to map out an emerging field of research that measures and explores the influence of algorithmic media on audiences, and therefore on society, especially when those media are employed by platforms like Facebook, Twitter, and Google.
Tarleton Gillespie (2010) is a vocal critic of the evolving relationship between the new media industries and their audiences, taking particular issue with the use of the word “platform”, arguing that it projects a false vision of “technical neutrality and progressive openness” (360) when the reality is quite the opposite. Rather than regard platforms as a level playing field, we should see them for what they are: slanted towards the interests of their owners and serving specific commercial needs and interests.
Gillespie argues in a later paper (2015) that platforms exercise considerable power through their ability to intervene, whether by shaping how the platforms are used or, more importantly, by deleting content and removing users from the platform itself. The owners and developers of these platforms embed the logic of commercialism and encourage an ongoing narcissistic self-promotion as users compete for attention and audience.
These interventions, in particular the deletion of content and the suspension of accounts, have a hidden but tangible impact on the public culture that forms around these platforms. They shape the way we use these platforms, and they establish social norms around what is and is not acceptable use. Other researchers go so far as to argue that these platforms alter our perception of time, creating a “realtimeness” that is a byproduct of never-ending feeds of new content and information (Weltevrede et al, 2014; Kaun et al, 2014).
Emotional Manipulation
Perhaps one of the most controversial studies illustrating the power of selection on social media examined the transfer of emotional states via emotional contagion (Kramer et al, 2014). Facebook users who were shown positive posts proceeded to make positive posts themselves; users shown negative posts made similarly negative posts. While the study has faced considerable debate over its methods and ethics, it clearly demonstrates the power these platforms have (boyd, 2015), in particular their ability to leverage peer pressure, i.e. our desire to do as our friends do.
The emotional contagion experiment is also a rare instance in which Facebook shared the results of the ongoing research it conducts on its users. Whether the goal is optimizing the interface or introducing new reactions beyond the like button, Facebook is constantly experimenting on its users, watching how they react to different content and stimuli (Frier, 2016). An algorithmic platform is by definition an ongoing experiment upon its users (Assar, 2016), which is precisely why algorithmic opacity is an issue: as users, and as subjects, we have the right to know how we are being experimented upon.
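To make the mechanics of that experimentation concrete, the sketch below shows the bare bones of a feed experiment: users are silently split into arms, each arm’s feed is weighted differently, and the behavioural difference is measured. All of the names, numbers, and the engagement measure here are hypothetical illustrations, not Facebook’s actual systems.

```python
import random
from statistics import mean

# Hypothetical sketch of a platform-style feed experiment (A/B test).
# One arm sees a feed weighted toward "positive" posts, the other toward
# "negative" ones; the platform then compares how each arm behaves.
# All names and numbers are illustrative assumptions.

def assign_arm(user_id: int) -> str:
    """Deterministically bucket a user into an experimental arm."""
    return "positive_feed" if user_id % 2 == 0 else "negative_feed"

def run_experiment(users, measure_engagement):
    """measure_engagement(user, arm) -> a number the platform cares about."""
    results = {"positive_feed": [], "negative_feed": []}
    for user in users:
        arm = assign_arm(user)
        results[arm].append(measure_engagement(user, arm))
    return {arm: mean(scores) for arm, scores in results.items()}

if __name__ == "__main__":
    # Simulated users and a simulated engagement measure, for illustration only.
    users = range(1000)
    fake_engagement = lambda user, arm: random.gauss(1.0 if arm == "positive_feed" else 0.9, 0.2)
    print(run_experiment(users, fake_engagement))
```

The point is less the code than the asymmetry it encodes: the platform knows which arm you are in and what it is measuring; you do not.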
We place so much trust and blind faith in technology and algorithms that Gillespie also asks “Can an Algorithm be Wrong?” (2012), citing the instance of Twitter suppressing the #OccupyWallStreet hashtag in the early days of that social movement. This was an example of the power algorithms have to determine not only what is worthy of the public’s attention, but also what supposedly reflects the will of the people (Gillespie, 2012). When a social movement can be suppressed by an algorithm, as happened in this case, it raises the question of where else such algorithmic discrimination may occur.
Recognizing the Bias of Algorithms
Solon Barocas is a scholar who focuses on algorithmic biases: their impact on privacy (2014) and the way the biases embedded in algorithms can affect their application to big data (2016). On a broader level, Barocas is also articulating a framework to describe how algorithms govern our lives (2013).
Whether as a myth, as an interface to how we consume information, or as literal rules that sort and make decisions about us, the ways in which these algorithms govern us are growing (Ziewitz, 2015). Barocas is also attempting to articulate a regulatory response to algorithmic biases, one that may assist in the development of laws to prevent algorithmic discrimination.
However, in order to articulate a regulatory response, we also need to understand the environment that would be subject to regulation. Algorithms are taking on a growing role in governance and in making decisions about us, and that role is spreading rapidly. Unfortunately, these systems are being developed and deployed from a technocratic, engineer-centric perspective that pays little attention to context.
Kate Crawford (2016) seeks to counter this by promoting an agonistic pluralism that recognizes that algorithms (and their platforms) do not operate in a vacuum, but are part of society and act upon it. They are not purely technological constructs but social constructs that shape our behaviour while making judgements about us.
Similarly, social media scholar danah boyd notes that algorithmic discrimination is networked, and therefore social, in nature (boyd et al, 2014): we may be judged as individuals, yet we use these platforms for social ends, and our associations can say more about us than the personal information we provide on our own:
“We live in a highly networked world, in which our social connections can operate as both help and hindrance. For some people, ‘who you know’ is the key to getting jobs, or dates, or access to resources; for others, social and familial connections mean contending with excessive surveillance, prejudice, and ‘guilt by association.’” (54)
Guilt by association in the form of racial profiling is something many communities are intimately familiar with, which makes documented racial discrimination in online advertising all the more notable (Sweeney, 2013). The same advertisement can be displayed differently depending on the perceived race of the subject, in certain cases implying a criminal background or association.
In response, there are attempts to design algorithms with fairness in mind, including racial fairness, so that these sorts of biases are not built into the user experience or the system’s operation (Dwork et al, 2011). However, even these notions of fairness begin with ideological assumptions about power and inequality.
The design of an algorithm can attempt to compensate for the biases of society, but in doing so it can create biases of its own. Algorithmic fairness can only be verified in a transparent environment, which most algorithms do not operate in, given their complex and often opaque nature. Methods are being developed to detect or discover algorithmic discrimination without relying on transparency (Sandvig et al, 2014), but these are interim strategies that ultimately support the argument for transparency.
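To illustrate the kind of external audit Sandvig et al describe, the sketch below assumes we have run repeated probes against a system (for example, ad searches on names associated with different groups) and simply compares outcome rates across groups. The probe data, group labels, and the 0.8 threshold (borrowed from the “four-fifths rule” used in US employment-discrimination practice) are illustrative assumptions, not a method taken from the cited studies.

```python
from collections import defaultdict

# Minimal sketch of an external audit: we cannot see the algorithm,
# only the outcomes our probes receive, so we compare rates by group.
# Probe data and the 0.8 threshold are illustrative assumptions.

def favourable_rates(observations):
    """observations: iterable of (group, received_favourable_outcome)."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in observations:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    return {group: favourable[group] / totals[group] for group in totals}

if __name__ == "__main__":
    # e.g. whether a probe was shown a neutral ad (True) or an
    # arrest-record ad (False), grouped by the perceived race of the name.
    probes = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
              [("group_b", True)] * 75 + [("group_b", False)] * 25)
    rates = favourable_rates(probes)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, "disparity flagged" if ratio < 0.8 else "no disparity flagged")
```

The limitation is exactly the one noted above: an audit like this can surface a disparity, but without transparency it cannot say why the disparity exists.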
Frank Pasquale, a law professor at the University of Maryland, also addresses how algorithms discriminate, and the need for them to be fair, in his book “The Black Box Society” (2015). Pasquale not only looks at how algorithms shape our society but also at how they shape ourselves: how we bend to their logic in order to find work or find love, and in doing so conform to what the algorithm wants us to be, our algorithmic self (2015). He also points out that all algorithms lead to scoring, and therefore to hierarchies, and that not only are these emerging kinds of social status invisible, so too are the biases and ideologies that drive them.
However, what Pasquale fails to address is the agency we feel while using algorithms (vis-à-vis automodernity) and the way in which algorithms, as media, create publics. His legalistic approach leaves out the cultural impact, and consequently the way we experience and engage with algorithmic media.
We Are Defined by Algorithms
A great illustration of the notion of the algorithmic self is the work of Adrienne Massanari, a researcher and professor at the University of Illinois at Chicago who studies Reddit, a social-news and community site that has considerable online influence and power. Massanari writes about how the Reddit algorithm and design implicitly support anti-feminist and misogynist cultures (Massanari, 2015).
The logic of the site rewards the kind of activity and behaviour that trolls engage in. The algorithm helps to cultivate the toxicity, when it could and should be designed to do the opposite. It also enables the hierarchy that thrives on Reddit, one that allows power users to game the system and push content to the front page thanks to their high standing with the platform and its algorithm.
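The ranking logic that makes this gaming possible is easy to sketch. The snippet below is a simplified “hot” score, loosely based on the formula in Reddit’s previously open-sourced code; treat the constants as illustrative of the logic rather than as the platform’s current algorithm.

```python
from math import log10

# Simplified "hot" ranking sketch, loosely based on Reddit's previously
# open-sourced formula; constants are illustrative, not authoritative.

EPOCH = 1134028003  # a fixed moment in late 2005, in Unix seconds

def hot(ups: int, downs: int, posted_unix: float) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))   # votes count logarithmically
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    age = posted_unix - EPOCH           # newer posts earn a larger time term
    return round(sign * order + age / 45000, 7)
```

Under a formula like this, ten net upvotes are worth roughly the same as being 45,000 seconds (about 12.5 hours) newer, which is why a small, coordinated burst of early votes from well-connected power users can push a fresh post past content with far broader but slower support.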
Ted Striphas (2015) is attempting to articulate a broader algorithmic culture and, in building a theoretical and cultural basis for the rise of algorithms, worries about the loss of publicness, finding instead the emergence of an elite culture. His concern is that algorithms isolate audiences and elevate elites, those most valued within an audience, to a distinct status while relegating everyone else to lower levels. Is this meant to suggest the re-emergence of a class system, or of a high- and lowbrow culture? Or the erosion of the notion of publics?
It certainly reinforces the notion that hierarchies are a byproduct of algorithmic media, as the software sorts through audience members and values their contributions and reach accordingly. Micro-celebrities are a growing phenomenon, as these platforms give some users the ability to grow audiences and, with them, an emerging social power (Tufekci, 2013).
However, we remain stuck in a false neutrality, where not only do we deny the power these platforms have, but we also ignore the emerging elite that is a byproduct of how these platforms operate and of their embedded ideologies.
Robin Mansell (2015) argues that platforms are inherently political, biased, and specifically require a regulatory response that is “as innovative as the digital platform industry” (23).
What does this entail, however? Algorithms regulating algorithms? How can regulatory agencies be as innovative as the platforms they seek to oversee?
While there are modest attempts by regulatory agencies, like the FTC in the US and the Competition Bureau in Canada, to monitor micro-celebrities and their growing endorsement business, little attention is otherwise paid to how regulators, as agencies, can and should adapt.
All this research opens a range of questions that interrogate the power and influence of algorithms, especially with regard to media audiences.
The false neutrality that is associated with platforms and technology is a dangerous kind of ignorance. Agnotology is an emerging field that studies the making and unmaking of ignorance (Proctor, 2008). Algorithmic media are shrouded in a kind of agnotology of their own as we willingly ignore the power of their platforms and the biases of their logic. However, agnotology, as a “sociology of things that aren’t there” (Croissant, 2014) does provide an interesting method for attempting to understand the role that algorithms play. At the very least it helps us begin to map out what we don’t know in a way that may lead us to what we need to know (Weiss, 2012).
One example of agnotology is the ongoing rise of algorithmic authority.