How Reliable Are Algorithms?

Oftentimes, many people assume that whatever our digital devices or software tell us must be the equivalent of objective truth, because data are regarded as “neutral” and “objective”. This is particularly problematic as artificial intelligence can affect people’s daily lives, while the algorithms involved, often developed by companies, remain trade secrets and are not accessible. The algorithms remain a “black box”, and the decisions based on them remain unaccountable. Think of decisions made by a credit scoring agency about your financial status, or hiring decisions made by algorithms in HR.

In the documentary “Unheimliche Macht – Wie Algorithmen unser Leben bestimmen” (in German; roughly “Uncanny Power – How Algorithms Determine Our Lives”), about how algorithms shape our lives, the journalist Franziska Wielandt conducts a self-experiment with the software “Precipere”, used by large corporate organisations in their recruitment processes. She is visibly uncomfortable speaking to a piece of software that analyses her voice, makes inferences about her character and, based on its algorithms, turns these into suggestions for the recruiter. She also interviews the developers of the software, who insist that it is “fair and objective” and “eliminates the gut feeling” in recruiting.

However, we know from academic research that data and algorithms are not neutral at all, but reflect the social, political and historical circumstances in which they were established. The book “Sorting Things Out: Classification and Its Consequences” by Bowker and Star (1999) brilliantly illustrates this point. As Bowker and Star state, classifications always contain moral and ethical choices about what to include and, in so doing, highlight some points of view while silencing others (Bowker & Star, 1999). Kitchin, in his book “The Data Revolution” and in his article “Thinking Critically about and Researching Algorithms”, argues that data do not represent reality but rather construct the world (Kitchin, 2017). Gitelman, in her book “Raw Data Is an Oxymoron”, reminds us that data need to be understood as framed and framing (Gitelman, 2013).

In other words, data and algorithms may reflect the bias that already exists in society, with potentially profound consequences. For example, Cathy O’Neil discusses predictive policing and the way police use software to target crime in her fascinating book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. The software targets geography, not individuals. But the more crimes are detected in an area, the more police resources are poured into that area, with the consequence that “…the policing itself spawns new data, which justifies more policing. Our prisons fill up with hundreds of thousands of people found guilty of victimless crimes. Most of them come from impoverished neighborhoods, and most are black or Hispanic. So even if a model is color blind, the result of it is anything but. In our largely segregated cities, geography is a highly effective proxy for race.” (O’Neil, 2017).
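To make this feedback loop concrete, here is a minimal toy simulation (my own illustration, not O’Neil’s model, and all numbers are made up): two neighbourhoods have exactly the same underlying crime rate, but patrols are allocated in proportion to previously recorded crime, and patrols only detect crime where they are actually looking.

```python
import random

random.seed(42)

# Toy illustration of the feedback loop O'Neil describes: two neighbourhoods
# with the SAME underlying crime rate, but patrols follow past *records*,
# and more patrols mean more recorded incidents.
TRUE_CRIME_RATE = 0.1          # identical in both neighbourhoods (assumed)
TOTAL_PATROLS = 100            # patrols to allocate each year (assumed)
recorded = {"A": 60, "B": 40}  # historical records already skewed towards A

for year in range(10):
    total = sum(recorded.values())
    # Allocate patrols according to past records, not the (equal) true rate.
    allocation = {hood: round(TOTAL_PATROLS * recorded[hood] / total)
                  for hood in recorded}
    for hood, patrols in allocation.items():
        # Each patrol can only detect crime where it is actually looking.
        new_detections = sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols))
        recorded[hood] += new_detections

print(recorded)  # records for A keep outpacing B despite identical crime rates
```

Even though this toy model never sees race or any individual attribute, the initial skew in the historical records compounds year after year, which is exactly the “geography as a proxy” effect O’Neil describes.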

Image: Predictive policing (The Intercept)

In her book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor”, Virginia Eubanks examines the social impact of automated decision-making in public services through three case studies in welfare provision, child protection and homelessness services in the US. Her case studies highlight how automated systems fail to address the needs of the poorest and most vulnerable people. Louise Russell-Prywata, an Atlantic Fellow for Social & Economic Equity at LSE’s International Inequalities Institute, provides an excellent review of this book.

The work of the AI Now Institute has been inspiring in this regard, highlighting algorithmic bias and the lack of accountability around it. The AI Now Institute is an interdisciplinary research centre at New York University that contributes to research on the social impact of artificial intelligence. They argue in their 2018 report: “Around the world, government agencies are procuring and deploying automated decision systems (ADS) under the banners of efficiency and cost-savings. Yet many of these systems are untested and poorly designed for their tasks, resulting in illegal and often unconstitutional violations of individual rights. Worse, when they make errors and bad decisions, the ability to question, contest, and remedy these is often difficult or impossible.” (AI Now, 2018). The 2018 symposium featured many important and fascinating discussions and is well worth watching.

Image: AI Now Symposium 2018

Their latest report, “Discriminating Systems: Gender, Race and Power in AI” (2019), addresses the lack of diversity in the development of artificial intelligence. Meanwhile, the AI ethics board established by Google was dissolved after considerable controversy, as Google had recruited among its members Kay Coles James, who had publicly opposed LGBT and trans rights. Moreover, the board had no real power to intervene in decision-making at Google, so perhaps it was more of a PR stunt than a sign that Google takes diversity seriously. Hopefully, though, the work already done on the socio-political impact of algorithms and software will lead to better regulation, whether within companies or by governments, in the near future.
