Regulating Algorithmic Bias

By Manu Gummi

The world today is increasingly turning toward autonomous technologies, which promise to be more efficient and effective than human activity. Algorithms, which drive a system’s decisions, effectively act as the system’s mind. At the heart of this promise is the elimination of human error and bias. Autonomous vehicles, for instance, demonstrate the potential for significantly lower error rates because they can neither be drunk nor fall asleep as a human driver can. In a hiring process, human recruiters can exhibit gender or racial biases; an algorithmic process evaluates candidates on the relevant criteria alone and, in theory, counters those biases. However, can an algorithm itself be biased? If it can, are humans able to regulate that bias? And should they?

Why regulate?

The power of code is that it affects not only how we communicate and act, but also fundamental societal values. First, code is pervasive: whether facilitating shopping for detergent online or informing prison sentences, algorithms drive decision-making in most walks of life today. Second, code is malleable and entirely man-made; an algorithm is only as good as the data it works with and the model designed to analyze and present that data. Third, that malleability declines over time as our choices become entrenched, raising the cost of modification. For example, as more people shop online, the practice of relying on online recommendations and reviews becomes increasingly cemented; even if preferences change, modifying the code that guides today’s online shopping would be a significant and expensive technological undertaking.

Another obvious example of the pervasive power of algorithms is the problem of the internet “filter bubble.” Various internet tools like search engines, social media, and news outlets are now increasingly personalized based on individuals’ web history. Algorithms filter content to show each user material that aligns more closely with that history. This is a clear case of algorithms based on human preferences ultimately shaping human preferences. By their very nature, algorithms can affect our fundamental societal values invisibly, which makes ensuring the fairness of such tools vital.
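This feedback loop can be illustrated with a toy sketch. All names and data below are invented for illustration; real personalization systems are far more elaborate, but the basic dynamic is the same: a recommender that ranks items by overlap with a user’s click history keeps surfacing more of what the user has already seen.

```python
from collections import Counter

def recommend(history, candidates, k=2):
    """Rank candidate items by how many topic tags they share with the
    user's click history -- a crude stand-in for personalization."""
    seen_topics = Counter(tag for item in history for tag in item["tags"])

    def score(item):
        return sum(seen_topics[tag] for tag in item["tags"])

    return sorted(candidates, key=score, reverse=True)[:k]

# A user who has only ever clicked on sports stories...
history = [{"tags": ["sports"]}, {"tags": ["sports", "local"]}]
candidates = [
    {"title": "Match recap", "tags": ["sports"]},
    {"title": "Budget vote", "tags": ["politics"]},
    {"title": "Stadium deal", "tags": ["sports", "politics"]},
]

# ...is shown the two sports-tagged stories; the politics story is
# filtered out, and the bubble narrows with every further click.
top = recommend(history, candidates)
```

Each click feeds back into `history`, so the scoring skews further toward what was already shown: preferences shaping the algorithm, and the algorithm shaping preferences in turn.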

The issue we focus on here is algorithmic bias. Algorithms can process large numbers of inputs, variables, and data points with a speed and reliability that exceed human capabilities. However, several recent cases in the financial technology and criminal justice fields have prompted a growing debate about algorithmic bias generating unfairness along various discriminatory lines. Because algorithms simply present the results of calculations defined by humans, using data provided by humans, machines inadvertently recreate existing human biases. Having established that algorithms are consequential and that algorithmic bias can significantly affect people, the question becomes how to address that bias to ensure a fair technological society.
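A minimal sketch, with invented data, shows this mechanism at work: a “learner” that simply estimates hire rates from biased historical records will reproduce the bias in its predictions, even though nothing in the code singles out any group for different treatment.

```python
from collections import defaultdict

def fit_hire_rates(records):
    """'Learn' a hiring rule by estimating the historical hire rate per
    group -- the pattern extracted is whatever the data contains."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# Invented history in which equally qualified groups were hired unequally
# by human recruiters: group A hired 80% of the time, group B only 40%.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 40 + [("B", 0)] * 60
)

# The fitted "model" faithfully reproduces the human bias in the data.
rates = fit_hire_rates(history)
```

The code is neutral; the data is not. Any system scoring future candidates from these rates would perpetuate the recruiters’ original bias.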

How to regulate?

In May 2014, the White House released a report exploring what the government should and should not do to protect the rights of citizens in light of changing technology. Part of what this report brings to light is the critical shift of normativity from a traditional legal order to a technologically-managed one. What ought and ought not becomes what can and cannot.

The steps taken by an algorithm depend on its author’s knowledge, motives, biases, and desired outcomes. Even “learning algorithms,” which extract patterns by analyzing available data, depend on the code that dictates the learning and on the data fed to them. This fundamental connection makes distinguishing algorithmic bias from the underlying human bias a complex challenge. In an intrinsically technological environment, the central issue becomes whether eliminating bias is reasonably practical, or even possible. Even if we ought to, can we create a regulatory environment that eliminates algorithmic bias?

Another critical dimension of the regulatory environment is a moral one. As articulated by Latour, moralists draw an absolute distinction between must-do and can-do. In this case, the element of fairness determines whether a bias is acceptable, and that acceptability determines what must be done. However, a common definition of fairness may be difficult to establish, given the variety of competing interests and viewpoints. This makes the task of creating a common regulatory structure for a morally-aspirant community extremely difficult.

That leaves a prudential approach as a starting point: what can we do, given the normative and moral limitations? Despite algorithms’ widespread use and potential to affect many lives, their complexity suggests it is too early to establish an umbrella regulatory body for them. A more granular approach that targets specific areas of application may be more practicable: oversight of those areas that directly affect human safety, and those that may result in discrimination. Specific regulations governing the use of algorithms in the criminal justice system, where “risk assessments” or “evidence-based methods” are designed to predict the future behavior of defendants and incarcerated persons, or in autonomous vehicles, where the potential scenarios are clearly and definitively identified, are prudent and practical starting points for addressing algorithmic bias in the world today.

In conclusion, the issue of algorithmic bias exposes the challenges inherent in regulating technology. In a time of rapid technological change, our conceptual apparatus of law and regulation will be central to driving technologies that are not just innovative and efficient, but also fair and responsible.

Manu Gummi is a Master of Public Policy candidate at the Goldman School of Public Policy.