Artificial Intelligence is increasingly used in both the private and the public sectors to make important decisions about us. At the same time, equality and anti-discrimination laws protect us against unequal, discriminatory, and unfair outcomes based on irrelevant or unacceptable differences (e.g., age or ethnicity). But what about harms resulting from algorithmic profiling based on groups that don’t map clearly onto existing protected groups? AI systems routinely allocate resources and make decisions on what adverts to show, what prices to offer, or which public services to fund, based on artificial “groups” — such as dog owners, sad teens, video gamers, single parents, gamblers, the poor — or any other conceivable grouping derived from correlations in people’s attributes and behaviours. These groups can be defined by parameters as seemingly insignificant as clicking and scrolling behaviour, or the browser you use. Worryingly, the algorithmic groups we are (silently) sorted into can be used to determine important decisions that significantly affect our lives, such as loan or job applications.
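To make the mechanism concrete, here is a minimal illustrative sketch (in Python, not drawn from the paper) of how such a group might arise: hypothetical behavioural features such as scroll speed and click rate are clustered without any human-defined labels, and membership in the resulting nameless cluster is then used in a downstream decision. All feature names, values, and the decision rule are invented for illustration.

```python
# Minimal illustrative sketch of how an "algorithmic group" might arise.
# All feature names, values, and the decision rule below are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user: [scroll speed (px/s), clicks per minute, uses Chrome]
behaviour = np.array([
    [120.0,  4.0, 1],
    [950.0, 22.0, 0],
    [110.0,  5.0, 1],
    [870.0, 25.0, 0],
    [140.0,  3.0, 0],
])

# Unsupervised clustering: the resulting groups have no names, no social
# salience, and no human reference point, yet every user lands in one.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(behaviour)

# A downstream decision keyed on cluster membership: anyone in the same
# cluster as user 1 (a "fast scroller") is quietly screened out of an offer.
eligible_for_offer = groups != groups[1]
print(list(zip(groups.tolist(), eligible_for_offer.tolist())))
```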
In her article “The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law”, forthcoming in the Tulane Law Review, Professor Sandra Wachter, who leads the Governance of Emerging Technology (GET) Research Programme at the OII, argues that these “algorithmic groups” should also be protected by non-discrimination law, and shows how this could be achieved in practice. Algorithmic groups do not currently enjoy legal protection unless they map onto an existing protected group — which, in practice, they typically don’t. In contrast to protected groups, algorithmic groups are often not based on immutable characteristics (like age or ethnicity), their attribution is often not completely arbitrary (think credit scores), members aren’t always victims of historical oppression (think sad teens), and these groups are not always socially salient (such as “people who scroll slowly”).
While traditional arguments for why we should eradicate inequality between groups often rely on criteria that don’t apply to AI (such as conscious or unconscious prejudice, or systematic oppression), the use of algorithmic groups can nonetheless cause harm by threatening our liberties and our access to basic goods and services, including education, health care, and employment. At a basic level, this is exactly the kind of harm that non-discrimination law is designed to prevent. But the very difficulty of explaining why we should protect members of algorithmic groups shows how AI fundamentally challenges our assumptions about discrimination – and why we need a new theory to properly account for it.
Professor Wachter proposes a new theory of harm — the “theory of artificial immutability” — as one way we could expand the scope of anti-discrimination law to include algorithmic groups. She describes how these groups act as de facto immutable characteristics (similar to already protected, immutable characteristics like race and age), because of the way in which AI makes decisions, which is often opaque, vague, and unstable. Individuals can’t control whether they are placed in an algorithmic group, in the same way that a person can’t control their age or ethnicity. AI therefore fundamentally erodes the key elements of “a good decision” as reflected in existing law, namely: stability (AI is dynamic), transparency (AI is opaque), empirical coherence (AI is difficult to understand), and ethical and normative acceptability (AI still lacks clear social norms). To fix this problem, greater emphasis needs to be placed on whether people have control over the criteria used to make decisions about them.
We caught up with Professor Wachter to discuss the possible harms resulting from “algorithmic groups”, and whether and how non-discrimination law could be extended to protect them.
David: We already understand our environment to be structured into arbitrary groups, to improve efficiency (fast lanes, no suitcases on escalators, etc.). What is the difference between these examples of discrimination (based on non-protected characteristics), and your example of “fast-scrollers” being denied loan applications? Is the fundamental issue here about the harms that result (mild inconvenience for people with suitcases, major inconvenience for people denied loans) – rather than the actual act of structuring people into arbitrary groups per se?
Professor Wachter: That’s right, structuring people into arbitrary groups doesn’t have to be bad, and it happens all the time. Employers need to be free to hire people based on the skills they need, for example, a particular education or work experience. However, in some contexts the law protects us against decisions based on certain characteristics, such as age. A job could specify that you need to be able to speak Spanish, but not that you need to be below the age of thirty.
Obviously, anything could be a group (dog owners, Labrador owners, Labrador owners who speak Spanish), and I’m certainly not trying to argue that we should protect every conceivable group! It would be mad to ban all attempts at grouping in order to pre-empt any discrimination – to stop advertisers from reaching out to Spanish-speaking dog owners, for example. So yes – structuring is an important part of how society works. My view is to accept that grouping happens, and to try to protect against any harm that results, rather than prohibiting grouping in the first place. This is where we need to make sure that people are protected against harmful decisions based on characteristics above and beyond those currently protected.
If a decision uses an ‘artificially immutable’ group (i.e., a group membership that is not within the subject’s control), and if that decision could negatively affect someone’s life chances, then the decision-maker should be required to justify their use of that group. Such decisions tend to occur in areas that really matter to our well-being, such as education, hiring, health, criminal justice, and immigration. The fundamental issue at stake is whether the subject has control over what the law wants them to have: an equal chance at a successful outcome. If algorithmic groups prevent you from having that chance, then there is a problem.
David: You note that it may be impossible to establish the existence of longstanding oppression, or to show that certain groups receive better treatment than others. Yet, both of these things are prerequisites for protection under the law. How can we move forward?
Professor Wachter: Yes, in some cases it might be difficult to establish that oppression is emerging, because these groups are ephemeral and exist only for a short period of time. Today, you did not get the job because of your browser, but tomorrow it might be your shopping habits that stand in your way. It might also be difficult to prove systematic favourable treatment of another group, because AI decision-making can appear to be random. It is not as if Chrome users always receive better treatment. This is because algorithms do not always discriminate like humans: the grouping is not always based on prejudice or the idea of inferior worth, and AI does not look down on people. The way in which algorithms do make decisions is often very complicated, and in the paper I highlight five features of this process that contribute to creating artificially immutable groups that individuals can’t control. AI decision-making is often opaque and vague, and the decision criteria are often unstable. In some cases, we might not even have a human reference point for an algorithmic group, such as when an algorithm identifies certain clicking or scrolling patterns as we browse our favourite websites.
Nonetheless, I stand by the basic moral point I made earlier: people using these groups to make important decisions need to be able to show that no discriminatory harm resulted from the artificially immutable characteristic they are using. The main aim of my paper was to introduce this concept of artificial immutability and to show how, to protect against this danger, we should expand non-discrimination law. I definitely see it as the beginning of a conversation on this topic, and further discussion needs to take place on which particular groups should be protected. From a regulatory perspective it is tricky not to be too prescriptive, but that doesn’t make the issue any less morally important.
David: You say that “AI challenges our assumptions about discrimination” — and this is clearly a hugely difficult and complicated area. How can we reinterpret and rethink our laws to meet this challenge?
Professor Wachter: I think we will need to rethink many of our laws to make them workable for the AI challenges ahead, and I think this bias problem can at least partly be solved using non-discrimination law. We intuitively see cases such as a job application being rejected because of the applicant’s web browser as instances of discrimination, and in the paper I thought a little more about why that is. My view is that, at a basic level, the aim of non-discrimination law is to stop threats to our liberties and our access to basic goods and services, such as education, health care, or employment. It should also stop anything that prevents us from enacting our life plans. Algorithmic groups can be protected by recognising that they are artificially (or de facto) immutable.
Yet we need more transparency to understand what types of systems are in use. We might not be aware that we are being assessed, and discrimination can happen behind our backs. The law can only help those who know that they have been wronged.
David: Given that AI is here to stay (too efficient, too convenient, too tied up with finance and power structures to roll back), what should our priorities be? What are the realistic, achievable tweaks to the current system that would result in the greatest social protection and good?
Professor Wachter: It’s true that AI is very much here to stay, and that this should affect how we think about these issues. As you mention, it’s important to remain realistic about the current capabilities of AI systems and the concrete steps we can take to regulate them, even if this can often be challenging given the level of hype around the industry.
Luckily, there has been a great increase in recent years in the number of groups interested in the ethical issues raised by AI. On the regulation side, the interest shown in the European Union in regulating AI is encouraging. It’s important, however, that if such organisations are genuinely committed to AI ethics, they commit to robust definitions of the key concepts at stake. In addition, a number of civil society groups like AlgorithmWatch and Statewatch are doing important work in making sure marginalised voices are heard in the debate. Although in the paper I focus primarily on a legal approach to implementing ethical principles, partly owing to my background, it is clear that a holistic strategy involving actors on many fronts will be required to make progress towards safe and trustworthy AI.
Sandra Wachter is an Associate Professor and Senior Research Fellow at the OII, focusing on law and ethics of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law. She also leads the Governance of Emerging Technology (GET) Research Programme at the OII. She tweets at: @SandraWachter5
Read the full paper: Wachter, S. (2022) The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law. 97 Tulane Law Review.
Professor Wachter was talking to David Sutcliffe, OII Science Writer.
With thanks to Rory Gillis, Research Project Support Officer, Governance of Emerging Technologies programme, for his contribution in editing this piece.