In the current issue of the technology journal Logic, dedicated to the theme of “Bodies,” tech reporter Ali Breland examines efforts to make AI-driven policing technology more “inclusive.” Almost as soon as such technology – which includes facial recognition software and predictive policing systems – appeared several years ago, critics demonstrated that it was biased against people of color. Governments and tech firms took notice, and efforts are now underway to correct this bias. As Breland reports, a division has grown among those fighting for criminal justice reform: some think that these AI technologies should indeed be improved, while others think they should be abandoned entirely, arguing that they will only give a “scientific” or “objective” veneer to a structurally racist policing system. You can read an excerpt from Breland’s essay below.
Yet as awareness of algorithmic bias has grown, a rift has emerged around the question of what to do about it. On one side are advocates of what might be called the “inclusion” approach. These are people who believe that criminal justice technologies can be made more benevolent by changing how they are built. Training facial recognition models on more diverse data sets, such as the one provided by IBM, can help software more accurately identify black faces. Ensuring that more people of color are involved in the design and development of these technologies may also mitigate bias.
If one camp sees inclusion as the path forward, the other camp prefers abolition: dismantling and outlawing technologies like facial recognition rather than trying to make them “fairer.” The activists who promote this approach see technology as inextricable from who is using it. So long as communities of color face an oppressive system of policing, technology — no matter how inclusively it is built — will be put to oppressive purposes.
Image via BBC.