At openDemocracy, computer scientist and activist Dan McQuillan examines the social biases of emerging AI and machine learning technology. As McQuillan writes, these technologies embed existing political inequalities in supposedly “neutral” and “objective” algorithms. To challenge these technologies, he proposes an “anti-fascist AI” that “puts the perspective of marginalised groups at the core of AI practice and transforms machine learning into a form of critical pedagogy.” Here’s an excerpt:
My proposal here is that we need to develop an antifascist AI.
It needs to be more than debiasing datasets because that leaves the core of AI untouched. It needs to be more than inclusive participation in the engineering elite because that, while important, won’t in itself transform AI. It needs to be more than an ethical AI, because most ethical AI operates as PR to calm public fears while industry gets on with it. It needs to be more than ideas of fairness expressed as law, because that imagines society is already an even playing field and obfuscates the structural asymmetries generating the perfectly legal injustices we see deepening every day.
I think a good start is to take some guidance from the feminist and decolonial technology studies that have cast doubt on our cast-iron ideas about objectivity and neutrality. Standpoint theory suggests that positions of social and political disadvantage can become sites of analytical advantage, and that only partial and situated perspectives can be the source of a strongly objective vision. Establishing a relationship between the inquirer and their subjects of inquiry would help overcome the onlooker consciousness of AI.