At Open Democracy, Dan McQuillan offers a “Manifesto on Algorithmic Humanitarianism.” It begins by observing that international humanitarian relief, like many activities that involve complex coordination, will soon incorporate machine learning technology. But McQuillan argues that even if this development is inevitable, we must identify the inherent biases of machine learning and endeavor to counteract them. Here’s an excerpt:
1 – Of course the humanitarian field is not naive about the perils of datafication
2 – We all know machine learning could propagate discrimination because it learns from social data
3 – Humanitarian institutions will be more careful than most to ensure all possible safeguards against biased training data
4 – but the deeper effect of machine learning is to produce new subjects and to act on them
5 – Machine learning is performative, in the sense that reiterative statements produce the phenomena they regulate
6 – Humanitarian AI will optimise the impact of limited resources applied to nearly limitless need
7 – by constructing populations that fit the needs of humanitarian organisations
8 – This is machine learning as biopower
9 – its predictive power will hold out the promise of saving lives, producing a shift to preemption
10 – but this is effect without cause
11 – The foreclosure of futures on the basis of correlation rather than causation
12 – it constructs risk in the same way that Twitter determines trending topics
13 – the result will be algorithmic states of exception