
Hito Steyerl and Kate Crawford on stupid AI and the value of comradeship


At the New Inquiry, Kate Crawford chats with Hito Steyerl about automation, “artificial stupidity,” and the value of comradeship. They also touch on how the encroachment of digital tracking into our daily lives reveals not so much the all-knowing, all-seeing power of technology as its alarming arbitrariness and vulnerability to error. Here’s an excerpt from the conversation:

HITO STEYERL. I think that maybe the source of this is a paradigm shift in the methodology. As far as I understand it, statistics has moved from constructing models and trying to test them against empirical data to just using the data and letting the patterns emerge somehow from the data. This is a methodology based on correlation. They keep repeating that correlation replaces causation. But correlation is entirely based on identifying surface patterns, right? The questions–why are they arising? why do they look the way they look?–are secondary now. If something just looks like something else, then it is with a certain probability identified as this “something else,” regardless of whether it really is the “something else” or not. Looking like something has become a sort of identity relation, and this is precisely how racism works. It doesn’t ask about the people in any other way than the way they look. It is a surface identification, and I’m really surprised that no one questions these correlationist models of extracting patterns on that basis. If we harken back to IBM’s Hollerith machines, they were used to facilitate deportations during the Holocaust. This is why I’m always extremely suspicious of any kind of precise ethnic identification.

KATE CRAWFORD. And that was precisely what the Hollerith machines did across multiple ethnic groups.

HITO STEYERL. If one takes a step back, one could argue that we need more diverse training sets to get better results, or account for more actual diversity on the level of AI face recognition, machine learning, data analysis, and so on. But I think that’s even more terrifying, to be honest. I would prefer not to be tracked at all than to be more precisely tracked. There is a danger that if one tries to argue for more precise recognition or for more realistic training sets, the positive identification rate will actually increase, and I don’t really think that’s a good idea.

Let me tell you a funny anecdote that I’ve been thinking about a lot these days: I’m getting spammed by Republicans like crazy on my email account. I’m getting daily, no, hourly, emails from Donald Trump, from Eric Trump, from Mike Pence, from basically everyone, Newt Gingrich–they are all there in my inbox. And I have been wondering, why are they targeting me? It really makes no sense. I’m not even eligible to vote, but I went to Florida recently and I think something picked up my presence in that state and combined it with my first name, coming to the conclusion that I must be a Latino male voter in Florida. So now I’m getting all these emails, and I’m completely stunned. But I prefer to be misrecognized in this way than to be precisely targeted and pinpointed on the map of possible identities to sell advertising to or to arrest.

KATE CRAWFORD. There is something fascinating in these errors, the sort of mistakes that still emerge. These systems are imperfect, not just at the level of what they assume about how humans work and how sociality functions, but also about us as individuals. I love those moments of being shown ads that are just so deeply outside of my demographic. Now Google has been giving people who it thinks are American prompts to go and vote tomorrow in the election. I’m getting that every time I check my mail. Of course, I’m not an American, I’m an Australian–I have no right to vote in the American election–but I’m constantly being prompted to do so every day. Google has so much information about me–there’s absolutely no question that it knows that I am an Australian–but the connection between its enormous seas of data and actually instrumentalizing that knowledge is still very weak.

What you’ve hit on is a fundamental paradox at the heart of the moment that we are now in: if you are currently misrecognized by a system, it can mean that you don’t get access to housing, you don’t get access to credit, you don’t get released from jail. So you want this recognition, but, at the same time, the more accurate the systems’ training data and the deeper their historical knowledge of you, the more profoundly you are captured within these systems. And this paradox–of wanting to be known accurately, but not wanting to be known at all–is driving so much of the debate at the moment. What is the political resolution of this paradox? I don’t think it’s straightforward. Unfortunately, I think the wheels of commerce are pushing towards ‘total knowing,’ which means that at the moment, although we see these glitches in the machine, they are very quickly getting better at doing that kind of detection.

There are so many ways we can be known. They are imperfect and they’re prone to these strange profusions of incorrect meaning and association, but, nonetheless, the circle is getting closer and closer around us, even while the mesh networks of capital are getting better at disguising their paths. I know you’ve also been thinking about crypto-currencies and what remains hidden and what is seen. Or take artificial intelligence. Artificial intelligence is all around us. People are unaware of how often it’s touching their lives. They think it’s this futuristic system that isn’t here yet, but it is absolutely part of the everyday. We are being seen with ever greater resolution, but the systems around us are increasingly disappearing into the background.

Image via the New Inquiry.