For Public Books, Dave Haeselin interviews influential science fiction writer Kim Stanley Robinson, whose latest novel, Aurora, is set in the 26th century. It imagines the first human voyage outside our solar system, which takes place inside an AI-controlled ship. Robinson suggests that no matter how sophisticated AI becomes in the future, humans will remain in control. "We will remain," he says, "the responsible parties when it comes to history":
DH: In Aurora, you make the bold decision to tell much of the narrative from the perspective of the ship. To what degree has recent research into Artificial Intelligence inspired your depiction of Ship, or your understanding of the future of AI more broadly?
KSR: Possibly computer programs will be complex enough, and computers fast enough, that they respond to us in ways that pass the Turing test, but that’s a low bar. We are easily fooled, as we know because we do it to each other all the time. These talking computers will still be tools, and they won’t be conscious in the way human brains are, nor in a good position to act in the world—though as I say this I recall my AI in Aurora doing a fair job of controlling some actions in the ship. I suppose in that book I tried to think these things through to a limited extent, when imagining my ship’s AI telling the story, and then intervening to stop a civil war. But I was trying to imagine the consequences of a quantum computer, which we’re not sure we can build yet. That gave me license to imagine it as being far, far quicker and more complex than any computers we are likely to see in our time. That said, progress there is hard to predict. The main thing to say here is, they’re not the important thing; humans will still be deciding and making history, so we need to focus on that aspect of things. There will be no Singularity. We will remain the responsible parties when it comes to history.
DH: The characters you follow in Aurora eventually take up the mantra, “life is a planetary thing.” Ship struggles with the relationship of consciousness to its metal body. How do these insights relate for you? Could AI doubters be thought of as “traitors to humanity’s reach” in the same way opponents of interstellar travel are in Aurora?
KSR: Yes, I think so. I sense that in asserting that humanity can’t inhabit the galaxy, much less the universe, and may only ever be healthy here on Earth, I’ve suggested a limitation that rubs some people the wrong way. They like to think of humans as transcendent, and once a religious afterlife is removed from consideration, the species going cosmic is the secular replacement for that religious yearning. Another place the yearning goes is into the space of the computer, with the idea we might someday download or upload our minds into artificial systems we’ve built, and concoct some kind of immortality. I think that’s a very bad misrepresentation of what we know about brains already, a kind of fantasy, or maybe I should just say bad science fiction. There’s a lot of bad science fiction, and a lot of it is harmless entertainment, but then there is the example of Scientology, and the frozen heads scam, to show what happens when people take bad science fiction too seriously. So I think it’s okay to look at these new stories, already clichés in our collective imaginary, and point out that some are good (health for all, permaculture, utopia) and some are bad (escaping Earth’s problems by way of impossible transcendences of various kinds). Distinctions can be made, and, also very important, new stories can be told. And new stories are fun.
Image of Aurora Australis via Public Books.