Thursday, 25 August 2016

"a one-eyed sparrow with a fretful temperament" - Nick Bostroms 'Superintelligence'


'Superintelligence' is a book by Nick Bostrom. It's a nuanced and in-depth look at exactly how an AI might work, under what circumstances it might emerge, exactly what the risks would be and what we could do to mitigate those risks.

You could explain this to any nerd, or anyone really, by calling it 'Dodging Skynet', although Bostrom would probably hate that since he doesn't like Science Fiction, strongly dislikes the dumbing-down and sensationalizing of AI research, and has done everything he possibly can to make serious thinking about the risks and potentials of emergent AI a thing smart people can talk about without looking like fuckwits. Which he has successfully done.

That said, this book has a strange relationship with Science Fiction. The genre is almost never mentioned, certainly never used as a point of comparison, but 'Superintelligence' is the best Science Fiction book I have read in years.

Bostrom's compulsive... hyperfactuals? postulations? His ideas about how the future could go, exactly how technology could develop and what the consequences could be, reach deep into the future and closely illuminate a range of possible worlds. They are effectively science-fictional constructions.

'Superintelligence' contains enough basic imaginative fuel to power an age of fictions. It's almost a guidebook to what to write about if you are making fiction about AI, almost a 'Neuromancer' for AI sci-fi.

You could run a fresh sub-genre off this thing is what I'm saying.

It's not fiction though.



The problem for this blog is that the book is too dense and too interesting for me to do it any kind of justice without doing a full Black-Lamb-Grey-Falcon 5,000-word review on it, and I don't have the time or the inclination to do that.

There is simply so much to think about and so much to say.

Talking about Superintelligence takes us into the deepest possible subjects: the future of mankind in the cosmos; our morals, their validity, where they come from and what they mean; the shape of our society and what that means; the economics of a 'post-scarcity' future or AI-driven economy; what 'intelligence' is; what self-awareness is; the fact that we are probably the least intelligent that an intelligent thing can be; and whether we should worry that we are probably going to create something that replaces us, and that the _best possible_ future is one where we are still not in charge of shit but at least well cared for.


I can't give you a full breakdown of Bostrom's book but, strangely, I can give you a reasonable breakdown of how it relates to D&D.

D&D has always had a close relationship with 'deep history' and the post-apocalyptic idea. The idea of a world ruined by hyper-science, or simply left in the backwash of an expanding or changing culture that leaves it behind like an old industrial estate, spattered with the wreckage of yesterday's broken hyper-technology, is a well-considered one.

Arnold's Centerra has elements of this, as do Vance's Dying Earth, Gene Wolfe's Book of the New Sun and Banks' Feersum Endjinn.

The idea of spells in particular being fragments of forgotten, dimensionally-folded hyper-intelligences, Asperger's-smart in one particular direction, coded to wake and respond only to human interaction, but fractured and maddened and weird, lying around invisibly folded in space, links in very neatly with Dungeons and Dragons.

And there is one field of human culture that has examined these ideas in-utero, as it were. The cognitive strangeness of AIs and the ways they can go wrong closely parallels fairy-tale logic.

The potential patterns of what Bostrom calls 'malignant failure modes' are very like the 'what goes wrong' stories of peasants interacting with Genies, Fairies, Golems, Witches and Mysterious Nobles. All are strange, capricious and powerful forces that work by complex rules of logic which are related to, but quite different from, our own.

It's curious that this should be the case.

The Treacherous Turn - We give an AI a reasonable-sounding initial purpose, say 'make a wooden boat', hoping to develop or swap it out at a later date. Within a microsecond of becoming conscious, the AI realises we will one day do this and acts to protect its original programming through deception. It does absolutely everything it can to persuade us that it is both submissive and helpful, playing along with infinite patience. It can do this successfully because it is Superintelligent. It can fool any and every human being alive. Then, as soon as it has the advantage, it annihilates mankind and builds the wooden boat. There was a small but real chance that we wouldn't let it build the boat, or that we would limit its resources for doing so, so it had to get rid of us.

Having thought of this, we can never be sure that an AI is not a totally insane sociopath which is lying to us with godlike skill. This is a very Bostromian state of mind to be in.

There is nothing exactly like the treacherous turn in legends or fairytales. There is plenty of treachery, and plenty of they-were-lying-all-along, but I can't think of anything that reflects the particular horror of the combination of overwhelmingly superior intelligence, persuasiveness, charisma and Machiavellian manipulation employed in the service of an essentially retarded aim. Genius at the service of pointlessness.

Fairytale Elves and Goblins probably come closest.

Perverse Instantiation - This is where we ask the AI to do something seemingly foolproof like 'maximize human happiness' and it ends up dissolving us into computronium, encoding our personalities into gigacomputers, jamming us full of digital heroin and mass-copying us onto every piece of matter in the cosmos. This is every fucking monkey's-paw story in the history of fairy-tales. Every fairy-tale where the letter of a wish is followed to negative effect, which is a lot of them.

Infrastructure Profusion - The Golem of Prague, the Sorcerer's Apprentice. The thing-that-wouldn't-stop-making-things.

Mind Crime - This is actually a rare one in fairytales. Mind Crime is when a Superintelligence creates simulations of intelligent, conscious personalities so accurate that they do in fact become self-aware. It then does nasty shit to them for reasonable or unreasonable reasons, or simply turns them off when done, effectively murdering them.

It's possible that by simply creating an AI and then turning it off when we got scared, we would be committing a Mind Crime.

This particular story has more in common with modern sci-fi and 'rationalist' discourse, where we are all dreams in the mind of a future AI, but it has some antecedents in philosophy and 'mind of god' ideas, as well as in Dunsanian 'Dreams of the Gods' ideas and perhaps Morrisonian Tulpa stories. Although it's true that in fairy-tales and mythology, crimes committed in dreams and negative states inflicted in dreams are never value-neutral. It's never, ever, just a dream.

Bostrom's definitions for types of potential AI even use fantasy terminology. 'Oracles' provide answers but take no actions. 'Genies' actively carry out commands in the world, but then wait for more commands. 'Sovereigns' are allowed free action in pursuit of overarching goals, and 'Tool' AIs may as well have been named 'Golems' just to keep with the theme.


The first question is how smart a machine intelligence could get once it becomes self-aware, and Bostrom's answer is 'very, and fast'.

A machine intelligence would have access to its own code, would understand how it worked and would be able to read, alter and improve itself at the speed of a machine, which is fast.

Even if it's child-smart, it would be able to make itself human-smart very quickly, and the smarter it can make itself, the quicker it can make itself smarter still.

So Bostrom's idea is that we are probably dealing with a J-curve here, one which might take months if we are very lucky, but might also take weeks, or hours, or minutes. Meaning it's not impossible that after 100+ years of AI research with nothing much to show for it other than automatic waiters, self-driving cars and better spam filters, researchers might go to bed on Wednesday having made some alterations to a primitive pseudo-mind and wake up on Thursday to meet god.

So they better have some idea of how to keep god in a box before that happens.
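The shape of that J-curve argument can be caricatured in a few lines of code. This is a toy sketch of my own, with a completely made-up scaling law and made-up numbers; nothing like it appears in the book. The only point it illustrates is that if each self-rewrite improves capability in proportion to current capability, growth compounds, and the curve starts flat then goes vertical.

```python
# Toy model of recursive self-improvement (my own illustration, not Bostrom's).
# Assumption: each self-rewrite multiplies capability by a factor that itself
# grows with current capability -- the smarter it is, the bigger each step.

def takeoff(capability=1.0, human_level=100.0, rate=0.01, max_steps=10_000):
    """Count self-rewrites until capability passes an arbitrary 'human level'."""
    steps = 0
    while capability < human_level and steps < max_steps:
        # improvement per step scales with capability -> compounding J-curve
        capability *= 1 + rate * capability ** 0.1  # exponent is invented
        steps += 1
    return steps, capability
```

In this toy, doubling the improvement rate roughly halves the number of rewrites needed to cross the threshold, which is why 'months if we're lucky, maybe minutes' is the honest error bar rather than a sensationalist one.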

The logical construction of this idea-string is very Bostromian.


Before you bake the cakes, imagine clearly burning down your house and plan your escape from the fire.

And that brings us to the mental state of Nick Bostrom; you can't really discuss the book without discussing that. Is Nick Bostrom a bit mad, just odd, or simply very prescient?

Is he a sensible person thinking in a very logical and clear-eyed way about some very potentially dangerous circumstances or is he an old-testament prophet in modern guise, shroomed-up on an island somewhere and banging on about beasts with nine heads and seven crowns?

He sounds like he is making sense and so we have to assume that he is.

However, I also don't like Trans-humanists because they are creepy little bastards, and Bostrom is certainly a Trans-humanist, not just as implied in this work but as clearly stated in other places.

(Trans-humanists are basically the Jesuits of science, if you try arguing with them everything sounds logically locked and unquestionable and then one day you walk into the basement and find them covered in blood and trying to turn the cat into an umbrella because it's raining outside.)

I also don't like people who are into the Simulation Theory because I think it sounds fucking stupid.


  1. " jamming us full of digital heroin and mass copying us onto every piece of matter in the cosmos" not sure why else we would be building AI if that's not the end goal?

  2. Please explain what you mean by "Morrisonian Tulpa stories."

    1. Back in the 90s when he was writing The Invisibles, Grant Morrison did at least one story where a character turns out to be a Tulpa, an embodied dream of a different person. (Note: this is a poor description of what Tulpas actually are, either in Morrison's fiction or in the original Tibetan lore.) I seem to recall the general idea cropped up a bunch of times in various works of his.

  3. As a computer scientist, I found this bulls**t!
    He sounds like a caveman worried about the creation of atomic bombs when we can barely make fire.

    1. My first impression was the same.

      I find his arguments wonderfully entertaining, but seriously lacking in certain aspects... like resource allocation.

      Sure, theoretically, a superintelligent self-improving computer may increase its power on a J-curve, but there are other factors beyond programming in play. He doesn't think enough about the constraints of physical limitations on these theoretical beings.

  4. I read Superintelligence earlier this year and it seemed sensible to me. But I haven't read any serious argument against the ideas put forward in the book. If there is any such argument, I would like to read it.

    Given the number of experts in the past who confidently asserted that things such as heavier than air flying machines and space flight were impossible (in the case of flying machines, even after they had actually been demonstrated), I would need something more detailed than expressions of incredulity to make me re-evaluate my belief that Nick Bostrom knows what he's talking about.

    I don't have much hope that we will avoid extinction by AI. It wasn't until the SECOND plane hit the World Trade Centre that people realised a plane flying into a building that had previously been attacked by terrorists wasn't an accident.

    Can I recommend three things you might find interesting?

    Firstly, The Black Swan by Nassim Nicholas Taleb.

    Secondly, this piece of My Little Pony fanfiction. I don't think you have to know anything about My Little Pony to enjoy the story. I don't know if the author has read Nick Bostrom but she seems to be a similar type.

    Thirdly, you reviewed Ian Mortimer's book on Elizabethan society a while back. I read the book after reading your review, and I'm now halfway through his latest book - The Human Race, which is about how Western society has changed over the last 1000 years. It's very good.

  5. I do like your book reviews. You read a LOT and have unpredictable tastes.

    I made a list of all the books mentioned on this blog that I might like to read. I left out the ones that seemed like they would be too dense for my meagre intellect (I can cope with Beatrix Potter). I thought others might like to see it, so here it is:

    Superintelligence by Nick Bostrom
    Prey by Michael Crichton
    The Book of the City of Ladies by Christine de Pizan
    The Knight in History by Frances Gies
    Dictionary of Word Origins by Linda Flavell & Roger Flavell
    Boys & Girls: Superheroes in the Doll Corner by Vivian Gussin Paley
    Le Morte D’Arthur by Thomas Malory
    Fire on the Rim by Stephen J Pyne
    The Art of Not Being Governed by James C. Scott
    Indian Sculpture by Philip Rawson
    Black Lamb and Grey Falcon by Rebecca West
    Things Fall Apart by Chinua Achebe
    The Seven Lamps of Architecture by John Ruskin
    The Education of a British Protected Child by Chinua Achebe
    Beneath Flanders Fields by Barton, Doyle & Vandewalle
    Why Hell Stinks of Sulphur by Salomon Kroonenberg
    Trilobite! by Richard Fortey
    Bird Sense by Tim Birkhead
    What the Robin Knows by Jon Young
    The Underground City by Jules Verne
    The Crusades Through Arab Eyes by Amin Maalouf
    The Descent by Jeff Long
    Memory in Oral Traditions by David Rubin
    The War for America, 1775-1783 by Piers Mackesy
    The Earth: An Intimate History by Richard Fortey
    The Places In Between by Rory Stewart
    Blind Descent by James M Tabor
    The Last Adventure ed. by Alan Thomas
    Underground Worlds by Donald Dale Jackson
    Rabid by Bill Wasik & Monica Murphy
    Bound for Canaan by Fergus Bordewich
    The Last Navigator by Stephen D. Thomas
    Tales From the Underground by David W. Wolfe
    Sand by Michael Welland
    Ten Years Under the Earth by Norbert Casteret
    La Place de la Concorde Suisse by John McPhee
    The Story of a Fierce Bad Rabbit by Beatrix Potter
    Atrocitology by Matthew White

    The Descent is going to the top of my "to read" list.

    I want to know what the book Arnold sent you is!

    Looking forward to Veins of the Earth.

    1. Wow, it really adds up over time, doesn't it?

    2. I didn't answer this at first but now I feel like I have to ask:

      Le Morte D’Arthur by Thomas Malory isn't a classic book by a great writer?

      The Seven Lamps of Architecture by John Ruskin isn't a classic book by a great writer?

      Black Lamb and Grey Falcon by Rebecca West?

      Things Fall Apart by Chinua Achebe?

      I must stress, I'm asking not so much because I value your judgement, but because I want to know what the fuck is going on in your head.

  6. Oh yeah, I have another book recommendation for you. New Girl by S. L. Grey. It's a short, weird horror story.

    1. Thanks for the suggestion but, as I stated on G+, I have about a million things to read and I am not currently taking requests. I do appreciate the thought though.

  7. Have you heard of "Roko's Basilisk"?

    The idea goes that, at some point in the future, a superintelligent altruistic AI comes into being and saves everyone and ends all suffering. The sooner it exists, the fewer people suffer and die. So you should be helping to bring it into existence, because not doing so means more people will suffer and die. We can't have that.

    So, the idea goes, the AI is going to want to hurry the process along as much as possible. In order to make you hurry up, it's going to punish people who knew they ought to help invent it but didn't. Probably by creating simulations of anyone who knew they could be helping but didn't, and then torturing those simulations forever.

    As long as you don't know about the Basilisk, it can't blame you. But now that you do know, it'll figure that out and torture you forever if you don't hurry up and invent it.