Monday, July 23, 2012

Help! I'm Spalding


I think I am about to join a cult.

This (http://facingthesingularity.com) is the clearest expression so far of the beliefs of this cult. Nothing in it seems surprising or new to me. All of it seems true, except the bit at the end where Luke imagines what a positive singularity might bring, which seems a bit pedestrian and conservative, a bit like an unimaginative Christian's idea of Heaven. And I suspect that Luke knows this perfectly well and is toning it down a bit so as not to scare people.

I believe, as I have mentioned before, that a singularity will occur in the near future, and that its most likely effect is to kill every living human and leave the universe boring, worthless and repetitive.

This belief appears to me, these days, as well founded as my belief in Fourier analysis.

Which is to say that I don't understand it intuitively in the same way that I understand addition, but that I can examine every bit of it and see no obvious flaws, and that many of the component parts seem intuitively obvious. I wouldn't be surprised if the details weren't quite what I'd imagined, but I'd bet my life at very poor odds on the general framework.

When I first read about the idea of a paperclip maximizer it immediately struck me as obvious and unarguable and a very real threat.

I filed it in the mental box reserved for sexy doom scenarios which may very well be true but which you can do nothing about, and reacted in my usual way (Global Warming: Say Bollocks to It and Enjoy the Sunshine While You Still Can, etc.).

What I didn't initially believe is the idea that there might actually be something we can do about it.

After a couple of years of thinking about it, and reading the writings of Eliezer Yudkowsky, I'm starting to believe that there might indeed be something we can do about it. That we might be able to turn it to our advantage. To make a God who will act as we would wish a God to act.

And I certainly believe that if we don't, we're doomed. One way or another. We are acquiring more and more of the powers of gods, and seven billion half-witted gods aren't going to be sharing a single world in any great comfort as far as I can imagine.

------------

I have worked not terribly hard at all to build myself a pleasant and enjoyable life in a city I love with friends that I love, and I feel that if I ignore the coming Singularity everything will be great and I can carry on like this for the next thirty years and die confident of having lived a life as happy as any human can ever hope to live. Which was always the plan.

And that, sometime, probably after I'm safely dead, everything will suddenly go completely pear-shaped without very much warning, and everything that I cared about will suddenly cease to be.

To be honest, I am not terribly uncomfortable with that.

But if this 'positive singularity' can be pulled off somehow, then I might end up immortal, and happier than any human can possibly imagine.

So this looks a bit like Pascal's Wager. A very small chance of a very large reward.

The small chance has almost no dependence on my actions, and certainly no dependence on whether I 'have faith' or anything silly like that. So I could just carry on as is and reap the vast rewards anyway, if they're there for the reaping.

----------------------------

It occurs to me that I am being underconfident, both in my beliefs and in my abilities. Maybe there is something I can do to change the probability. One obvious thing I could do would be to work a bit harder and donate the extra money to the Singularity Institute.

A minute change to a tiny probability of a vast reward. Paid for by using time that I'd usually spend reading and thinking and watching films in some ghastly office working for venal idiots and giving the money away.
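
Just to see the shape of that bet written down, here's a toy expected-value calculation. Every number in it is an invented placeholder, chosen only to show how a minute shift in a tiny probability can still dominate the sum; none of them is an estimate I'm actually prepared to defend.

```python
# Toy expected-value sketch of the wager. All numbers are hypothetical placeholders.

reward = 1e12   # hypothetical utility of a positive singularity, in arbitrary units
delta_p = 1e-9  # hypothetical 'minute change' in its probability from one donation
cost = 1.0      # hypothetical utility cost of the dull office hours spent earning it

expected_gain = delta_p * reward - cost
print(f"expected gain from donating: {expected_gain:g} units")  # 999: the tiny shift wins
```

Of course the whole game is in whether that probability shift is really positive and non-negligible, which is exactly the thing I can't estimate.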

I never give money to charity. I tried occasionally in my youth, but I found that charities respond by sending you vast amounts of disturbing literature about starving people and horrible diseases, and endless emotionally affecting pictures of suffering animals, which had the opposite effect on me to the one doubtless intended.

The Singularity Institute hasn't done this. It has confined itself to creating large quantities of entertaining philosophical argument and leaving it around where I can find it. For the sheer pleasure of reading Eliezer's philosophy I owe it something.

-----------------------

But Jesus, guys! I know what I'm experiencing here.

This is a religious conversion, pure and simple. This is what the founders of the Jehovah's Witnesses must have been thinking, when they discovered the one true way to read the Bible. This is what the Latter-day Saints and the Calvinists and the fucking Scientologists, for fuck's sake, must feel like as their pathetic brains fall for the lame arguments of con-men who have found a clever way to extract money and power from bunches of bloody fools by explaining the mysteries of the universe to them in a way that they can actually "understand".

I don't fall for this crap! I take nothing and no-one seriously (except perhaps myself), and I can feel my natural contrariness and scepticism calling sadly to me as I contemplate jumping off the cliff.

Once I'm gone, I'm gone. Once I'm publicly committed to this foolishness, I'm going to turn into a scary swivel-eyed fanatic who can't listen to counter-argument and won't accept that he's wrong, out of sheer terror of looking like an idiot and admitting that he's thrown away his life in service to an idea that is just a bit stupid.

And the only thing I can think of is Doctor Who. I can't remember the episode.

The Doctor needs some keys, or something. They are locked in a safe. And there are Daleks coming, or something. And there is this guy, who is a decent and honourable man, who has sworn not to give the keys to anyone under any circumstances.

So he is understandably a little reluctant to give the keys to the Doctor. And the Doctor says 'And when you are standing on your burnt-out world in the shattered remains of your civilization, at least you will know that your personal honour remains intact.'

That seems a powerful argument to me. I think I am brave enough to look like a fool in front of myself if it might save the world.

And I really think it might.

Please help.

I need counterarguments. Read Luke's lovely clear summary of what I have come to believe and tell me whether it's just a load of horseshit for some easy reason I have missed.

Since I need counter-arguments, I am going to try and come up with some myself.

I have gone into King's College and I have hugged my favourite tree, which was my friend when I was an undergraduate, and I have asked it what I should do. And it responded without hesitation: "You know what you should do. Yudkowsky is possibly right and no-one else even seems to care about the problem. Most people making counter-arguments are just obviously wrong."

Well. I am not enormously interested in the opinions of vegetation per se, but that lets me know that my unconscious mind has already gone over to the enemy.

Try again.

The Singularity. The Rapture of the Nerds. Eliezer Yudkowsky as the Messiah. Immortality. The End is Nigh. Give us Money. How much more pattern-matching to a bloody religion does a man need?! Run away. Religions are bad memes that adapt to use minds to infect other minds, and clever, sceptical people fall for them all the bloody time, so what makes you think you are special? You have been predicting for ages that religion will evolve round humanity's new sceptical defences until it is capable of infecting any reasonable man again. You expected it to take longer, but maybe this is what is happening.

Nothing about me is special. That's what I'm scared is happening. Maybe if I fall for it I can actually help to make it even more convincing. I always thought I'd be a good priest if I actually believed in anything.

And yet. And yet. What if the Witnesses had been right? What sort of bloody fool would have ignored the evidence of his own mind and damned himself if they'd been right? I have to decide what the truth is, and I can only use my own mind to do it. And my mind is rubbish. That is the whole problem.

So maybe I should wait. Wait until the religious feelings have died down a bit. See how I feel in a year's time.

If I feel like this, there must be many other people who feel like this. Maybe once a few people give significant amounts of money to the SI there will be an avalanche of money going their way.

They seem like nice guys. If they win they'll save me anyway. I don't have to do anything.

Maybe I should bung them £1000 or so and be public about it, in the hope that it will help the avalanche begin. I've been known to spend more than that on opera tickets, so it's not going to make me look that foolish if it turns out that the SI are just a load of blowhards.

Trust your own mind, John. It doesn't usually fail you in embarrassing ways. And it's not like you really hate embarrassment that much anyway. What the hell are you worrying about? Why is your whole brain full of flashing alarms and ringing bells?

There needs to be a name for this emotion. Conversion-terror or something. Perhaps we could play Liff with it and pick the name of a nearby village. Spalding.

I am in a state of terrible spalding. Please help. If you have counterarguments to singularity-nonsense that I haven't heard I need to hear them before I turn into a full-blown raving religious idiot.

7 comments:

  1. A psychiatrist friend asks: "Will these beliefs cause you to take any actions?" and "Does the idea worry you?"

    Actions: Almost certainly. Handing over money and probably trying to join the pyramid scheme in some way in order to persuade bright young persons to have a go at solving the problems.

    Worries: Oh yes. See above. 'Suckered by millennial cult' doesn't fit with my self-image at all.

  2. It's drivel, isn't it? I followed your link and found:

    "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make."

    Pass over, for the moment, that his "definition" is so vague as to be useless, and try to work with it: we're to suppose that people are capable of creating this ultraintelligent machine. Constructing this machine is of unknown difficulty, but it is (by definition) within human abilities; call it a level 1 activity. The ultraintelligent machine is capable of acting at level 2, let us say. How hard is creating a level 3 machine? We don't know. It might easily be a level 4 activity. Or it might be a level 3 activity. Nothing allows us to prove that it's a level 2 activity.

    All this talk of intelligence explosion is absolutely nothing but handwaving. There is no substance to it. It might happen; it might not. But none of the words are to the point.

  3. William, thanks. That's exactly the sort of comment I was hoping people would make. I think I'm too far gone to spot the drivel any more.

    To this particular point, my response is:

    Evolution managed to design me. I reckon if evolution can do it, we as a species can probably do it, but we can make a design which is more flexible. (Because we don't have to do things like consider the routing of the birth canal or build it out of meat!)

    Almost any intellectual task I'm trying to perform would be improved by greater speed or larger working memory, both of which should be trivial if we can make a human-equivalent intelligence.

    That gets us to an ultraintelligent machine:

    All of a sudden you've got something like von Neumann designing computers, except that he can do a lifetime's thinking in a couple of minutes.

    At that point, who knows where it stops? Maybe there's a natural limit to how good an intelligence can be, but I can't think of any reason why we'd expect it to be something like '1000 times as good as us'.

    At any rate, how much damage could someone who was just me, but thinking 1000 times as fast as me, do?

    I can suddenly take a new university degree every day. I never forget anything I've learned. In a month or so I can do anything that anyone in the world can do.

    That thought is terrifying, isn't it? Even if it was actually just someone like me with this power.

    Come back at me. Shoot me down. You will earn my eternal gratitude if you do.

  4. OK, so, two points:

    1. (re-iterating what I said before) This doesn't really address your last degree-a-day point, but it does address the "singularity" issue: what I was trying to say was, just because we're capable (suppose we are) of creating a computer 1000 times as smart as us (and suppose that is as good as we can go), it doesn't follow that the 1000-times-as-smart computer is capable of designing anything smarter than itself. Or perhaps there is some kind of intrinsic limit to intelligence (analogous to the speed of light, perhaps). Or perhaps the maths goes the other way: instead of what you're hoping (we design something 2x us; it designs something 2x itself; which in turn designs 2x that, and so on to infinity), perhaps we're only capable of designing something 1.1x as smart as us, which can only design something 1.01x itself, or some other sequence that converges not far above us (see the sketch at the end of this comment).

    Personally, I think we're still so far away from getting close to this problem that it's hard to talk about it sensibly.

    2. There is a nice discussion of similar issues in Gödel, Escher, Bach, which you must have read. It speculates that an intelligent machine might share our problems, like a fallible memory. And maybe it wouldn't be any faster than us at doing mental maths.
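
    To make the two possibilities in point 1 concrete, here is a tiny sketch using the 2x and 1.1x/1.01x figures purely as illustrations (nothing here is an estimate of anything real): if each generation's improvement factor stays constant the sequence explodes, and if the factors themselves shrink it converges not far above us.

    ```python
    # Illustrative only: the sequence of improvement factors decides everything.

    def final_level(first_factor, shrink, generations=30):
        """Start at human level 1.0 and apply successive improvement factors."""
        level, factor = 1.0, first_factor
        for _ in range(generations):
            level *= factor
            factor = 1 + (factor - 1) * shrink  # shrink=1.0 keeps the factor constant
        return level

    print(final_level(2.0, shrink=1.0))  # 2 * 2 * 2 * ... ~ 1.1e9: an 'explosion'
    print(final_level(1.1, shrink=0.1))  # 1.1 * 1.01 * 1.001 * ... ~ 1.11: a fizzle
    ```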

  5. William, again, thanks! I really need help here and you seem to be the best source of it at the moment.

    1) I agree. We don't know the shape of the 'mind-design space', and we don't know how it's connected in the sense of A leads to B.

    Particularly we don't know if it tops out somewhere. It may well do, and I have no idea and neither does anyone else. (There are physical limitations to information-processing density, but a mind that was approaching them really would be a God!)

    But I find it very easy to imagine that if it tops out at all, it tops out way above our level, and that way above our level can be easily reached.

    Also the word 'hoping' is not accurate! The word is fearing.

    I'm not as confident as you that it's a long way off. I hope it is. If it's a long way off then we might be able to come up with some scheme for making sure it likes us. If it happens soon and whoever makes it happen hasn't taken precautions then we're just all dead.

    2) I'm ashamed to say that I've only read bits of GEB. The whimsical style drives me up the wall. Given that a lot of the cleverest people I know say that it's one of the best books ever written, I'll go and give it another go. Is there a particular place I should start?

  6. 2. I'm relying on my 25-year-old memories, so I can't quote chapter and verse. But my recollection is that it's a fairly short discussion, and what he is suggesting is: when we think of AI, well, we think of a computer, so we automatically assume it can trivially do all the things that non-AI computers can do: have perfect memory, do very fast arithmetic, etc. But actually those really are all features of non-AI, at least at present. For all we know a real AI might have no better access to, say, memory, than we do. I'm fairly sure that is about the level of the discussion; it's not a proof or anything close, just an idea.

    1. OK, so we agree we don't know, and it's more a matter of how we intuit about it. Hope / fear: well, the other half of that is, suppose we do figure out how to do it, but we make sure it's in some computer (carefully not connected to the internet) where the only interface is a screen. It can be super-intelligent, and as malevolent (but nice-seeming) as it likes, but it's stuck in there, unless we're dumb enough to let it out. Which, I agree, we might be.

    So perhaps you could imagine us at an interesting point, which I think various sci-fi novels have explored: we create, or meet, or perhaps decode an interstellar datastream that leads us to, some super-AI. At this point we're safe: the AI tells us how to make nuclear fusion work safely, and that's great and safe, and it tells us how to make rockets-to-the-stars, and that's all safe. But would we be able to not get greedy, and build for it the next wonderful device, which does stuff we don't fully understand? Likely not.

  7. Right, now you're thinking about AI safety.

    Yudkowsky appears to have been the first person to realize that the default behaviour of a random AI is likely to be to destroy humanity.

    And he's also famous for his AI-in-a-box challenge, which addresses your point above.

    All this sort of thing looks pretty straightforward and uncontroversial to me.

    If AI's dangerous, then we have the following choices:

    1/ Stop Science (Can't be done, and I'd rather die)
    2/ Stop AI research (Can't be done, and we probably die of nanotech anyway)
    3/ Die in an AI-related accident
    4/ Create a Friendly AI -> ??? -> Profit!

    I'm a fairly firm believer in 3 as the best choice, and it's clearly not going to happen until after I'm safely dead so whatever...

    4 would obviously be nice but I suspect it's beyond us.

    Recently it's started to occur to me that 4 might actually be feasible.

    Unfortunately the only people who think it's feasible are the SIAI, and they're pattern-matching to a Christian Cult so badly that it's freaking me out.

    And the other thing is that while most randomly chosen AIs will wipe us out without any particular malice, a failed attempt to create a friendly AI is much more likely to create all sorts of fate-worse-than-death scenarios.

    The problem looks important. Even if it turns out not to be, it's probably worth thinking about before we actually have to face it. It sure looks like a good cause, which I'd happily give my life to. But it also looks like the latest incarnation of Christianity, and screw that.

    And this is why I would like people who have not got drawn into all this, whose brains are perhaps still capable of scepticism when mine is not, to tell me what it is I have missed.

