Friday, July 27, 2012

Counterfactual Blackmail

So I just want to mention a couple more horrors that are new to me and that we have to deal with if we are going to think morally about Star Trek.

Counterfactual Blackmail (due, I think, to Yudkowsky/Less Wrong and allies)

Imagine you are Captain Kirk and you are heading home after a hard day's teleportation and hoping that not too many people will be there.

And you meet a Romulan!

And the Romulan says: Not only did we mess with the transporter, we made a tape of exactly how to build a Kirk.

And as it happens we have holodecks.

So we have made a lot of new copies of you, and they are all in holodecks, and all having the same experience as the real Kirk, who has just met a Romulan while walking home.

And in every holodeck, the Romulan is demanding 50p, and if they do not hand it over, the holodeck is going to turn into a horrible torture chamber and we are going to extract the maximum amount of pain from every copy of Kirk that we can.

So give me 50p, or else...

You now have to ask yourself, well, this all sounds plausible. Do you feel lucky? It is only 50p, after all.

Romulans are known for keeping their promises.

All Kirks will make identical decisions, obviously. Why would one of them choose differently?

If you hand over the 50p, then you are either going to feel a bit of a fool, or you are going to cease to exist as the Romulans turn off the holodeck and disassemble you. Which, as we've previously discussed, you can't work out whether you object to or not.

If you bravely reply "Fie upon thee, counterfactual scrounging-man", then maybe you can afford a Mars bar on the way home, but most likely you are going to get tortured to death.

They don't lie much, the Romulans.

Zombie Interlude

Now to my friends I am sometimes known as Space Cadet. And although I have never got to the bottom of the mystery of why, exactly, they chose this nickname, I feel that it might be something to do with the long term effects of having all these transporter-thoughts crashing around in my head when I was about twelve years old. I remember the exact Art lesson in which I decided that I didn't exist.

Luckily my grandfather (to whom I owe much in many ways) had read enough philosophy to tell me about "cogito ergo sum". Which is a branch to hang on to when you are swirling down the creek and the problem is not so much no paddle as no water.

So thanks René Descartes and Donald Phelan because I think that without that it might have gone badly for my infant psyche. But even now I think I could have a decent go at convincing a jury in a capital case that I have no idea how to tell right from wrong.

My eventual conclusion was "There is a me. But it is the thing that hears my thoughts, not the thing that thinks them. The universe proceeds lawfully. And I get to watch, but I have no influence. Somehow what I am tags along with physical bodies, and if I got duplicated, then presumably it would tag along with one or the other copy, or who knows? Maybe it hops from place to place and time to time and person to person. How would I know? I am the thing that exists now. I have no evidence that I existed earlier. I have no evidence that I will exist later. I have no evidence that such an inner listener exists anywhere else. Maybe I am all alone and only exist for this millisecond and that's it. But whatever. None of this can affect how I act or need affect how I feel. I might as well just imagine that everyone else feels the same and it is all just a big mystery."

As a philosophy that worked out just fine for many years. It sails through the free will vs determinism debate that seemed to so exercise the philosophers of my youth.

It actually had excellent positive consequences. As a twenty year old, I didn't identify with my forty year old self at all. I felt free to raise hell and to do many unwise things. That could have all gone badly wrong, but it didn't. And as a forty year old I'm quite grateful for that, since I've got lots of interesting memories to play with in my private Cartesian Theatre.

Unfortunately it doesn't work. Because if I'm the inner listener who hears my thoughts, but cannot influence them, who the hell is moving my fingers in such a way as to write the theory down?

It took me a while to notice this problem. But when I did, I just gave up and stopped thinking about it entirely. Which is probably the sensible thing to do.

But others are clearly more mad than me, and carried on thinking when I gave up, oh yes...

Recently I have read Daniel Dennett's "Consciousness Explained", and although I am reasonably sure that "Consciousness Denied" would be a better title, it is a riveting book full of cleverness and by the end of it I am actually pretty sure that I myself do not have the Cartesian Theatre that I am perceiving directly as I write.

But I am still sure that I exist. And there is no room for me in Dennett's philosophy. Although it is very possible that I have failed to understand him.

And I only have Eliezer Yudkowsky's word for this, because I have not read David Chalmers' thoughts on this in the original (it's on the stack, ok?), but apparently David Chalmers is also even more mad than I am, because he kept thinking about the inner listener even though he noticed that it didn't just listen, and apparently the theory isn't quite killed by this observation.

But Chalmers has elaborated in great detail just how completely insane the universe would have to be if you had both an inner listener and a zombie brain that thought exactly the same thoughts whether they were true or not.

And that is very insane indeed. And quite complicated, so I'm not going into it.

On the one side, you got Eliezer Yudkowsky and Daniel Dennett, who are proper 'it is all billiard balls' materialists, and who are obviously correct. Except that they are denying the central fact that is the only thing that I directly know about the universe: "I think therefore I am." And I am pretty close to believing that they must be zombies. Except that if they are, how would they be able to let their brains know?

And on the other side, you got David Chalmers, who sounds like his personal experience of the world is like mine. Except that if it is, how does he let his brain know? He says that he doesn't. His brain writes it all down anyway without knowing whether it's true or not. So his theory is exactly the same as Dennett/Yudkowsky, except that it has this extra bit where I exist. Which sounds like an important and necessary bit, even though it leads directly to the complete insanity of the theory.

Earlier, I called Dennett's position 'wrong' and Chalmers' 'mad'. But to be honest I take them both very seriously indeed and I have no idea what to think except that I am very very very confused.

I group the 'wrong' and the 'mad' into the camp 'right-thinking', because they make identical correct-sounding predictions about what we will actually see and feel when we do stuff.

Straightforward Cartesian Dualism. That sounds like it might be right. Except that if we have a transporter then we have an experimental test for it, and I bet it isn't. And if the mind-stuff acts on the world then it doesn't help at all, because it's still lawful billiard balls. But maybe one of the things about primitive mind-stuff is that there's something that it feels like to be it. Pah! No way. That's bonkers too and it's not going to be that easy or we would have sorted it already.

Can Murder be Immoral in Star Trek IV?

So, in the Star Trek universe, if we are right-thinking people, it looks like we can't really decide whether murder is wrong or not.

If Kirk wants to save the world while not breaking a lunch appointment, he can just send a copy of himself off to die heroically, and go for lunch. The copy can feel free to die heroically in a good cause, and if he doesn't, he can teleport home safe in the knowledge that he will never get there.

Oh come off it! At some point we have to draw the sodding line and say that randomly creating copies of people and then painlessly killing them is wrong, don't we?

But notice that there is no difference at all in the predictions of our various theories of consciousness and personal identity.

The only one that made a difference to what we anticipate seeing is Cartesian Dualism / Souls. The first time you operate a transporter you get a guy going "Grr.. Aaargh.. Brainszzz.. ", remember?

So we ruled that one in or out early doors.

All the other theories are about what it feels like to be transported. But we can't find out without actually being transported.

All the other theories make exactly the same predictions about what other people will say about the experience. They will say things like "Relax, don't worry. You step into the transporter, and then you step out on Mars. Doesn't hurt a bit. Feels like your soul comes along with you."

They don't even make any predictions about what you remember. You remember being transported hundreds of times. It's always fine. Works just like it says on the adverts.

And if the right-thinking people are right, that's just what's happening.

And if they're not, when you step into the transporter you die.

Won't actually make any difference, mind. You'll get out the other end and say: "No, feels fine. Looks like materialism is the way to think about all this."

And you'll still be thinking along these lines when you get home and find the real you fucking your wife.

Can Murder be Immoral in Star Trek III?

Well, you don't kill Captain Kirk that easy, Romulan filth.

Of course he gets out of the transporter booth. He just nuts it until the plasteel breaks.

He determines that his copy is on the mothership, saving-the-world-wise, and knocks off for an early lunch.

His wife is pleased to see him back.

When New Kirk has saved the world, he takes the transporter home. Anticipating a certain frostiness at home, Spock and Scottie frig it so that only the disassembler works, and no new Kirk appears back on earth to spoil the happy reunion.

Do you notice what has just happened here? Captain Kirk has just saved the world by having lunch, and Spock and Scottie have just murdered him for his own good.

But the final state of the world is not terribly different from what would have happened had the transporter worked fine. There is a lunch missing. Captain Kirk doesn't remember how he saved the world. Those don't look like major differences to me.

So that is pretty much that for utilitarianism and indeed all forms of consequentialist moralities, I would say.

And maybe next time you approach a transporter booth, you will be a little bit worried?

Especially if you are Kirk. Romulan spies are everywhere.

Thursday, July 26, 2012

Shangri La Experiment: More Priors

Forgot that I should write down the predictions of each of my three theories in advance.
I did make some in my head, but I never got round to writing them down.

As I remember my prior predictions were:

There'll be two measurable variables:

Favourite Belt Notch (1,2,3,4,5,6) and Perceptible Loss of Appetite (yes/no)

I'll build prior probabilities by distributing 100 points around the likely predictions, and then putting 5 points in neighbouring cells, and 1s everywhere else.

Willpower (no loss of appetite, some increased girth)

     1   2  3 4 5 6
yes  5   5  5 1 1 1
no   75 25  5 1 1 1

total 126

Shangri-La (huge loss of appetite, loss of girth)

    1 2  3  4  5 6
yes 1 5 50 25 25 5
no  1 5  5  5  5 5

total 137

Helplessness (some loss of appetite, may or may not be noticeable, no change of girth)

     1   2  3 4 5 6
yes  5  50  5 1 1 1
no   5  50  5 1 1 1

total 126

So for instance, if at the end of the month I am on belt notch 2, with noticeable loss of appetite, which was my actual prediction, then I can Bayesify matters thus:

prior H60:W39:S1

likelihood ratios H 50/126: W 5/126: S 5/137

new beliefs H 60*50/126: W 39*5/126: S 1*5/137

H 60*50*137: W 39*5*137: S 5*126

H 411000: W 26715: S 630


H 652: W 42: S 1

Which looks right. If the experiment agrees with my pet theory, then I should believe in it lots.
If it disagrees with a mad theory that I don't believe in to start with, I should pretty much give up on that theory, and Willpower is getting dissed because it doesn't say I should notice a loss of appetite if I eat more.

Of course, loss of appetite is highly subjective, and I'm going to be tempted to try to make my experiment come out the way I want, which is essentially 'go, go Shangri-La, fuck off Willpower', so I should also consider what goes on if I report 'no loss of appetite, notch 2'

prior H60:W39:S1

likelihood ratios H 50/126:  W 25/126: S 5/137

new beliefs H 60*50*137: W 39*25*137: S 5*126

H 411000 : W 133575 : S 630

H 652: W 212: S 1

Which also looks right. W is getting penalised for favouring weight gain that didn't happen more than H is getting penalised for hedging its bets on appetite loss. S is getting kicked out into the dark for getting it wrong on both counts.

etc. etc.
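Since the arithmetic is fiddly and I do not entirely trust myself to do it in my head, here is a little Python sketch that mechanises the update. It uses only the tables and priors above; the function name `posterior_odds` is mine, nothing standard:

```python
priors = {'H': 60, 'W': 39, 'S': 1}

# rows: 'yes'/'no' perceptible loss of appetite; columns: belt notches 1..6
tables = {
    'W': {'yes': [5, 5, 5, 1, 1, 1], 'no': [75, 25, 5, 1, 1, 1]},
    'S': {'yes': [1, 5, 50, 25, 25, 5], 'no': [1, 5, 5, 5, 5, 5]},
    'H': {'yes': [5, 50, 5, 1, 1, 1], 'no': [5, 50, 5, 1, 1, 1]},
}

def posterior_odds(notch, appetite_loss):
    """Multiply each theory's prior by its (normalized) likelihood for
    the observation, then rescale so the smallest comes out as 1."""
    odds = {}
    for theory, table in tables.items():
        total = sum(table['yes']) + sum(table['no'])
        likelihood = table[appetite_loss][notch - 1] / total
        odds[theory] = priors[theory] * likelihood
    smallest = min(odds.values())
    return {t: round(v / smallest) for t, v in odds.items()}

print(posterior_odds(2, 'yes'))  # notch 2, with noticeable appetite loss
print(posterior_odds(2, 'no'))   # notch 2, without
```

Which does the Bayesifying for any of the twelve possible outcomes, not just the two worked through above.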

Can Murder be Immoral in Star Trek II?

So last post I came up with some semi-plausible bullshit to forestall objections of the 'but that couldn't possibly be true' type that for instance my father and all my friends always say straight out the box whenever I am getting all spectral.

So now let us forget about all that and just talk about the transporter from Star Trek, like men.

This wor.. (and if it doesn't in fact work this way then I could not care less and I decree the existence of a series called Space Trek that has played in selected Cartesian Theatres where it does work this way)

This works thus:

A complete scan is made of a man, and in the process his body is disassembled, and the information is transmitted, and then at the other end, the information is used to build the man up again. At which point he starts saying 'phasers to stun' and we are done.

What is it like to live in the Star Trek universe?

Well, for one thing, you know that Cartesian Dualism is wrong, and that souls do not exist.

Why? Because Captain Kirk says 'phasers to stun', instead of 'grr.. aargh.. brainszzz.. foam..', which is what he would say if his soul was missing. If adolescent-oriented speculative fiction has taught us anything it has taught us that if you suddenly rip the soul out of a man's body, that is what he will say.

So that leaves us with two forms of property dualism, and materialism, and they all make exactly the same prediction, which is that Captain Kirk gets out the transporter station at the other end and heroically saves the situation.

Now in Star Trek, nobody thinks anything of this. It is obvious why. Transporters have been used over and over again and everyone is used to them and it would just be too horrible to think that they had been ripping people's souls away and leaving them as epiphenomenal zombies.

As Yudkowsky has written: "How sure are we of this terrible fact?" "As sure as we can be in the total absence of evidence".

So in the Star Trek universe, I step into the transporter without the slightest worry, and it rips my soul away and I am dead. But there is a new me, and he goes home to my wife and they are both quite uncomplicatedly pleased with that.

Maybe. That's what happens if Property Dualism is true. Ouch.

The Property Dualists can argue amongst themselves whether the new guy is conscious or a zombie, but no-one but him is ever going to know one way or another, and if he is a zombie there isn't even anyone there to know. Even if my wife is Mary the Colour Scientist she is not going to have a clue about any of this.

And actually I think I do Property Dualists a disservice here. Some of them (Chalmers e.g.) think that everything will still be ok, because not only is the new guy conscious, he is me.

So let us call them Property Dualists of type C, and let us from now on not bother to distinguish them from the Materialists.

Materialism is the default, obvious, scientific, rational form of philosophy where the mind is made of quarks and neurons and stuff and they all just bang around and stuff like consciousness and free will and stuff are a bit puzzling but words like 'emergent' tend to get banged around and everyone pretends not to notice that they deny the central fact of human existence which is impossible to express but that I personally feel very strongly about even though I totally agree that if it wasn't true I would be writing this anyway.

You may guess from this that my sympathies are not far from the Materialist and Property Dualist of type C positions, even though the first seems wrong and the second seems mad.

But I am going to create a new category unifying the wrong and the mad, who seem to make the same predictions even from a subjective experience point of view, and call it R, for right-thinking people.

So if you are a right-thinking person, like everyone in Star Trek is, you do not think it a moral obscenity that Captain Kirk goes home to his wife after a day being transported around.

But imagine now that there is a terrible transporter accident, which results in Captain Kirk's body being disassembled a minute fraction of a second after it would normally be.

If you find that terrifying, make the minute fraction of a second shorter as need be until it is not terrifying.

If you do not find that terrifying, make the interval a bit longer.........

Captain Kirk goes into the transporter, and the scanning process happens, but Spock is still there on the other side of the room, and raises an eyebrow in surprise, and Kirk, realising what is about to happen, shouts 'Stop it!', but it is too late. There is a click, and the disassembly beam comes on, and Captain Kirk is killed.

Later on, Captain Kirk goes home to his wife.

I think at this point I can say vAA!

Killing is hard to define in the Star Trek universe.

Now you tell me what is so different about it from our own dear universe where we have the Sanctity of Life.

Can Murder be Immoral in Star Trek?

I claim independent invention of these ideas, as a little boy watching Star Trek.
As an adolescent I even considered writing them up, but there wasn't Blogger. Anyway apparently they're pretty mainstream philosophy, cause I've just read about them in a paper by David Chalmers, and he references Derek Parfit, who must have been watching Star Trek at about the same time. And I've written them up using the standard language.

Right. A transporter works by scanning someone's body, to find out where all the atoms are and how they are moving, and then transmitting that information to somewhere else, where the body is reconstructed.

And this works by what in Star Trek is called 'magic'.

But can it really be done?

Well, I don't see why it would contradict any fundamental physical laws. It looks like it might be non-trivial in the extreme to actually do it, but how close could we get practically?

Here's a plan. Take a scan of someone's body which shows you where all the cells are. (Cells come in discrete types. Surely this is hard, but surely it is possible.)

Take a stem cell from the someone, and clone and specialize by devious means that cell, until you have enough of each type of specialized cell.

Put those cells back together in the pattern you stored earlier.

And bingo! There may be some technical difficulties, but I reckon they are solvable, and I reckon what you got at the end is a pretty good copy of the original guy, except that probably he feels a bit unwell on account all his hormone balances are wrong, but with any luck this does not kill him and a night's sleep or so and he is cool. And in fact maybe he is better off, but that is for another day.

Now the problem here is that suddenly you have got two guys! Because we've totally forgotten to disassemble the previous one while we were scanning him.

And at this point there are various predictions we can make.

If we are Religious, or we are Cartesian Dualists (or members of an allied trade), then we imagine that the second guy has no soul. And we predict that he falls on the floor and flops around and maybe says 'brainzz...' or something.

If we are Property Dualists of the type that I will call type I, then we imagine that the second guy has no epiphenomenal consciousness. But we do not predict that he will start with the 'soulzzz...', because this is an epiphenomenal difference and so it can make no difference to our physical predictions, so there is absolutely no way we can tell the difference between our position and :

If we are Property Dualists of the type that I will call type II, then we imagine that the second guy does have an epiphenomenal consciousness because one just goes with having a brain. But again, this is an epiphenomenal fact, so it makes no difference at all between our predictions and those of :

The Materialists: The second guy, having an identical brain to the first guy, is fully conscious, fully a person.

So that's nice, because now we have an experimental test that we can do, that will rule out or rule in Cartesian Dualism/Souls.

In fact I will volunteer for it, given that some nice guys somewhere will give me say £250,000 because if the second guy starts going around saying 'I am John Aspden and I am a fully conscious being and it is my clear perception that I used to have the following things', I may well be able to convince him that he is the Second John Aspden, but he is going to find this a lot easier to accept if he has his own narrowboat and some money and so on.

On the other hand, although having an identical twin who knows all my secrets might be a bit of a head-fuck, I imagine it will be nice to have someone who is interested in the same sorts of things to have lunch with from time to time. And I am sure my parents will be pleased, because now I will be at home eight weeks a year instead of just the four.

But if the Souls guys are right, then there is going to be a copy of me rolling around foaming at the mouth and the only option is going to be to beat him to death with a shovel. And I am going to need £250,000 of recreational drugs to get over it.

Wednesday, July 25, 2012

Sock Problem

Got it out of a book. Dead Easy. Cat could do it.

A priest has some red socks, some black.

He takes two socks out of the drawer. The chance of his getting a red pair is 1/2.

How many socks has he got at least?

What about if he's got an even number of black socks?

Any more?
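If you want to check your answers by brute force afterwards (which obviously spoils the puzzle, so look away now), a search like this will do. The bound of 200 socks of each colour is arbitrary:

```python
from fractions import Fraction

def red_pair_chance(red, black):
    """Exact probability that two socks drawn without replacement
    from `red` red and `black` black socks are both red."""
    n = red + black
    return Fraction(red * (red - 1), n * (n - 1))

# all (red, black) drawers, up to 200 socks of each colour,
# where the chance of a red pair is exactly 1/2
solutions = [(r, b)
             for r in range(2, 200)
             for b in range(0, 200)
             if red_pair_chance(r, b) == Fraction(1, 2)]
print(solutions)
```

Using `Fraction` keeps the arithmetic exact, so there is no chance of a floating-point near-miss sneaking into the answer.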

Hat Problem

Imre Leader via Paul Cook. Two two-hour thinking sessions got me not very far with this, and then the following morning I woke up with the answer fully formed in my mind. I felt a bit robbed. Like I'd been told the answer while asleep. Stick at it. Those who like this sort of thing will like this very much.

There are six people.

For each person, you roll a dice, and write the number on the hat.

They all close their eyes, and you put on each person his hat.

They all open their eyes. Without any communication whatsoever and without looking at their own hat they have to guess the number on their own hat.

If any one of them gets it right then you give them all gold and elsewise you shoot them, or something. You know the score with psychotic maths problems.

Anyway the guys are conferring before the game. What is their plan?

In general choose n, a natural number. n people, n hats, n-sided dice, n pots of gold, n bullets.
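No spoilers, but if you want to test a candidate plan before betting your life on it, here is a little checking harness (my own construction, nothing standard) that tries a plan against every possible assignment of hats:

```python
import itertools

def wins_always(strategies, n=6):
    """True if, for every possible assignment of hats, at least one
    person guesses his own hat correctly.

    strategies[i] is a function from the full tuple of hats to person
    i's guess; it is on your honour that it ignores hats[i]."""
    for hats in itertools.product(range(1, n + 1), repeat=n):
        if not any(strategies[i](hats) == hats[i] for i in range(n)):
            return False
    return True

# e.g. the plan where everybody just guesses 1 gets everyone shot
# (consider the all-twos assignment):
naive = [lambda hats: 1 for _ in range(6)]
print(wins_always(naive))  # False
```

There are only 6^6 = 46656 possible assignments, so exhaustive checking is instant.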

An Argument I've seen a Million Times

You got these two things, Alice, and a Banana.

Alice is 'hoopy', but the Banana is 'not hoopy'

But suddenly you are noticing this thing Alana, which is between Alice and the Banana.

And the question is: Is Alana hoopy, or not?

So perhaps you are unwise enough to pick one of these answers. Perhaps you say that Alana is not hoopy.

Aha!, says your interlocutor. Between Alice and Alana is this new thing, which is called Alica.

And at the end of this very long process you are ending up with Albracadabra von Helsingstein and Albracadabra von Helsingstien and only an idiot would claim that one was hoopy and one was not hoopy and so something somewhere must be terribly wrong.

Now I apologize that I have not used the traditional metasyntactic variables here, like x and y and P and so on and so forth, but I think it is maybe cuter with the Banana, and less frightening.

If you want a couple of real examples try (Alice, Banana, Conscious), or (Alice, Rock, Alive), or (Alice, computer simulation of Alice, Morally Significant), (At A, At B, Where) and so on.

If you have maybe spent too much time in maths lectures in your misspent youth you might just say:

Suppose P is a continuous function from a connected space into a discrete space, then P is one-valued.

And this is so obvious that I do not think it is thought of as a theorem. Sorry Dr von Evilfiend if it is really von Evilfiend's Lemma or something.
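Not that anyone should need it, but for completeness here is the proof written out, assuming only the point-set topology definitions:

```latex
\textbf{Lemma.} Let $P : X \to D$ be continuous, with $X$ connected
and non-empty, and $D$ discrete. Then $P$ takes exactly one value.

\textbf{Proof.} For any $d \in D$, the singleton $\{d\}$ is both open
and closed in $D$, so $P^{-1}(\{d\})$ is both open and closed in $X$.
Since $X$ is connected, its only clopen subsets are $\emptyset$ and
$X$ itself. So each $P^{-1}(\{d\})$ is either empty or all of $X$,
and $P$ is constant. $\blacksquare$
```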

I am sick of it.

I claim.

From now on it is von Aspdenstein's Argument, and if you want to show that consciousness is not possibly an on-off thing as long as we can move neuron by neuron between Alice and a simulation of Alice then all you have to say is "vAA!", and you are done, and you can move on to discussing exactly where the discontinuity might be, or whether the property might be a function into a connected space like [0,1] or R, or C, or a Möbius strip and so on.


I never wish to read this argument again, philosophers.

So next time you are proving that there can't only be five social classes or forty five colours or three musketeers because blah blah blah...

If you feel you have to write it out in full then at least put <vAA!> </vAA!> round it so that I can skip efficiently.

Monday, July 23, 2012

Help! I'm Spalding

I think I am about to join a cult.

This ( is the clearest expression so far of the beliefs of this cult. Nothing in it seems surprising or new to me. All of it seems true, except the bit at the end where Luke imagines what a positive singularity might bring, which seems a bit pedestrian and conservative, a bit like an unimaginative Christian's idea of Heaven. And I suspect that Luke knows this perfectly well and is toning it down a bit so as not to scare people.

I believe, as I have mentioned before, that a singularity will occur in the near future, and that its most likely effect is to kill every living human and leave the universe boring, worthless and repetitive.

This belief appears these days to me as well founded as my belief in Fourier Analysis.

Which is to say that I don't understand it intuitively in the same way that I understand addition, but that I can examine every bit of it and see no obvious flaws, and that many of the component parts seem intuitively obvious. I wouldn't be surprised if the details weren't quite what I'd imagined, but I'd bet my life at very poor odds on the general framework.

When I first read about the idea of a paperclip maximizer it immediately struck me as obvious and unarguable and a very real threat.

I filed it in the mental box reserved for sexy doom scenarios which may very well be true but which you can do nothing about, and reacted in my usual way (Global Warming: Say Bollocks to It and Enjoy the Sunshine While You Still Can, etc..).

What I didn't initially believe is the idea that there might actually be something we can do about it.

After a couple of years of thinking about it, and reading the writings of Eliezer Yudkowsky, I'm starting to believe that there might indeed be something we can do about it. That we might be able to turn it to our advantage. To make a God who will act as we would wish a God to act.

And I certainly believe that if we don't, we're doomed. One way or another. We are acquiring more and more of the powers of gods, and seven billion half-witted gods aren't going to be sharing a single world in any great comfort as far as I can imagine.


I have worked not terribly hard at all to build myself a pleasant and enjoyable life in a city I love with friends that I love, and I feel that if I ignore the coming Singularity everything will be great and I can carry on like this for the next thirty years and die confident of having lived a life as happy as any human can ever hope to live. Which was always the plan.

And that, sometime, probably after I'm safely dead, everything will suddenly go completely pear-shaped without very much warning, and everything that I cared about will suddenly cease to be.

To be honest, I am not terribly uncomfortable with that.

But if this 'positive singularity' can be pulled off somehow, then I might end up immortal, and happier than any human can possibly imagine.

So this looks a bit like Pascal's Wager. A very small chance of a very large reward.

The small chance has almost no dependence on my actions, and certainly no dependence on whether I 'have faith' or anything silly like that. So I could just carry on as is and reap the vast rewards anyway, if they're there for the reaping.


It occurs to me that I am being underconfident, both in my beliefs and in my abilities. Maybe there is something I can do to change the probability. One obvious thing I could do would be to work a bit harder and donate the extra money to the Singularity Institute.

A minute change to a tiny probability of a vast reward. Paid for by using time that I'd usually spend reading and thinking and watching films in some ghastly office working for venal idiots and giving the money away.

I never give money to charity. I tried occasionally in my youth, but I found that the charities respond to this by sending you vast amounts of disturbing literature about starving people and horrible diseases and endless numbers of emotionally affecting pictures of suffering animals and it had the opposite effect on me to that which was doubtless intended.

The Singularity Institute hasn't done this. It has confined itself to creating large quantities of entertaining philosophical argument and leaving it around where I can find it. For the sheer pleasure of reading Eliezer's philosophy I owe it something.


But Jesus! guys, I know what I'm experiencing here.

This is a religious conversion, pure and simple. This is what the founders of the Jehovah's Witnesses must have been thinking, when they discovered the one true way to read the Bible. This is what the Latter-day Saints and the Calvinists and the fucking Scientologists for fuck's sake must feel like as their pathetic brains fall for the lame arguments of con-men who have found a clever way to extract money and power from bunches of bloody fools by explaining the mysteries of the universe to them in a way that they can actually "understand".

I don't fall for this crap! I take nothing and no-one seriously (except perhaps myself), and I can feel my natural contrariness and scepticism calling sadly to me as I contemplate jumping off the cliff.

Once I'm gone, I'm gone. Once I'm publicly committed to this foolishness, I'm going to turn into a scary swivel-eyed fanatic who can't listen to counter-argument and won't accept that he's wrong out of sheer terror of looking like an idiot and admitting that he's thrown away his life in service to an idea that is just a bit stupid.

And the only thing I can think of is Doctor Who. I can't remember the episode.

The Doctor needs some keys, or something. They are locked in a safe. And there are Daleks coming, or something. And there is this guy, who is a decent and honourable man, who has sworn not to give the keys to anyone under any circumstances.

So he is understandably a little reluctant to give the keys to the Doctor. And the Doctor says 'And when you are standing on your burnt-out world in the shattered remains of your civilization, at least you will know that your personal honour remains intact.'

That seems a powerful argument to me. I think I am brave enough to look like a fool in front of myself if it might save the world.

And I really think it might.

Please help.

I need counterarguments. Read Luke's lovely clear summary of what I have come to believe and tell me whether it's just a load of horseshit for some easy reason I have missed.

Since I need counter-arguments, I am going to try and come up with some myself.

I have gone into King's College and I have hugged my favourite tree, that was my friend when I was an undergraduate, and I have asked it what I should do. And it responded without hesitation "You know what you should do. Yudkowsky is possibly right and no-one else even seems to care about the problem. Most people making counter-arguments are just obviously wrong."

Well. I am not enormously interested in the opinions of vegetation per se, but that lets me know that my unconscious mind has already gone over to the enemy.

Try again.

The Singularity. The Rapture of the Nerds. Eliezer Yudkowsky as the Messiah. Immortality. The End is Nigh. Give us Money. How much more pattern-matching to a bloody religion does a man need?! Run away. Religions are bad memes that use the minds they infect to adapt, the better to infect more minds, and clever sceptical people fall for them all the bloody time, and what makes you think you are special? You have been predicting for ages that religion will evolve round humanity's new sceptical defences until it is capable of infecting any reasonable man again. You expected it to take longer, but maybe this is what is happening.

Nothing about me is special. That's what I'm scared is happening. Maybe if I fall for it I can actually help to make it even more convincing. I always thought I'd be a good priest if I actually believed in anything.

And yet. And yet. What if the Witnesses had been right? What sort of bloody fool would have ignored the evidence of his own mind and damned himself if they'd been right? I have to decide what the truth is, and I can only use my own mind to do it. And my mind is rubbish. That is the whole problem.

So maybe I should wait. Wait until the religious feelings have died down a bit. See how I feel in a year's time.

If I feel like this, there must be many other people who feel like this. Maybe once a few people give significant amounts of money to the SI there will be an avalanche of money going their way.

They seem like nice guys. If they win they'll save me anyway. I don't have to do anything.

Maybe I should bung them £1000 or so and be public about it, in the hope that that will maybe contribute to the avalanche beginning. I've been known to spend more than that on opera tickets, so it's not going to make me look that foolish if it turns out that SI are just a load of blowhards.

Trust your own mind, John. It doesn't usually fail you in embarrassing ways. And it's not like you really hate embarrassment that much anyway. What the hell are you worrying about? Why is your whole brain full of flashing alarms and ringing bells?

There needs to be a name for this emotion. Conversion-terror or something. Perhaps we could play Liff with it and pick the name of a nearby village. Spalding.

I am in a state of terrible spalding. Please help. If you have counterarguments to singularity-nonsense that I haven't heard I need to hear them before I turn into a full-blown raving religious idiot.

Thursday, July 12, 2012

Cox-Zucker Machine

Bets are invited on whether this is a real thing:

Thursday, July 5, 2012

Shangri-La Experiment: Prior Beliefs and Biases

As a good Bayesian, I should work out what my priors and predictions are. 

As a good sceptic, I should work out what I want to be true so that I can be suspicious of it.


Shangri-La: Appetite will collapse. Weight will drop significantly. Over a month belt notch will move to 3 or 4.

Willpower: Appetite won't change. Weight will increase. Belt will move to last notch and probably become unwearable in the evening.

Helplessness: Appetite will decline very slightly to compensate for extra calories (probably unnoticeably). Weight will stay pretty much the same. Belt notch will stay as it is.

Call these S, W, and H for short.

Current beliefs

S: fad diet. lots of people think it works, but anecdote only. You get the same evidence for chiropractic, aromatherapy, homeopathy, prayer, acupuncture, etc, etc.

No rational reason that this should be true except that it's got a story that sounds good to me.

Give it a 1% chance. There's a stack of crap in the world, and this is just some that happens to appeal to me.

W: Very much the standard model. It's obviously and uncontroversially true that weight change = calories in - calories out. The bit I don't like is that 'eat less and exercise more' doesn't seem to be good advice. The world is full of skinny people eating whatever they like and hopeless fatties trying to live on lettuce and rice and making themselves completely miserable. It's probably true that if you eat less and exercise more you'll lose weight, I just don't think it's humanly possible to defy your basic drives by exercise of willpower.

Nevertheless, this is what most people believe, and that gives it respect. I'll give it a half chance of being the truth. (i.e. I notice my confusion and make no prediction)

I also notice in passing that I dislike this model because it means that all the smug types who think that fatties deserve it and that anorexics should just stop starving themselves to death for no reason are right.

And I also note that I dislike the 'soft sciences' and medicine because even though their subject is very hard they pretend to scientific infallibility as if they were physicists. So I love it when they're wrong even though I'd like them to find out the truth (and I accept that they are trying!)

So I'm really badly biased against this model, and I should watch myself for evidence that I'm frigging my own experiment in order to attack it.

H: I hate this model but it looks more true. It says that your weight has a set point that you can do almost nothing about. Fatties stay fat, the scrawny stay feeble. By immense exercise of will you can change yourself slightly but once you stop with the willpower, you'll go back to normal.

This is what I think is true and I've got personal and anecdotal evidence that it is. I'll give it a prior of 90% truth. I'd be surprised if it was wrong, but not nearly surprised enough to disbelieve the evidence.

I notice that I'm biased towards this model because it's mine and I've advocated it publicly. I'm also biased against it because I don't like helplessness. I'd prefer either of the others to be the truth.

Immediately I notice that my priors add up to 141% (so what the hell is my brain thinking?).

I'll assume that there are only three possibilities and normalize (roughly) to:

S: 1%, W: 35%, H: 64%

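The normalization is simple arithmetic; here is a quick Python sketch using the three prior figures stated above (1%, 50%, 90%):

```python
# Priors stated above: Shangri-La 1%, Willpower 50%, Helplessness 90%.
priors = {"S": 0.01, "W": 0.50, "H": 0.90}

total = sum(priors.values())  # 1.41 -- the "141%" noted above
normalized = {k: v / total for k, v in priors.items()}

for model, p in normalized.items():
    print(f"{model}: {p:.0%}")
# -> S: 1%, W: 35%, H: 64%
```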
I also note that I believe the following things, and that they make no sense in terms of model W, but that none of them are relevant to this test so I'm ignoring them:

Smoking moves your set point downwards. Unnaturally fast carbohydrate foods (rice, wheat, sugar, potatoes, etc) screw with the system and make you more hungry than you should be, leading to steady weight gain and eventually morbid obesity. The Lord alone knows what is going on with anorexia but it's probably something to do with long term voluntary starvation somehow moving your set point unhealthily low.

Shangri-La Diet Experiment

I'm going to try to run a fad-diet experiment on myself. Here's the protocol:


For the whole of July, when I first wake up every morning, I'm going to drink two tablespoons of Sainsbury's Mild Olive Oil. As far as I can tell, this stuff is absolutely tasteless. The only way you can tell you've swallowed it is that your teeth feel oily.

After that, for one hour, I'm not going to let anything that tastes of anything into my mouth. Not tea, not toothpaste. Water is ok, but nothing with flavour. It strikes me that the easiest way to manage this will just be to go to sleep for an extra hour, but if I don't feel sleepy then I'll just read a book or go for a walk or something.

That's it. After that hour is over I'll just completely forget about it, and eat what I like when I'm hungry, and do as much exercise as I feel like.


I have three models for what will happen, and I'm trying to distinguish between them:

Shangri-La: Seth Roberts is right. My appetite should lessen dramatically, and as a result of eating less my weight should drop by a considerable amount. Some advocates have been reporting 2-3lbs a week. That's a vast difference, nearly a stone over a month, and should be easily noticeable. I'd imagine that my favourite place for a belt will move a couple of notches.

Willpower: The 'standard model' weight change = calories in - calories out. Notable both for its obvious thermodynamic truth, and for the persistent hopeless failure of its (naive) prescription (try to eat less and do more exercise to lose weight) over many years. I'll be consuming around 300 kilocalories a day extra, and so after 30 days I'll have taken in 9000 extra kilocalories, which at the usual reckoning of roughly 3500 kcal per pound of fat is about 3lbs. That should be noticeable.

Helplessness: My own theory is that a set point weight exists for each person, and that any exercise you do will be compensated for by increased appetite. (Fast carbs can screw up this system and cause obesity, and it appears that smoking can screw it up and cause leanness.) On this model the extra 300 kcal should be precisely balanced by a loss of appetite at other times, and at the end of the month there should be no difference at all.
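The arithmetic behind the Willpower prediction can be checked in a couple of lines. The 3500 kcal-per-pound figure is the standard rule of thumb for body fat, an assumption on my part rather than something measured here:

```python
# Rough arithmetic for the Willpower model's prediction.
extra_kcal_per_day = 300   # estimated extra intake from the morning oil
days = 30                  # length of the experiment
kcal_per_lb_fat = 3500     # assumption: common rule of thumb for body fat

surplus_kcal = extra_kcal_per_day * days
predicted_gain_lb = surplus_kcal / kcal_per_lb_fat
print(f"{surplus_kcal} kcal surplus -> about {predicted_gain_lb:.1f} lb gained")
# -> 9000 kcal surplus -> about 2.6 lb gained
```

Call it 3lbs, as above: either way, it is comfortably more than a belt's worth of nothing, so the Willpower and Helplessness models should be distinguishable by the end of the month.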

There's obviously also the confounding factor that I'm probably still adjusting to semi-giving up smoking. I can't say whether that will currently be moving my weight up or down, but let's just assume that after six months the effect either way will be small.

Current State

At the moment I don't know my weight and I don't care either. Within reason, muscles are good, fat is bad, and they both weigh something. What I care about is the increasing spare tyre round my waist.

My waist measurement is currently a whopping 37". In the morning, my belt feels comfortable on its second notch. On the first notch it doesn't hold my trousers up. On the third it feels a bit tight and I get the urge to loosen it when I sit down. As the day wears on, I tend to need to loosen it a bit.

Wednesday, July 4, 2012

Shangri-La Diet

Five years ago now I gave up rowing, ballooned horribly in weight, and started reading up about diets.

As a result of reading various plausible-sounding modern theories about human metabolism, I changed what I ate, cutting down the quantities of various fast carbohydrates like bread, pasta, potatoes, and rice. I also stopped drinking fruit juice and tried to eat more fruit and vegetables.

This worked a treat. In about six months I was no longer overweight. As an unexpected bonus my lifelong tendency to short periods of depression disappeared, which was a greater benefit than the weight loss.

And my weight has remained stable, and my black moods gone, for the last five years.

I've completely stopped worrying about my weight and no longer track it or think about what I eat. (Although it is still true that what I eat is largely what I learned to like while weaning myself off the fast carbs, so I'm probably eating much as I would be if I was thinking about it. I would say my diet these days is probably mostly fry-ups and fruit. There are also significant calories from alcohol.)

Just before last Christmas, I remember noticing that the belt I have worn since I was a student was now on its tightest setting, and I was thinking about making another hole. I was fitting easily into all my clothes, including some I'd had as a student, and the various 36" trousers I got hold of in the post-rowing ballooning period felt very baggy and wouldn't stay up without a belt. I was actually starting to wonder about going to the gym to keep my muscle mass up. (If I had to choose, I'd be fat rather than skinny. Luckily there is a nice broad happy medium between.)

When I went back home to Yorkshire for Christmas, my mother did her usual trick of filling the environment with unlimited supplies of delicious and entirely evil and unnatural foods (which is her way of showing me that she loves me, and although I wish she wouldn't, there is simply nothing to be done if I want to stay friends with her), and over the two weeks of the holiday I went up an entire belt notch. This happens every time I go and stay with them, and I figured the new weight would come off again after a couple of months like it usually does.

But it hasn't done. In fact I've been getting fatter and fatter, and I'm now having trouble fitting into my clothes again.

Just this morning I was wondering what had changed, when I remembered that (after a series of gigantic benders that had left me feeling disgusted with myself and worried about alcoholism) I'd pretty much given up smoking and drinking for the first three months of the year. And although I've taken both up again now, my smoking is more 'a couple of cigars on a Friday night', than a regular habit like it used to be.

And of course smoking is traditionally associated with weight loss. Apparently lots of young girls take it up specifically because it's an appetite suppressant.

Reading up on it, there's apparently some evidence that nicotine lowers the body's set-point, which is supposed by analogy to be the dial on the thermostat of weight.

Apparently if I keep on smoking at my new low rate (which I intend to), the set point will eventually make its way back to something sensible and the weight will go away.

But I wonder if I can cheat.

A guy called Seth Roberts invented a few years ago something called the 'Shangri-La Diet'.

It sounds so much like a fad that if I noticed it at all I dismissed it entirely in my earlier search. My attention has recently been drawn to it by the fact that several people whose intellects I trust appear to take it seriously.

There's lots of anecdotal evidence that it works, some experiments that show that some of the proposed mechanisms in it work in rats, one theory that it works by a mechanism completely different to the one supposed by its inventor, and unless I've missed some, no controlled studies whatsoever to find out whether or not it actually works on human beings. I imagine that it sounds so weird that nobody in nutrition wants to waste their time proving that it doesn't.

So I'm going to try it on myself. It's completely mental and involves eating oil. Oil is really high calorie. My curiosity is roused. It should be easy enough to tell if it works or not!