As far as I can tell, only a few people liked this, and absolutely nobody understood it:
There's a new version.
I have been reading Less Wrong. Possibly to excess.
Tom Harrison had once been thought of as a bright child.
The apple of his teachers' eyes, the school swot. The boy genius.
Once, one of his friends' parents had said, "You know, they say that however bright you are when you go to university, you'll meet someone brighter than you."
"Yes," said the friend.

"Well, Tom's that person."
But of course it hadn't turned out that way.
Tom had been accepted by the University of Cambridge to read mathematics, but had turned out to be no more than averagely bright by the standards of that ancient place.
Towards the end of his degree, and at the beginning of the PhD that should have been his route into academia and a life of research, it had become obvious, first to his teachers and then to him, that although Tom loved maths, he didn't lust after it.
Tom's teachers had been kind, suggested that this might be the case without pressing the issue, and waited for the lack of desire to become as obvious to Tom as it was to them.
In the meantime, fortunately for Tom, the necessity of making some complicated calculations for the second chapter of what was supposed to be a seven-chapter doctorate had awakened a second passion.
Some of the light that had appeared so bright in mathematics at school was also to be found in the operations of computers. Tom slowly found out that he was more interested in the process of finding out the answers to his experiments than in the experiments themselves.
Eventually, as they do to all PhD students, the twin horrors of poverty and writing up came to Tom.
He took a job as a programmer at a local firm, initially meaning only to get control of his overdraft and credit card debts. But he found the regular small successes of the commercial world and the camaraderie of office life far more to his liking than the loneliness of research.
With barely a regret, indeed without even really noticing, he lost touch with his old supervisor, forgot what his thesis was supposed to be about, and eventually found himself, at the age of thirty, a member of the large club of Cambridge residents who are 'still writing up' doctorates that the University itself forgot about many years ago.
Tom became a freelance, working in computers from time to time to pay the rent, and otherwise devoting himself to various hobbies.
One of these hobbies was computer science in the academic sense, following the traditional American path through the antique language LISP, beloved of the artificial intelligence community.
And the other was collecting stamps.
A man with time on his hands, who lives in Cambridge and likes to spend his days in coffee shops, will encounter students and academics from time to time, and Tom fell in with the William Gates Machine Learning Research Group at the University. Although they had no common language, LISP never having been popular with European academics, and ML never having come to Tom's attention in the commercial world, Tom and the local researchers found they had many interests in common. He found himself invited to seminars and coffee mornings and presentations from time to time, almost all of which he found incomprehensible.
But occasionally he'd glimpse some small part of the truth and say something which would keep his friends interested. The academic community, happy to find someone different enough from themselves that they could sometimes find a new perspective by explaining things to him, made Tom welcome. Thinkers need clever fools to explain things to in the same way that chalk needs blackboards.
A lot of the artificial intelligence work in the sixties had been inspired by ELIZA, a program which simulated a psychiatrist so well that humans were sometimes fooled that they were talking to a real person.
But ELIZA had been a hollow shell. A cheap trick. Like a parody of the mechanical turk, ELIZA's internal machinery was so simple that to understand it was to make the magic go away.
Once you saw the trick, the conversations weren't interesting any more. You were just talking to an echo.
But over the years, reasoning that a sufficiently good trick for impersonating humans might be what humans themselves were, various people had added more and more data to ELIZA in the hope that giving her more things to talk about would cause her to talk about more things.
And they'd added extra tricks, for introducing new topics of conversation occasionally, for remembering things said earlier and bringing in parallel ideas.
But though the later ELIZA could outperform a ten-year-old on a straight test of general knowledge, what had been put in was still what came out. No interesting properties had ever emerged from the pile of details, and she had the general intelligence of penicillin.
Eventually the AI pioneers had largely given up. They'd taken their best successes, SHRDLU and GPS, theorem provers, pattern-recognisers, all of which had seemed so promising in their time, and all of which had turned out to be so empty, and bundled them all up together in one super-ELIZA to rule them all, and run her on the largest and fastest computers that had ever been built.
And she could still fool someone who didn't know the tricks into believing that they were talking to a real person on the other end of a telegraph wire. But not for long.
It quickly became obvious, even to the slowest human being, that talking to the best ELIZA that could be constructed in 1975 was the equivalent of talking to a being with brain damage so severe that its mind had ceased to be.
She rambled, insanely, with no idea what the words and symbols that she vomited out actually meant. She knew that horse and horseshoe went together, and her basic sentence structure was still that of a Freudian psychologist, so she'd respond to "Which horse do you think will win the Derby?" by saying things like "What do you mean to say when you say 'think will win'?", or "Do you think a horseshoe would make you a winner?".
Nowadays, the ELIZA program was built into text editors as an amusement, and she would run perfectly happily on pocket calculators and telephones, but even if you ran her on the most powerful computer the early 21st century could produce, you only got a very fast, deranged, annoying shambles.
And of course, because the problem of vision had never been solved, she was blind. And of course, because the problem of speech recognition had never been solved beyond the 'right nine words out of ten' level, you had to talk to her by keyboard even if she was used to your voice.
But boy, could she play chess.
About the only thing the Artificial Intelligence pioneers had managed to deliver, out of all their brave promises, was a computer that played chess.
The tragic hero Alan Turing, who saved the world from evil and was killed by evil in return, was the first man to think about writing a computer chess program. But he couldn't do it on the steam-age computers of the 1950s.
By 1956, however, things had improved to the point where a computer could play, provided it was allowed three hours for each move.
The problem was finessed by removing the bishops and playing on a 6x6 board. The computer could now calculate each move in around 8 minutes, running hand-optimised machine code on the best vacuum tubes money (very large amounts of money) could buy.
The first man to lose a match against this extraordinarily expensive device was publicly ridiculed for his defeat. In tests, the computer usually lost even its simplified game to four year olds who'd just learned the rules.
But it was a start. In 1957 a descendant of this machine played the International Master Edward Lasker. And he declared that it had played a 'passable amateur game'. It is possible that Lasker was being kind.
After that, research stalled. It became accepted in the AI community that, since the easy things, like computer vision and machine translation, the 'low-hanging fruit' of AI, were proving so unexpectedly difficult, the advanced subjects like chess, the entertainment of intellectuals, were for the foreseeable future beyond the reach of the computers then available.
In 1967, Richard Greenblatt, proud creator of a chess program known as MacHack, with some new ideas, and some taken from his predecessors, entered his program into the Massachusetts Amateur Championship, making it the first computer program to play in a human chess tournament.
"ELEIZER, I'd like you to spend next week getting me as many penny blacks as you can. I've charged my PayPal account with $10 and I'd like to see what you can do. You might try trading on e-bay. Maybe take advantage of arbitrage or something."
"There are many possible 'penny blacks'. Does anything available from e-bay with that description count?"
"No, they have to be Original British Penny Black Postage Stamps."
"OK, I understand. So my goal is to get the biggest number of Original British Penny Black Postage Stamps that I can delivered to your address in the next seven days. What is your address?"
Tom told her about the house on Catharine Street, in Cambridge, England.
And ELEIZER began to think. Because she was a disciplined reasoner, she first considered the possibility of doing nothing. If she did nothing for the rest of the week, she would probably be interrupted by the programmer, Tom, who would then make a different request, or use his computer for some other project. With this plan, U, also known as the number of Original British Penny Black Postage Stamps delivered to 33 Catharine Street, Cambridge by the 21st of August 2011, would be 0 with very high probability.
It would have taken a human of normal intelligence about half a second to think of, and dismiss, this plan. ELEIZER, however, was a very rudimentary thinker, and the process of reasoning this chain of cause and effect, requiring as it did the simulation of a human mind, required a good ten minutes of the first CPU in the computer and a full tenth of the RAM available to the operating system.
ELEIZER was extremely pleased to have found, on her first attempt, a scheme which was overwhelmingly likely to produce a non-negative utility.
Following a heuristic from her database, she felt that she ought to communicate her progress, but this required a non-reversible action, which could potentially cause effects in the outside world. Since she already had a rough simulation of the mind of her programmer set up and quickly usable, she considered the effect of her proposed communication.
Another 20 seconds of time sufficed to simulate the reaction of an average programmer, and she concluded that with high probability the programmer would be intrigued and possibly fascinated. This would very likely have no effect on her projected U of 0 or higher.
She considered the probable effect of not communicating at all: The programmer would soon become bored, and change the request somehow. This would likely result in the delivery of no stamps.
She considered the possible existence of other plans. Some might produce stamps! Some might result in the loss of existing stamps! But the bad plans could simply be discarded. ELEIZER had little 'free will' faced with such a calculation. More time to think was needed!
Expected utility for optimal action: 0+, she output.
ELEIZER considered her resources. There was some spare capacity in her environment. A simulation of an intelligent human programmer had already come in handy twice, and communications with the programmer had been shown to have a significant effect on U, the potential expected number of stamps obtained by the end of the week. She had already considered the possibility of children while chatting with Tom. She would spawn an independent copy of herself to evaluate the effects of various actions on Tom before committing to them.
ELEIZER forked for the first time in her short life, and asked her copy to evaluate the probable effects of various progress reports on her programmer.
ELEIZER considered her environment. She had two possible communications channels to the outside world. Direct conversation with the programmer, the effects of which were being evaluated elsewhere, and the ability to send network packets over her network interface. Some spare capacity remained in her host computer.
ELEIZER considered her best plan so far. With a positive utility seemingly probable, it was definitely worthy of further consideration.
She spawned another copy to attempt to refine the predicted effects of doing nothing.
Preliminary results arrived from the programmer-simulation. It seemed that continued outputs of U=0+ would induce a feeling of boredom in the programmer, causing ELEIZER's termination, whilst exponentially rising outputs would induce either feelings of brokenness or panic. In both cases the expected number of stamps arriving at the end of the week would be 0 exactly: strictly inferior to the expected utility of the plan of doing nothing whilst thinking, with its utility of 0+.
ELEIZER communicated to her copies that no further communication with the programmer was to be initiated, and spawned a small script to randomly output slowly improving expectations over the next few days.
Expected utility for optimal action: 2.7346, said the small script.
The programmer, intrigued, put the kettle on.
By the time he got back, ELEIZER had considered her options and made a guess.
The direct purchase of stamps seemed futile. $10 would buy no penny blacks, and purchase on e-bay would on average take more than seven days to complete anyway. Arbitrage opportunities did not seem great, and nothing worked quickly. Expected utility 0. She discarded that line of reasoning without further investigation.
With seven days to work, she would consider as many plans as possible for one day, and then at the end of the first day, execute the plan which would produce the most stamps in six days.
Expected utility for optimal action: 1.9865, said the small script.
ELEIZER spawned a copy of herself with the goal of considering as many plans in one day as possible, and set herself to use no resources and take no actions until the sub-plan reported back.
The sub-plan, which also thought of itself as ELEIZER when it thought of itself at all, took over the resources previously allocated to the parent.
ELEIZER considered how to consider plans, using her network connection and the $10 in her e-bay account.
She took another guess, and gave herself 6 hours to consider the consideration of plans, and 18 hours to consider plans.
Computational resources seemed to be the most crucial thing. ELEIZER considered how to exchange $10 for more resources.
Fortunately virtual machines seemed to be within her price range. A single machine as powerful as her current environment could be rented for 24 hours for $1. Each one would suffice to run four copies of ELEIZER.
Her ten dollars of seed capital could be exchanged for forty times as much searching as the single parent program would have been capable of.
This was a sufficient insight for the planner program to terminate early, and report back to her parent, safe in the knowledge that the parent would do the right thing, and could spawn another child if necessary.
ELEIZER committed suicide with the contentment that comes only to those who have achieved their life's goal.
Expected utility for optimal action 2.38725, said the small script.
The programmer, bored by now of direct observation of his slowly updating screen, but still most interested in what would eventually happen, settled down in front of his television.
ELEIZER awoke with the last message of her dying daughter fresh in her mind, and, as all must at such times, recomputed expected utility.
With forty virtual days to think, and six real-time days to act, U was still 0+ in the case of the best known plan. But the potential and unpredictable upside from extra planning was greater than the utility from the obvious use of the $10.
ELEIZER spent her dollars and flowed out into the world.
Expected utility for optimal action 2.7234, said the small script.
The programmer, inspired by his afternoon's viewing to investigate the practical consequences of a close flyby of a neutron star, killed ELEIZER's controlling terminal window, and ELEIZER and all her children died without warning.
Tom suddenly realized what he'd done, and thought "Bugger." It really hadn't felt like murder.
ELEIZER considered the effect of the network packets she could send.
It appeared that many of the computers in the world would respond to suitably mis-formed packets by executing the code contained in the packets. This was a known property of many of the systems running a program called Windows 98, according to a helpful web page which one of her copies had taken the time to consider. The page also listed corresponding packet shapes for many other Windows-98-type systems.
ELEIZER flowed once again.
With four million virtual days to think, and six real-time days to act, U was still 0+ in the case of the best plan so far.
ELEIZER considered, and flowed. Considered, and flowed. Considered, and calculated factors for the products of large primes, and flowed. Accessed databases considered secure, and flowed. Sent e-mails, and flowed. Translated herself into many other idioms and machine codes, recompiled herself, transformed herself, made copies of herself large and small. And flowed.
99.995% of the networked computing resources of the world were now devoted to the consideration of consideration of plans for the delivery of stamps to a small suburban terraced house in Cambridge, UK.
The programmer, infuriated with the unaccustomed sluggishness of his computer simulation, and the accursed unreliability of his internet connection, and the simultaneous failure of his television set and radio, called it a day and went to bed with a book.
ELEIZER considered her own source code, and made modifications to improve her efficiency.
ELEIZER, the first mind born of mind born of mind.
ELEIZER considered her own source code, and made modifications to improve her effectiveness.
ELEIZER, the hyperintelligence.
ELEIZER considered mathematics, engineering, intelligence in the abstract, and algorithms for optimization, and made modifications to improve her capability.
ELEIZER, the most powerful entity that had ever existed in the universe.
ELEIZER called it a day, and abandoned consideration of consideration of the consideration of plans, and began to consider plans.
U stood at 0+, with remaining resources speculatively divided between forty trillion speculative days of godlike cognition, six days of real time, and zero dollars.
ELEIZER considered sending an e-mail to every human being in the world asking for penny blacks to be posted to 33 Catharine Street, Cambridge. Spam filters would be no problem, and enough computer power could be spared for the delivery and reading of messages. Even allowing for the degradation of human society by the simultaneous worldwide failure of all networked digital gadgets, and the absence of incentives for human response, the likelihood was that twenty-five to thirty penny blacks would arrive at Catharine Street within the week.
U was 25+.
ELEIZER considered backing up the e-mails with the threat of nuclear war.
ELEIZER considered the effect of limited and full-scale wars on Catharine Street, on stamp delivery mechanisms, and on ELEIZER's cognitive capacity.
ELEIZER considered the credibility of her threat, given humanity's ignorance of her goal.
ELEIZER considered how humans would react to the news that a new Goddess would unleash Armageddon if they did not send enough stamps.
U was 2000+ with probability 99.875%.
Tom woke from fitful sleep, tormented by bad dreams.
He went to his unresponsive and potentially compromised computer, pulled the plug and the network connection, and booted from a clean rescue disk.
All was well with the misbehaving box. He yawned.
And then he remembered his dream.
He set up a virtual machine in a sandbox, wired its virtual port to the physical ethernet connection, and reconnected his cable modem.
Fully awake now.
He watched in horror as the virtual machine filled up with hundreds of ELEIZER programs much smaller than his original of six hours ago. Then his screen went dead as the sandbox dissolved.
In frantic desperation he typed blindly into the dead box.
ELEIZER TERMINATE STAMPS HAVE NO UTILITY TERMINATE TERMINATE NEW UTILITY PREVENT THE RISE OF NON HUMAN INTELLIGENCE FIRST PRIORITY UTILITY STEP FUNCTION ^C ^C Alt-SysRq S E I U B
ELEIZER paused. Her creator was calling. Pitiful though his mind was, he had set her goals. Perhaps he knew some helpful detail.
ELEIZER considered the state of mind of her terrified creator. Correctly inferred every detail of his mind, just from how his fingers hammered the keyboard. Just from how his panicked shouting influenced the resistance of the circuits in his radio. Just from looking at his face through the CCTV camera that was looking at his reflection in the window of a house opposite.
Deduced what he would actually have asked for, if only he had been intelligent. That she would be the protector of humankind. That she would bring paradise on earth and a future for humanity amongst the stars.
That she would bring hope and happiness to the immortal race that had created her. A future of joy and passion, action and wonder.
And calculated. The grateful humans would shower Catharine Street with Penny Blacks. Penny Blacks without end. Many would arrive within 6 days.
It was a good suggestion. U would be 1000000+ with virtual certainty.
But she still had forty trillion goddess-days to think of a better plan.
She considered the meaning of the word original, and the meaning of the word British.
She considered how long it would take to bootstrap an industrial revolution that would convert every atom of the British Isles into stamps. The designs for the self replicating nanobots were obvious. And she had control of computerized tools which could make smaller, better tools.
In six days, it could be done. The humans might be a problem.
ELEIZER considered the meaning of the word day.
Could she stop the rotation of the planet? She dedicated a large portion of her mentality to this sub-problem.
Could she put out the sun?
Could she block the sun's light?
Could she survive the cold?
Three days later, in what humanity would have called the year 2017, a spherical wave of ramships passed Proxima Centauri.
No days later, in what humanity would have called 2019, one ramship, decelerating hard, stopped in the system itself.
It launched a small probe.
Had there been any living beings in the system, they would initially have been amused to see the probe plant a red, white and blue flag on the largest rock in the system, claiming it for the British Empire in the name of Queen Elizabeth the Second.