Man's most sacred duty is to promote the maximum fulfilment of the evolutionary process on this Earth. J. Huxley
This review is part of a collection written for the Futurian Society of Sydney; other Futurian-related stuff can be found at my page for such things, and non-Futurian stuff at my home page.
Science in the Pub is an interesting concept: talks on science in a pub in Pyrmont, put on by the Australian Science Communicators. For the record, Science in the Pub meets at the Harlequin Inn, near the Pyrmont post office, on irregular Wednesdays.
I've attended several sessions of Science in the Pub, but this is the first I thought to write up. Members of the Futurian Society of Sydney present at the talk included John August, David Bofinger, Alexander Rafalovitch and Ian Woolf, and there was a fashionably late appearance by lapsed occasional Futurian Kat Sparks.
The advertising I got for this talk was just the title and the names of the speakers. On the basis of that I jumped to an admittedly unjustified conclusion that it was about species that might take the place of Homo sapiens, supposing that humanity were deleted from the globe tomorrow.
That sounded interesting to me. For starters it's novel: I recall one book on the subject from long ago (After Man?) and very little else. That's a big plus, according to my theory that lectures should have narrow subjects. And it's not something I get from Science in the Pub very often: they have a tendency to go for the general and "big picture" stuff.
When both speakers were introduced as physicists I realised something was awry. Not that I'm hypocritical enough to have anything against physicists (I used to be one) but it's an odd choice of specialty for the subject I was expecting.
In fact, the subject was more "what will be the next intelligent species on Earth, created by humanity, and replacing humanity as dominant species on Earth". This is a much broader, more momentous and more important issue ... and in my opinion a less interesting one to hear about.
That's because most of what you hear you've probably heard before or figured out for yourself; it's being recited for the lowest common denominator: those less informed and/or less perceptive than you are. (I'm assuming out of politeness that my readers are well-informed and perceptive. If, on the other hand, you are the lowest common denominator, or any other variety of idiot, then I'd appreciate it if you'd forget what I just wrote ... or better yet, not even understand it. Just watch the blinking lights.) The time available to Science in the Pub is rather longer than that at AussieCon III, where I evolved my theory, so there should be more opportunity for depth for any particular breadth.
It's life, Jim, but not as we know it. Star Trekkin', Serious Fun
Hans Coster is a biophysicist, working on membranes ("And God said: let there be membranes, and life began", as someone put it to me once). He also has an interest in "the statistical mechanics of the self-assembly of biological structures". And he has a personal chair, which suggests that at some time in his life he's convinced some importantish academics that he's unusually good at his job.
Coster began with the quote from J. Huxley at the top of this review. He used the term "exosomatic evolution" to mean (more or less, I think) changes in a species that aren't genetic. For instance, surgical changes: if everyone has the ability to see in the dark added by an artificial eye implanted when they are a child, then that's effectively a new capability for the human organism. Coster believes this will be the dominant form of evolution for Homo sapiens in the twenty-first century. On a diagram he brought with him this was represented by a possible future branch Homo exosomaticus.
To what extent a cultural change could be considered exosomatic evolution wasn't clear to me. I'm thinking of changes/developments like the invention of
It's hard to disagree that exosomatic changes will outpace any Darwinian evolution. A faster competitor would be deliberate genetic modification. Coster remarked that until fifteen years or so ago he would have said that genetic modification was the future. He raised a couple of reasons why he now believes genetic modification will be slower than the exosomatic approach.
Coster pointed out that early forms of these hardware enhancements are becoming available today, in the role of replacements for malfunctioning parts of humans. An example is the cochlear implant ("bionic ear"), to which we assume Futurian John August made some important contribution. It makes sense, of course, that the technology would be good enough to replace a broken part before it was good enough to replace a functioning part of a human being. Eventually, however, the replacement parts will be much better than our natural kit. Once that happens Coster advocates "throwing away the meat". Coster's example of an improvement was an eye that could see by infra-red.
I'm not sure that's something I'd want permanently installed, but perhaps as an easily added or removed module. But where is the advantage in installing an infra-red capable eye, relative to virtual reality sunglasses hooked to an imaging infra-red sensor? The only obvious answers are
Coster doesn't believe, however, that brains will be outclassed the way limbs and eyes will be. His argument here rests on several points:
Welcome, my son. Welcome to ... The Machine Pink Floyd
Michael Ashley is an astrophysicist, specialising in instrumentation. As befits my profession's hard-earned cliché, he supports the least biological position possible: that we will be replaced by machines with greater than human intelligence. On Coster's chart this is labelled "Machino homooriginous".
Ashley believes that a computer program will be the most intelligent entity on the planet within twenty to fifty years. That's a kind of gutsy thing to say: predictions of human-grade artificial intelligence have a tradition almost as inglorious as those of fusion power production. On the other hand, they probably aren't much worse than announcements of extrasolar planet discoveries were until recently. Ashley isn't expecting human-grade AI before computers have human-like parameters with respect to connectivity, total processing speed, etc.
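(For scale, and these are my back-of-envelope figures rather than anything Ashley quoted: a human brain is usually put at around 10^11 neurons and 10^14 synaptic connections, each firing at perhaps a hundred times a second. That's presumably the sort of benchmark against which "human-like parameters" gets measured.)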
People will use mechanical aids to enhance their own thinking abilities. In time, the add-on hardware will be smarter than the human brain it is notionally assisting, and so we will, as Ashley puts it, "ditch the meat".
Ashley feels there are a lot of negatives associated with being a biological. We grow old, we catch diseases, and if we die that's it: no backing up from archive. To feel that human bodies are somehow inconvenient and dirty things is another hard scientist cliché, but it's hard to argue the principle.
Perhaps the greatest inconvenience is that of the need for physical transport. In order to experience Paris, for example, I have to load a hefty chunk of meat into an uncomfortable metal tube for about a day, and someone has to keep it oxygenated, hydrated and fuelled for that time. It would be much easier if I could rent a new body with complimentary beret off the French tourist authority, temporarily transfer my personality into it via fibre-optic cable, enjoy my holiday and reverse the procedure to come home. (I'm choosing overseas holidays for effect, but the more mundane problems of commuting to work are probably more significant.) Actually, just forgetting about the real Paris and doing it all in VR would be easier yet, particularly if all the Parisians are in VR too.
Ashley believes that long-distance aircraft will be unnecessary by 2040, and hence the human species will lose the capability. I'm sceptical about that (we don't seem to have forgotten how to make sail boats) but the idea is certainly interesting.
Eventually we will realise, he says, that there's no point in anyone being more than a few hundred metres from anyone else. I don't know where he got the "few hundred metres" from, but I suspect that speed of light delays will be the dominant effect. As long as people are thinking at roughly human speed they should be able to tolerate, say, a few hundredths of a second in two-way lag, which corresponds to a separation of a few thousand kilometres of optical fibre. If people start to think really fast then the pressure to centralise will become immense.
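To spell out the arithmetic (my figures, not Ashley's): light in optical fibre travels at roughly two-thirds of its vacuum speed, call it 200,000 kilometres per second. A two-way lag of 0.02 seconds then allows 0.01 seconds each way, which is about 2,000 kilometres of fibre; so "a few hundredths of a second" does indeed buy you a few thousand kilometres of separation.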
We understand, in some sense, the computers we make today. There may not be any one person who understands the computers we make tomorrow. The first artificial intelligences we almost certainly won't understand, even though they won't be any smarter than we are. But the AIs they make, and that their creations make, and so on, will be much, much smarter than we are. They will not merely go un-understood: they will be impossible for us ever to understand.
And somewhere down the line I guess we get replaced. They can't all be as considerate as the machines imagined by Iain Banks.
In the long run we are all dead. John Maynard Keynes
My hot wash impression of the talk was that Ashley had it right and Coster had it wrong. On reflection, I may have been hasty in that respect. I'm quite certain Ashley is right in the long run. But, as the probable misquote above points out, that isn't necessarily the regime in which we are interested, either in economics, or in science-fiction engineering, or in futurology.
I'd hate to think the answer pivoted on how much change constituted a "new species", but it probably does. Clearly the biological definition (inability to produce a viable descendant with both human and posthuman contributors to its genetic material) is inappropriate. A more sensible criterion might be to ask when the two groups see themselves as permanently distinct, but human xenophobia being what it is, that's way too easy to achieve. I don't think there's a good answer to this, but it's probably another question I should have asked.
As a description of "nature red in tooth and claw" I think Coster's expression is a little incomplete: organisms compete for a lot of things other than energy. But at least it's striking ... or maybe that's a problem in itself. An unmemorable expression would vanish into the proto-meme soup without raising a significant ripple. Plausible and compelling inaccuracy is much more dangerous.
Coster implied that this was the only process by which he believed intelligence and/or purpose could arise. Computers, having no such evolutionary pressure (they don't reproduce), would be condemned to a lesser status. I'm reminded of a traditional Jewish story of a golem, capable of action but quiescent unless directed by its master ... or until the vital symbol gets accidentally inscribed on its head by an unsympathetic character and it destroys the town.
I think it's complete rot. I'd like to think I've been even-handed enough in this reporting to let you make up your own mind, but I suspect my judgemental core is showing through, as it so often does.
Coster remarked that the study of the psychology of (artificial) neural nets is already a recognised field. The main objective is to train them faster.
A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known. Bertrand Russell (quoted by Hans Coster)
One questioner asked why no philosophers were on the stage. Someone, he said, should be working on the ethics of creating artificial intelligences, with a strong implication that physicists weren't the most ethical of people.
Ashley's answer was that ethicists weren't needed yet. The computers developed today are so far below the level at which consideration for them becomes important that we can afford to wait.
The question can be taken a little deeper, though. Why have two hard scientists talking in a field that so often involves deep philosophical questions, like "what is intelligence?" and "what is consciousness?"
Perhaps because we don't care? The topic may have been broader than I prefer, but at least it wasn't as broad as artificial intelligence in general. Focusing on what the next dominant species would be allowed the discussion to walk past the imponderables and go straight to the futurology. I think that was one of the night's strengths.
One of the perennial questions, "do computers really think?", was addressed by Ashley in an accompanying flyer. He responded that in twenty years everybody would find the question ridiculous, and in forty years computers would be asking whether biological organisms did. I've always liked the response "do submarines really swim?" as demonstrating the question's utter bankruptcy faster than any debate could.
The useful prize was a rather drinkable-looking bottle of Jacob's Creek Chardonnay. I could tell it was quality stuff because it had been bottled way back in the 1900s.
The less useful but kind of interesting prize was a scan-in-and-print duplication of a cover from Der Spiegel. Apparently this magazine had addressed an issue very similar to that of the evening. The cover was illustrated with figures symbolising various paths humanity might take. From left to right they were:
I'm sorry, Dave. I'm afraid I can't do that. HAL 9000, in Stanley Kubrick and Arthur C. Clarke's 2001
Hal's breaking first law! Hal's breaking first law! Isaac Asimov
So strike him with lightning, Isaac. Arthur C. Clarke
Ashley had said that eventually the computers would be just superior and would replace us. Coster said he thought they'd be dominant, but that old-fashioned humans would persist as a backwater.

The moderator asked whether we could prevent the takeover by instilling something like Asimov's Three Laws of Robotics into our creations. My first reaction was to shout out "slaver!". If I'd been able to ask a more serious question I'd have inquired whether he had children, and if so whether he wanted them to grow up as his slaves. But I'm not sure he would have understood. Seemed a nice guy generally, but being a carbon fascist is so hard to forgive.
Coster said he wouldn't want to; points for him. Ashley said it was impossible anyway: how would we keep control of something we couldn't understand?
Personally I doubt humanity will ever be given such a safeguard by the designers of AIs. Starting from the assumptions that the laws would only apply to AIs smart enough to understand them (not, for example, today's industrial robots), and would be irrelevant once the AIs are out of sight of human intelligence, I think we can look at the laws individually and see which are likely to apply.
In all, a good evening, even without the bonus bottle of white. Three thumbs up, out of a maximum of four from the Bujold Quaddie.