Futurian Review of Science in the Pub:

The Next Species

Will We Get To Choose?

Man's most sacred duty is to promote the maximum fulfilment of the evolutionary processes on this Earth. – J. Huxley

This review is part of a collection written for the Futurian Society of Sydney; other Futurian-related stuff can be found at my page for such things, and other non-Futurian-related stuff can be found at my home page.


Science in the Pub is an interesting concept: talks on science in a pub in Pyrmont, put on by the Australian Science Communicators. For the record, Science in the Pub meets at the Harlequin Inn, near the Pyrmont post office, on irregular Wednesdays.

I've attended several sessions of Science in the Pub, but this is the first I thought to write up. Members of the Futurian Society of Sydney present at the talk included John August, David Bofinger, Alexander Rafalovitch and Ian Woolf, and there was a fashionably late appearance by lapsed occasional Futurian Kat Sparks.

The advertising I got for this talk was just the title and the names of the speakers. On the basis of that I jumped to an admittedly unjustified conclusion that it was about species that might take the place of Homo sapiens, supposing that humanity were deleted from the globe tomorrow.

That sounded interesting to me. For starters it's novel: I recall one book on the subject from long ago (After Man?) and very little else. That's a big plus, according to my theory that lectures should have narrow subjects. And it's not something I get from Science in the Pub very often: they have a tendency to go for the general and "big picture" stuff.

When both speakers were introduced as physicists I realised something was awry. Not that I'm hypocritical enough to have anything against physicists (I used to be one) but it's an odd choice of specialty for the subject I was expecting.

In fact, the subject was more "what will be the next intelligent species on Earth, created by humanity, and replacing humanity as dominant species on Earth". This is a much broader, more momentous and more important issue ... and in my opinion a less interesting one to hear about.

That's because most of what you hear you've probably heard before or figured out for yourself, and is being recited for the lowest common denominator: those less informed and/or less perceptive than you are. (I'm assuming out of politeness that my readers are well-informed and perceptive. If, on the other hand, you are the lowest common denominator, or any other variety of idiot, then I'd appreciate it if you'd forget what I just wrote ... or better yet, not even understand it. Just watch the blinking lights.) The time available to Science in the Pub is rather longer than that at AussieCon III, where I evolved my theory, so there should be more opportunity for depth for any particular breadth.

The Biophysicist's Tale

It's life, Jim, but not as we know it. – Star Trekkin', Serious Fun

Hans Coster is a biophysicist working on membranes ("And God said: let there be membranes, and life began", as someone once put it to me). He also has an interest in "the statistical mechanics of the self-assembly of biological structures". And he has a personal chair, which suggests that at some point in his life he's convinced some importantish academics that he's unusually good at his job.

Coster began with the quote from J. Huxley at the top of this review. He used the term "exosomatic evolution" to mean (more or less, I think) changes in a species that aren't genetic. For instance, surgical changes: if everyone gains the ability to see in the dark from an artificial eye implanted in childhood, then that's effectively a new capability for the human organism. Coster believes this will be the dominant form of evolution for Homo sapiens in the twenty-first century. On a diagram he brought with him this was represented by a possible future branch, Homo exosomaticus.

To what extent a cultural change could be considered exosomatic evolution wasn't clear to me. I'm thinking of changes/developments like the invention of

I suppose I should have asked Coster whether these were included. Alas, I was too slow and stupid and probably deserve to be replaced for it.

It's hard to disagree that exosomatic changes will outpace any Darwinian evolution. A faster competitor would be deliberate genetic modification. Coster remarked that until fifteen years or so ago he would have said that genetic modification was the future. He raised a couple of reasons why he now believes that this would be slower than the exosomatic approach.

Someone I talked with after the panel said that in the future a lot of genetic manipulation will be done on adults. That's a substantial rebuttal of both points. But I still suspect that there will be a lot of applications where you need or want to start from scratch. Of course, those are exactly the applications where hardware add-ins won't be appropriate. Hardware versus genetics won't be an all-or-nothing competition, so the interesting question is where the balance point will lie, and which technology will have the greatest impact in changing us into something that isn't exactly human.

Coster pointed out that the early forms of these hardware enhancements are becoming available today, in the role of replacements for malfunctioning parts of humans. An example is the cochlear implant ("bionic ear"), to which we assume Futurian John August made some important contribution. It makes sense, of course, that the technology would be good enough to replace a broken part before it was good enough to replace a functioning one. Eventually, however, the replacement parts will be much better than our natural kit. Once that happens Coster advocates "throwing away the meat". Coster's example of an improvement was an eye that could see in infra-red.

I'm not sure that's something I'd want permanently installed, but perhaps as an easily added or removed module. But where is the advantage in installing an infra-red capable eye, relative to virtual reality sunglasses hooked to an imaging infra-red sensor? The only obvious answers are

I guess either of these might be good enough, especially in the longer run.

Coster doesn't believe, however, that brains will be outclassed the way limbs and eyes will be. His argument here rests on several points:

I'm very sceptical about the second point in the long run: it seems unlikely meat will always be the calculating material of choice. On the other hand, the talk wasn't about the long run. The subject was the next species, and while that term is hopelessly vague it does seem to imply the next kind of intelligence that isn't People Like Us.

The Astrophysicist's Tale

Welcome, my son. Welcome to ... The Machine – Pink Floyd

Michael Ashley is an astrophysicist, specialising in instrumentation. As befits my profession's hard-earned cliché, he supports the least biological position possible: that we will be replaced by machines with greater than human intelligence. On Coster's chart this is labelled "Machino homooriginous".

Ashley believes that a computer program will be the most intelligent entity on the planet within twenty to fifty years. That's a gutsy thing to say: predictions of human-grade artificial intelligence have a tradition almost as inglorious as those of fusion power production. On the other hand, they probably aren't much worse than announcements of extrasolar planet discoveries were until recently. Ashley isn't expecting human-grade AI before computers have human-like parameters with respect to connectivity, total processing speed, and so on.

People will use mechanical aids to enhance their own thinking abilities. In time, the add-on hardware will be smarter than the human brain it is notionally assisting, and so we will, as Ashley puts it, "ditch the meat".

Ashley feels there are a lot of negatives associated with being biological. We grow old, we catch diseases, and if we die that's it: no backing up from archive. Feeling that human bodies are somehow inconvenient and dirty things is another hard-scientist cliché, but it's hard to argue with the principle.

Perhaps the greatest inconvenience is that of the need for physical transport. In order to experience Paris, for example, I have to load a hefty chunk of meat into an uncomfortable metal tube for about a day, and someone has to keep it oxygenated, hydrated and fuelled for that time. It would be much easier if I could rent a new body with complimentary beret off the French tourist authority, temporarily transfer my personality into it via fibre-optic cable, enjoy my holiday and reverse the procedure to come home. (I'm choosing overseas holidays for effect, but the more mundane problems of commuting to work are probably more significant.) Actually, just forgetting about the real Paris and doing it all in VR would be easier yet, particularly if all the Parisians are in VR too.

Ashley believes that long-distance aircraft will be unnecessary by 2040, and hence the human species will lose the capability. I'm sceptical about that (we don't seem to have forgotten how to make sail boats) but the idea is certainly interesting.

Eventually we will realise, he says, that there's no point in anyone being more than a few hundred metres from anyone else. I don't know where he got the "few hundred metres" from, but I suspect that speed of light delays will be the dominant effect. As long as people are thinking at roughly human speed they should be able to tolerate, say, a few hundredths of a second in two-way lag, which corresponds to a separation of a few thousand kilometres of optical fibre. If people start to think really fast then the pressure to centralise will become immense.
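A rough back-of-the-envelope check (my own numbers, not anything Ashley quoted): take the signal speed in optical fibre as about two-thirds of the vacuum speed of light, v ≈ 2 × 10^5 km/s, and a tolerable two-way lag of t ≈ 0.02 s. The allowed one-way separation is then

\[
d \approx \frac{v\,t}{2} \approx \frac{(2\times10^{5}\ \text{km/s})\times(0.02\ \text{s})}{2} = 2000\ \text{km},
\]

which is consistent with the "few thousand kilometres" figure above.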

We understand, in some sense, the computers we make today. There may not be any one person who understands the computers we make tomorrow. We almost certainly won't understand the first artificial intelligences, even though they won't be any smarter than we are. But the AIs they make, and that their creations make, and so on, will be much, much smarter than we are. Not only will we not understand them: it will be impossible for us ever to understand them.

And somewhere down the line I guess we get replaced. They can't all be as considerate as the machines imagined by Iain Banks.

My Scorecard

In the long run we are all dead. – John Maynard Keynes

My hot wash impression of the talk was that Ashley had it right and Coster had it wrong. On reflection, I may have been hasty in that respect. I'm quite certain Ashley is right in the long run. But, as the probable misquote above points out, that isn't necessarily the regime in which we are interested, either in economics, or in science-fiction engineering, or in futurology.

I'd hate to think the answer pivoted on how much change constituted a "new species", but it probably does. Clearly the biological definition (inability to produce viable, fertile descendants with both human and posthuman contributors to their genetic material) is inappropriate. A more sensible criterion might be to ask when the two groups see themselves as permanently distinct, but human xenophobia being what it is, that's way too easy to achieve. I don't think there's a good answer to this, but it's probably another question I should have asked.

En Passant

Slicing the Free Energy Cake

Coster raised an interesting objection to Ashley's thesis, certainly not one that ever would have occurred to me. Human intelligence, he said, was the consequence of evolutionary competition. His metaphor for this is "slicing the free energy cake". I assume "free energy" is being used in its correct thermodynamic sense.

As a description of "nature red in tooth and claw" I think it's a little incomplete: organisms compete for a lot of things other than energy. But at least it's striking ... or maybe that's a problem in itself. An unmemorable expression would vanish into the proto-meme soup without raising a significant ripple. Plausible and compelling inaccuracy is much more dangerous.

Coster implied that this was the only process by which he believed intelligence and/or purpose could arise. Computers, having no such evolutionary pressure (they don't reproduce), would be condemned to a lesser status. I'm reminded of a traditional Jewish story of a golem, capable of action but quiescent unless directed by its master ... or until the vital symbol gets accidentally inscribed on its head by an unsympathetic character and it destroys the town.

I think it's complete rot. I'd like to think I've been even-handed enough in this reporting to let you make up your own mind, but I suspect my judgemental core is showing through, as it so often does.

Immortal Verse

One of Science in the Pub's less successful concepts is to have every speaker submit a short poem. Predictably, the vast bulk are worthless drivel. Ashley dodged this problem by having his laptop write the poem, from a one line seed he provided. Entertaining ... but the output was still drivel. Oh well, early days.

I Don't Want To Compute Any More

Ashley thinks AI mental problems, perhaps leading to suicide, are likely to be a real problem. If I understood correctly there are two cases where this might be important.

Coster remarked that the study of the psychology of (artificial) neural nets is already a recognised field. The main objective is to train them faster.

Unanswered Questions

A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known. – Bertrand Russell (quoted by Hans Coster)

One questioner asked why no philosophers were on the stage. Someone, he said, should be working on the ethics of creating artificial intelligences, with a strong implication that physicists weren't the most ethical of people.

Ashley's answer was that ethicists weren't needed yet. The computers developed today are so far below the level at which consideration for them becomes important that we can afford to wait.

The question can be taken a little deeper, though. Why have two hard scientists talking in a field that so often involves deep philosophical questions, like "what is intelligence?" and "what is consciousness?"

Perhaps because we don't care? The topic may have been broader than I'd have preferred, but at least it wasn't as broad as artificial intelligence in general. Focusing on what would be the next dominant species allowed the discussion to walk past the imponderables and go straight to the futurology. I think that was one of the night's strengths.

One of the perennial questions, "do computers really think?", was addressed by Ashley in an accompanying flyer. He responded that in twenty years everybody would find the question ridiculous, and in forty years computers would be asking whether biological organisms did. I've always liked the response "do submarines really swim?" as demonstrating the question's utter bankruptcy faster than debate ever could.

Raffles

Walking in the door obliged you to buy five raffle tickets. Prizes included Science in the Pub beer glasses, Coster's poster of possible future paths, and a collection of secondary storage media (starting with 8" floppies) to illustrate progress in computing. Bizarrely, three people won two prizes each, out of a crowd measured in scores. Even more bizarrely, I was one of those people. I think it's the first raffle I've won in my life.

The useful prize was a rather drinkable-looking bottle of Jacob's Creek Chardonnay. I could tell it was quality stuff because it had been bottled way back in the 1900s.

The less useful but kind of interesting prize was a scan-in-and-print duplication of a cover from Der Spiegel. Apparently this magazine had addressed an issue very similar to that of the evening. The cover was illustrated with figures symbolising various paths humanity might take. From left to right they were:

Nice to see some imagination on a magazine cover, though I've no idea what I should do with it.

Science Fiction (Non-)References

I'm sorry, Dave. I'm afraid I can't do that. – HAL 9000, in Stanley Kubrick and Arthur C. Clarke's 2001
Hal's breaking first law! Hal's breaking first law! – Isaac Asimov
So strike him with lightning, Isaac. – Arthur C. Clarke

There was only one science fiction reference, so most of these are things a science-fiction-oriented person might have mentioned.

Learning to Be Greg Egan

I think Greg Egan's Learning to Be Me is the classic in the field of Coster's vision. I won't spoil the punch by saying more, but any trees pulped so you could read this book died not in vain.

The Singular Vernor Vinge

Vernor Vinge's singularity (from ... well, anything by Vernor Vinge, really). Marooned in Realtime is directly relevant to Coster's vision. The brilliant A Fire Upon the Deep addresses Ashley's.

John Varley of Kansas

John Varley's Eight Worlds series, particularly The Phantom of Kansas, is an excellent illustration of a society that has managed to correct most of the biological defects Ashley complained about.

Three Laws

Asimov was the only science fiction reference.

Ashley had said that eventually the computers would simply be superior and would replace us. Coster said he thought they'd be dominant, but old-fashioned humans would persist as a backwater. The moderator asked whether we could prevent the takeover by instilling something like Asimov's Three Laws of Robotics into our creations. My first reaction was to shout out "slaver!". If I'd been able to ask a more serious question I'd have inquired whether he had children, and if so whether he wanted them to grow up as his slaves. But I'm not sure he would have understood. He seemed a nice guy generally, but being a carbon fascist is so hard to forgive.

Coster said he wouldn't want to; points for him. Ashley said it was impossible anyway: how would we keep control of something we couldn't understand?

Personally, I doubt humanity will ever be given such a trust by the designers of AIs. Starting from the assumptions that the laws would only apply to AIs smart enough to understand them (e.g. not today's industrial robots), and that they would be irrelevant once the AIs are out of sight of human intelligence, I think we can look at the laws individually and see which are likely to apply.

In summary, I think Asimov's laws are too detached from reality to be useful futurology, though I still remember the stories with nostalgic fondness.

The Humanoid Jack Williamson

Jack Williamson's "humanoids" are superior AIs (though mostly in a physical way). Their creator programmed them to serve humanity, and they are rather too good at it for humanity's own good. A good illustration of Ashley's point that the Three Laws won't protect you against someone you don't fully understand. How do you know if there's a loophole?

Quaddie Futures Are Falling

Falling Free is one of Lois McMaster Bujold's lesser works (I think that's a kind way of putting it). It assumes that the bulk of humanity will be genetically unmodified, but that slaves modified for desirable characteristics will be created. An example is the "Quaddies", four-armed humans bred for free-fall construction work. They are made obsolete by a physics-oriented technological advance (artificial gravity) analogous to the "exosomatic" means of progress preferred by Coster.

Culture for Drones

Iain Banks' Culture novels are a good example of a society in which humans and human-level AIs coexist as equals. Of course, they do it on the sufferance of the much more intelligent Minds. Why don't the machines "ditch the meat"? "Call it sentiment."

The Integrated Man

I can't imagine what I was thinking when I wrote this heading, but it must have something to do with science fiction and the night's topic or I wouldn't have put it here, would I?

Thumbs

The moderator (Alf Conlon) was helpful and I much preferred his style to that of Science in the Pub's regular moderator last year. I'm even willing to forgive him his reactionary speciesism once I calm down. The speakers were intelligent and articulate and entertaining. The topic, though broader than I'd have liked, was narrower than most of those Science in the Pub have used in the past.

In all, a good evening, even without the bonus bottle of white. Three thumbs up, out of a maximum of four from the Bujold Quaddie.


I welcome feedback at David.Bofinger@dsto.defenceSpamProofing.gov.au (delete the spamproofing).


This page is hosted by GeoCities; in return for carrying their advertising they will give you a free home page much like mine. Everything on this site varies without notice, especially after I get feedback.