

N E T W O R L D

Machines can already tune into brain waves to bypass damaged nerves, and the day is coming when they may even plug into the brain itself. PETER THOMAS investigates


In the latest thriller to hit British screens, Keanu Reeves plays a man with a digital memory built into his brain. Using a neuroconnection in the back of his head, the eponymous hero, Johnny Mnemonic, can plug into any computer and store or download data. With this memory upgrade, his brain acts like the hard disc of a computer, and Johnny uses this ability to carry secret digital information around the world.

And while you settle down to enjoy Hollywood's take on the future, all over the world researchers really are working out how to connect the brain directly to digital devices such as computers, databases and video cameras. Their mission is to develop thought-controlled computers and memory upgrades that will make today's keyboards, monitors and other interfaces obsolete. It may even become possible to communicate over digital networks by thought alone.

But while all this is far-future stuff aimed at "enhancing" people and society as a whole sometime next century, the quest is already proving fruitful in the shorter term. So much so that if the researchers can perfect their ideas, quite soon they may be able to help people with disabilities use their brain waves to control wordprocessors and wheelchairs, or train pilots to fly fighter aircraft using simulators controlled by their minds.

Progress, however, is limited by how little we know about the way the brain works. There is one big problem facing researchers who want to connect the brain directly to digital devices--or, more prosaically, design some sort of brain-machine interface. Unlike a computer, the brain does not rely on a steady stream of digital bits and bytes. Connecting the two will not be easy.

One possibility is to measure the night-and-day activity inside our brains. This unceasing communication of billions of neurons produces bursts of electricity that can be measured outside the skull using an electroencephalograph or EEG. Interpreting these signals is extremely difficult because of the complex connections between neurons and the sheer volume of signals that the brain handles. But it is these signals that some researchers hope to use to control computers.

Neuroscientists have long known that the patterns of activity produced by the brain depend on its state. For example, when we relax, our brains produce signals with a frequency of between 8 and 13 hertz, known as alpha waves. But when we get excited, we produce beta waves of between 15 and 30 hertz, while other frequencies are associated with sleep or intellectual activity.

In theory, a computer monitoring these different EEG signals could be programmed to turn a light on or move a cursor to the left if it spotted a change from alpha to beta waves, for example. Controlling the computer would then be a relatively simple matter of training your mind to relax or become excited--a task already proven in experiments. Unfortunately, such a simple feedback system is too limited to provide a useful interface between humans and computers. So researchers have begun to look more closely at EEG signals to see whether there are more complex patterns that computers could respond to.
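
To make the idea concrete, here is a minimal sketch in Python (not part of any of the systems described here) of such a feedback loop: it estimates alpha- and beta-band power in a short window of EEG samples and issues a single "move left" command when beta activity overtakes alpha. The sampling rate, window length and decision rule are illustrative assumptions.

```python
import numpy as np

FS = 256            # assumed sampling rate in Hz
ALPHA = (8, 13)     # alpha band, as described above
BETA = (15, 30)     # beta band

def band_power(window, band, fs=FS):
    """Mean spectral power of `window` within a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def cursor_command(window):
    """Issue a command when the subject shifts from relaxed to excited."""
    alpha = band_power(window, ALPHA)
    beta = band_power(window, BETA)
    return "MOVE_LEFT" if beta > alpha else "IDLE"

# Example: one second of simulated 20 Hz ("excited") activity
t = np.arange(FS) / FS
print(cursor_command(np.sin(2 * np.pi * 20 * t)))   # -> MOVE_LEFT
```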

In Taiwan, for example, Shiao-Lin Lin and colleagues in the Department of Neurology at the National Taiwan University Hospital in Taipei have begun to uncover a deeper structure in the patterns. Shiao-Lin says that EEG signals appear to be extraordinarily complex. In 1993, the team discovered that the brain produces spikes of activity a fraction of a second after a mental event such as looking at a series of characters on a display. Similar spikes occur before an action takes place--when the brain is preparing to move the fingers holding a computer joystick, for example.

It may even be possible to link certain patterns of brain waves with specific mental tasks such as rotating an imaginary three-dimensional object. Shiao-Lin believes that if these signals can be fully analysed, a computer could be trained to recognise and act on characteristic patterns that correspond to specific thoughts.

Last year, at the University of Tottori, near Osaka in Japan, a team of computer scientists led by Michio Inoue took this idea further by analysing the EEG signals that correspond to a subject concentrating on a specific word. This research is designed to allow sufferers of degenerative nervous diseases who cannot control their bodies to communicate using thought.

The system depends on a database of EEG patterns taken from a subject concentrating on known words. To work out what the subject is thinking, the computer attempts to match their EEG signals with the patterns in the database. For the moment, the computer has a vocabulary of only five words and takes 25 seconds to make its guess. In tests, Inoue claims a success rate of 80 per cent, but he is working on improvements that will make the system even more accurate and hopes to have a version on the market within a year or two.
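
The matching step might look something like the following toy sketch, which is an assumption about the general approach rather than a description of Inoue's system: each word in the vocabulary is represented by a stored template, and an incoming EEG feature vector is assigned to the word whose template it correlates with best.

```python
import numpy as np

# Hypothetical template database: one averaged EEG feature vector per word.
# The five-word vocabulary, feature length and matching rule are assumptions.
rng = np.random.default_rng(0)
templates = {word: rng.standard_normal(128) for word in
             ["yes", "no", "water", "pain", "help"]}

def classify(eeg_features):
    """Return the vocabulary word whose template correlates best with the input."""
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda w: corr(templates[w], eeg_features))

# A noisy copy of the "water" template should still be recognised.
sample = templates["water"] + 0.3 * rng.standard_normal(128)
print(classify(sample))   # -> water
```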

EEG signals could be used in even more complex situations. Earlier this year at Imperial College, London, Stephen Roberts, a lecturer in neural computing, started to look at ways in which EEG signals can help people with disabilities control their wheelchairs. Roberts is concentrating on brain activity generated by the intention to move a limb--signals which are still produced even when movement does not take place.

But computer-based analyses of EEG signals are never 100 per cent accurate. "For control of a wheelchair, even 90 per cent accuracy is not good enough," says Roberts. To double-check someone's intention to move in a particular direction, his system also monitors eye movements. If the line of sight doesn't match the computer's EEG prediction, something is wrong. "It is better to stop the wheelchair for one second rather than make a false move," he says.
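
The veto logic itself is simple. A minimal sketch might look like this (the direction labels and function name are hypothetical; only the agree-or-stop rule comes from Roberts's description):

```python
STOP = "STOP"

def wheelchair_command(eeg_direction: str, gaze_direction: str) -> str:
    """Move only when the EEG prediction and the line of sight agree."""
    if eeg_direction == gaze_direction:
        return eeg_direction          # e.g. "LEFT", "RIGHT", "FORWARD"
    return STOP                       # better to pause than make a false move

print(wheelchair_command("LEFT", "LEFT"))     # -> LEFT
print(wheelchair_command("LEFT", "RIGHT"))    # -> STOP
```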

Elsewhere, at labs such as the Alternative Control Technology Laboratory at the Wright-Patterson Air Force base in Dayton, Ohio, researchers are investigating ways of controlling flight simulators using EEG signals.

Despite some successes, work like this has highlighted the obstacles that stand in the way of thought-controlled computers. Most people do not usually concentrate on the way their brain works, and at the Wright-Patterson base many subjects had difficulty consciously controlling the type of EEG signals their brains produce--a very different action from normal thought processes. In Japan, the sheer complexity of EEG signals is still proving a major headache to analyse, even using sophisticated computer-based techniques such as neural networks.

Both problems limit the range of useful tasks, suggesting that EEG signals may not be an effective way to perform the complex tasks that are routine using other existing interfaces such as a keyboard and mouse. Would it be possible to use an EEG interface to compose a letter? The ideal solution would be for the letter "e" to appear on a computer display when we think it. But detecting the appropriate EEG pattern or training people to produce other more recognisable patterns that would correspond to "e" (never mind the rest of the alphabet and the many other tasks that would be necessary) appears almost impossible. Is there a better way of achieving "mind over computer"?

Which brings us back to the far future and the researchers who believe that a far better way to connect the workings of our brain to the bits flowing around a computer is to wire them together directly. Not surprisingly, research into "neurocompatible interfaces" is in its infancy, although the basic techniques and technology are developing rapidly.

At the Department of Bioengineering at the University of Utah, Richard Normann's team has come closest to "jacking" into the brain. They have been developing ways of supplying video images directly to the brains of people who have lost their sight. Blindness provides an ideal test application for neuroprosthetics because most forms of blindness are caused by defects or damage to the eyes, leaving the brain's complex neural machinery of vision--the visual cortex--still working.

The visual system is especially useful because it is highly adaptive, intelligent and self-regulating. In 1974, William Dobelle, also at the University of Utah, found that direct stimulation of the visual cortex in blind people evokes "phosphenes"--points of light similar to those created by signals arriving from a properly working visual system. The results were encouraging: people were able to "read" Braille characters made up of phosphene patterns created by direct stimulation faster than they could read them with their fingertips.

More recent research in 1992 and 1993 at the National Institutes of Health and Johns Hopkins University in the US has centred on improving the quality of the artificial vision experienced by blind subjects ("Sight for sore eyes", New Scientist, 19 August 1995). All of this suggests that blind subjects may be able to adapt well to new forms of prosthetic visual systems.

Direct data

Normann's group has been developing devices that can be implanted directly into the brain. They consist of an array of 100 "needle" electrodes resembling a tiny hairbrush. Each needle is less than 2 millimetres long, isolated from its neighbour by a glass sheath and mounted on a silicon base about 4 millimetres square. The idea is to capture images using a video encoder, transform them into electrical signals and excite neurons in the visual cortex of the brain using the electrode array to produce an image directly in the mind.
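
As a rough illustration of that signal path, the sketch below reduces a greyscale camera frame to a 10 x 10 grid, one value per needle electrode, and scales each value to a stimulation level. The grid size matches the 100-electrode array; the block-averaging and the current range are assumptions made purely for illustration.

```python
import numpy as np

def frame_to_stimulation(frame, max_current_uA=20.0):
    """Downsample a greyscale frame to a 10 x 10 grid and map brightness
    to a stimulation level for each of the 100 electrodes."""
    h, w = frame.shape
    frame = frame[:h - h % 10, :w - w % 10]        # crop so 10 x 10 blocks fit exactly
    blocks = frame.reshape(10, frame.shape[0] // 10, 10, frame.shape[1] // 10)
    grid = blocks.mean(axis=(1, 3))                # average brightness per block
    return grid / 255.0 * max_current_uA           # 0..max current per electrode

frame = np.random.randint(0, 256, size=(120, 160)).astype(float)
print(frame_to_stimulation(frame).shape)           # -> (10, 10)
```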

At the moment, results seem to show that the approach is capable of creating artificial vision, even though it may appear to the subject like a grainy version of reality--similar to looking at the large scoreboards at football stadiums. Of course, Normann's work is confined to people who have no other hope of seeing again, and he finds it difficult to imagine healthy novelty-seekers risking such surgery.

Other research is approaching the great link-up from a different angle--using the body's existing information channels to the brain, such as the eyes and ears. Thad Starner, a researcher at the Media Lab at the Massachusetts Institute of Technology, believes that the surgically invasive technology needed to jack into the brain has yet to be demonstrated. Instead, he is working on "wearable computers": tiny microprocessors worn on the body that are in continuous communication with one another and can even connect to the Internet.

The basic building blocks for wearable computers are already around. Low-power, credit-card-sized computers with the power of 486 PCs are already on the market. The US Army has developed wearable computing and communications devices such as head-mounted displays, cameras and personal communicators to receive and transmit information on the battlefield. And Starner says the army has already tested "augmented soldiers" in the field.

According to Starner, the killer application will be augmented memory, rather like Johnny Mnemonic's except that the hard disc will be outside the brain. The idea is that a total recall system will selectively record a user's life using face recognition, voice recognition, and some sort of global positioning system to track location ("Don't forget your memory aide", New Scientist, 5 February 1994). Starner says that when greeting colleagues, your "remembrance agent" will recognise them and suggest the top five pieces of information relevant to the conversation.
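
A toy sketch of how such a remembrance agent might store and retrieve its snippets is shown below; the data model and the recency-based ranking are assumptions, not a description of Starner's software.

```python
from collections import defaultdict

notes = defaultdict(list)   # person -> list of (timestamp, snippet)

def remember(person: str, timestamp: float, snippet: str):
    """Record something the system saw or heard about a person."""
    notes[person].append((timestamp, snippet))

def suggest(person: str, top_n: int = 5):
    """When the face recogniser identifies a colleague, return the
    top_n most recent snippets about them (recency stands in for relevance)."""
    return [s for _, s in sorted(notes[person], reverse=True)[:top_n]]

remember("Alice", 1.0, "Working on head-mounted displays")
remember("Alice", 2.0, "Asked about the one-handed keyboard")
print(suggest("Alice"))
```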

Wearable computers provide several advantages. One is that the computer is always switched on and always immediately accessible. By contrast, a current palmtop computer must be opened and turned on, attention must be focused on its screen, and both hands are needed to operate it. "Compare this to a wearable computer with a head-up display and a one-handed keyboard," Starner suggests. "With my wearable I can store names and interesting snippets of conversations while shaking hands and maintaining eye contact during professional meetings."

The second advantage of wearable computers is consistency. Most interactions with computers using keyboards, pens, mice or whatever require training each time a new system is developed. Wearable computing promises a single consistent interface. "Since so much time is spent with the wearable interface, users tend to get very proficient with it and customise it to their needs. Indeed, it is worth their effort to customise the interfaces since these devices are designed for long-term, intimate relationships," says Starner.

Starner's approach--enhancing human abilities with computing technology--is in marked contrast to Normann's work, which is largely centred on replacing lost or missing functions. Somewhere in the middle is Chip Maguire. Based at the Royal Institute of Technology in Sweden, Maguire believes that wearable computers will be limited in the same way as EEG-controlled computers, and that the only way to get a usable human-computer interface is through direct connections with the brain. Maguire suggests that these would allow us to have a system installed inside our heads which would provide voice communications and an "eyes-up" display superimposing text and pictures on our normal vision.

The first group of people to use the devices will be the disabled, says Maguire, but eventually he believes that other people will also willingly undergo the surgery that neurocompatible computers require.

Robosoldier

One of the first groups of able-bodied volunteers will be the professional military, he thinks. After them might come those in information-intensive businesses such as foreign exchange dealing. Maguire expects that the first prototypes will be around in about five years and that military systems will appear within ten years. But other users may have to wait two or three decades before the technology becomes acceptable.

If Maguire is right, then life in the 21st century could become more complex than any Hollywood thriller. Aside from any moral and ethical objections, the concept of beings that are part-human, part-machine raises many practical issues.

For example, if we have software embedded in our brains, how do we ensure its quality and reliability? What happens when there is a new hardware upgrade or a new software release? What if somebody discovers a software bug or a design error? Even a Hollywood script writer would be hard-pressed to picture the consequences.


PETER THOMAS is Professor of Information Management at the University of the West of England, Bristol.
