thread one "clones/'perfect'mates/self-image"
thread two "expanding forever..."
thread three "trouble in digital heaven?"
thread four "Singularity: Just Say No!"
Moving away from sexuality, it is interesting to speculate on whether we would like people who were identical to ourselves. With uploading and duplication, it may be that we will be able to make copies of ourselves with ease in the future. This might be one of the primary forms of reproduction. You could have an "extended family" of duplicates of yourself, who have all diverged to various degrees, but who cooperate, play together, get together for family reunions, and generally support each other.
Some people don't seem to be the sort who would get along with copies of themselves. An extremely aggressive or selfish person would prefer the company of others who would be easy to manipulate, and so copies would lead to conflict. Someone who was suspicious and paranoid might develop fears that his copies were conspiring against him. A person who was emotionally unstable and prone to mood swings could find them amplified by being around other people who were the same way.
Other personalities would be more likely to be self-compatible. People who are easy-going, kind, generous and helpful would generally enjoy being around other people who had the same traits. Essentially, those characteristics which help people get along well with others would be good for self-compatibility.
There might also be people who have peculiar and anti-social personalities but who are also self-compatible. An introvert, shy and withdrawn around other people, could be very comfortable with his copies.
Do you think it is better to be the kind of person who gets along with himself? Should we strive to be self-compatible? Is it a sign of a personality flaw if you would not like your own copies?
Hal
-------------------------------------------------------------------------
Date: Mon, 28 Dec 1998 13:03:46 -0600
From: "Billy Brown" <bbrown@conemsco.com>
Subject: RE: clones/'perfect'-mates/self-image
Hal Finney wrote:
> You could have an "extended family" of duplicates of
> yourself, who have all diverged to various degrees, but who cooperate,
> play together, get together for family reunions, and generally support
> each other..
I suppose self-compatibility would be important if you wanted to make
such a 'duplicate family', but I have trouble envisioning a scenario in
which many people would actually do so. Copying personalities (presumably
via an upload/download process) requires that we have very advanced nanotechnology,
which in turn requires huge improvements in our ability to create complex
systems. As a result, it should be possible to do much better than
simply duplicating yourself.
If uploading is possible at all, it should be feasible to re-engineer an uploaded mind just like any other piece of software. You could run multiple instances of your sensory-processing and motor-control software in parallel, allowing you to control more than one body at once. For better fault tolerance you could even turn yourself into a collective mind, with local processing in each body so that they can function if your data links go down. Spread yourself across a few hundred humanoid bodies and a similar number of useful robots and/or vehicles, and you become very hard to kill. How's that for an interesting post-human mode of existence!
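*** Editor's sketch (not part of Billy's post): a toy illustration of the fault-tolerance pattern he describes -- a shared mind drives many bodies, and each body falls back to local processing when its data link goes down. All class and function names here are invented for illustration.

import random

class Body:
    """One remotely driven body with a minimal on-board controller."""
    def __init__(self, name):
        self.name = name
        self.link_up = True

    def local_policy(self, sensors):
        # Local processing that keeps the body functional on its own.
        return f"{self.name}: local reflex for {sensors}"

class CollectiveMind:
    """Central mind controlling several bodies in parallel."""
    def __init__(self, bodies):
        self.bodies = bodies

    def central_policy(self, body, sensors):
        return f"{body.name}: centrally planned action for {sensors}"

    def step(self, sensor_readings):
        actions = []
        for body, sensors in zip(self.bodies, sensor_readings):
            body.link_up = random.random() > 0.2  # simulate a flaky data link
            if body.link_up:
                actions.append(self.central_policy(body, sensors))
            else:
                actions.append(body.local_policy(sensors))  # graceful fallback
        return actions

mind = CollectiveMind([Body(f"body-{i}") for i in range(3)])
print(mind.step(["obstacle", "clear", "clear"]))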
Billy Brown, MCSE+I
bbrown@conemsco.com
-----------------------------------------------------------------------------------------------------------------
Date: Mon, 28 Dec 1998 13:19:39 -0800
From: Hal Finney <hal@rain.org>
Subject: RE: clones/'perfect'-mates/self-image
Billy Brown, <bbrown@conemsco.com>, writes:
> Hal Finney wrote:
> > You could have an "extended family" of duplicates of yourself, who
> > have all diverged to various degrees, but who cooperate, play
> > together, get together for family reunions, and generally support
> > each other..
>
> I suppose self-compatibility would be important if you wanted to make
> such a 'duplicate family', but I have trouble envisioning a scenario
> in which many people would actually do so. Copying personalities
> (presumably via an upload/download process) requires that we have very
> advanced nanotechnology, which in turn requires huge improvements in
> our ability to create complex systems. As a result, it should be
> possible to do much better than simply duplicating yourself.
It is not necessary to download; copying uploads should be much easier. Of course, you have to be able to upload in the first place, but that may not require advanced nanotechnology. It could be as simple as some kind of high-resolution MRI, or perhaps it could be done by freezing you, slicing you up and scanning in each slice, then running software which simulates the effect of undoing the freezing damage.
Any upload will probably require considerably more computing technology than is feasible today, but there are other paths than super nanotech.
> If uploading is possible at all, it should be feasible to re-engineer
> an uploaded mind just like any other piece of software. You could run
> multiple instances of your sensory-processing and motor-control
> software in parallel, allowing you to control more than one body at
> once. For better fault tolerance you could even turn yourself into a
> collective mind, with local processing in each body so that they can
> function if your data links go down. Spread yourself across a few
> hundred humanoid bodies and a similar number of useful robots and/or
> vehicles, and you become very hard to kill. How's that for an
> interesting post-human mode of existence!
It is interesting, but I question the assumption that uploading will automatically imply the ability to re-engineer minds.
Theoretically, uploading is a rather mechanical process: it is merely a matter of simulating a particular physical system (the brain/body) at a sufficient level of detail. The technical difficulties are a matter of sensing the object at that resolution, and having enough computer power to run the simulation. But perhaps these can be overcome.
Re-engineering brains requires a wholly different level of understanding. Uploading is like painting a copy of a Rembrandt. Re-engineering is like being Rembrandt. It is a creative action, not a mechanical one.
You would need detailed understanding of how the brain works in functional terms. You have to know what to tweak and how to tweak it. You would have to understand consciousness and how it is related to brain activity, a matter which appears intractably difficult today.
True, having the ability to upload may help with our understanding of
these issues, by giving access to neural function at a fine level of detail
(and possibly allowing experimentation on uploads). But I think
it is going to take a long time before we are able to re-engineer minds,
even once we have nanotech.
Hal
-----------------------------------------------------------------------------
Date: Mon, 28 Dec 1998 16:14:07 -0600
From: "Billy Brown" <bbrown@conemsco.com>
Subject: RE: clones/'perfect'-mates/self-image
Hal Finney wrote:
> It is not necessary to download; copying uploads should be
> much easier..
Yes, that's what I meant. Upload, make however many copies you want, then run them in VR or download to more-or-less-human bodies.
> Of course, you have to be able to upload in the first place, but that
> may not require advanced nanotechnology. It could be as simple as
> some kind of high-resolution MRI, or perhaps it could be done by
> freezing you, slicing you up and scanning in each slice, then running
> software which simulates the effect of undoing the freezing damage.
>
> Any upload will probably require considerably more computing
> technology than is feasible today, but there are other paths than
> super nanotech.
Running a brute-force simulation of the matter in the human mind, based on some kind of molecular simulation software, would take somewhere in the general neighborhood of 10^30 MFLOPS. In contrast, duplicating the actual processing the human brain performs would take something like 10^8 MFLOPS. With the usual assumption of a 2-year doubling time for computer power, we will be able to run that second program on a supercomputer by 2010, but the brute-force approach won't be possible until around 2160. Do you really think it's going to take us that long to build the first assembler?
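*** Editor's sketch (not part of Billy's post): the 2010 and 2160 dates follow from his MFLOPS figures plus the 2-year doubling assumption. The 1998 baseline of roughly 1.5 x 10^6 MFLOPS (about a top supercomputer of the day) is an assumption added here just to make the arithmetic reproducible.

import math

BASE_YEAR = 1998
BASE_MFLOPS = 1.5e6    # assumed ~1.5 TFLOPS supercomputer in 1998
DOUBLING_TIME = 2.0    # years per doubling, per the post

def year_reached(target_mflops):
    # Year when compute power first reaches the target, under steady doubling.
    doublings = math.log2(target_mflops / BASE_MFLOPS)
    return BASE_YEAR + doublings * DOUBLING_TIME

print(round(year_reached(1e8)))     # functional brain duplication: ~2010
print(round(year_reached(1e30)))    # brute-force molecular sim: ~2156, i.e. around 2160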
Besides, to make that curve hold we will need to start using nanotechnology to build computers by the middle of the century.
> Re-engineering brains requires a wholly different level of
> understanding.
>
> You would need detailed understanding of how the brain works in
> functional terms. You have to know what to tweak and how to tweak it.
> You would have to understand consciousness and how it is related to
> brain activity, a matter which appears intractably difficult today..
Understanding a human mind isn't nearly as hard as building a computer
capable of simulating the human brain in complete detail. The technology
required to keep Moore's law going through the 21st century would also
give us the ability to probe the workings of a living brain on a routine
basis.
This, in turn, would allow us to create computer simulations of its functional components (using advanced neural net systems, not chemistry simulations). Such simulations would require trivial amounts of computing power for the machines then in use, which means that researchers would be able to test alternative theories quickly and easily. Once you can do that, I have trouble seeing how it could take more than a few decades to unravel how the whole system works.
Even by a conservative estimate, that gives us the ability to design brain modifications by the end of the 21st century. At that point we are still 60 years shy of being able to run the brute-force sim. So far as I can see, anything that puts off the first development would delay the second by just as much. I think I feel safe making a call on this one...
Billy Brown, MCSE+I
bbrown@conemsco.com
---------------------------------------------------------------------------------
Date: Mon, 28 Dec 1998 23:02:22 +0100 (CET)
From: Eugene Leitl <root@lrz.uni-muenchen.de>
Subject: RE: clones/'perfect'-mates/self-image
On Mon, 28 Dec 1998, Hal Finney wrote:
> It is not necessary to download; copying uploads should be much easier.
> Of course, you have to be able to upload in the first place, but that
> may not require advanced nanotechnology. It could be as simple as some
Absolutely. One of the major points of uploading is that you don't need to make your reconstruction verbatim (in the flesh); you just run a number of (pretty smart) data filters over a (pretty large) dataset. One of the more trivial instances would seem to be the substitution of vitrification agents by pristine vitrified water purely in a computational model (such a filter does not require much intelligence; a human could certainly write it). Such an early filter stage is obviously useful in uploading, but absolutely crucial in nanoresurrection (vanilla cryonics), where a patient could be disassembled via micron-thin fronts of disassembly/reassembly, with the filtering done in the interim. In a sense, the patient is washed over with material waves of processing, during each of which he is piecewise yanked into the virtual realm and back.
> kind of high-resolution MRI, or perhaps it could be done by freezing you,
High-enough-resolution MRI sans sample destruction is physically infeasible. However, even now cryo AFM achieves molecular resolution on tissue cryosections, and this technique is in principle combinable with tip abrasion/freeze/UV etch. Making a molecular-resolution map of a vitrified tissue block is actually quite straightforward, i.e. you can already see all the necessary technologies, and assess the difficulties. It looks indeed quite doable, imo.
> slicing you up and scanning in each slice, then running software which
> simulates the effect of undoing the freezing damage.
Yep. Such filtering stages obviously need to be neuronal. You could imagine simultaneously applying a ball-shaped "3d retina" to each area of the voxel dataset, removing all kinds of artefacts, from low level to high.
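*** Editor's sketch (not part of Eugene's post): a toy version of sweeping a ball-shaped neighbourhood over a voxel dataset. A plain median filter stands in for the much smarter neuronal artefact-removal stage described above; the array size and the use of numpy/scipy are assumptions for illustration only.

import numpy as np
from scipy.ndimage import median_filter

def ball_footprint(radius):
    # Boolean mask of a ball of the given radius, in voxels.
    r = int(radius)
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (x * x + y * y + z * z) <= radius * radius

def denoise_volume(volume, radius=2):
    # Apply the ball-shaped neighbourhood ("3d retina") to every voxel,
    # replacing it with the neighbourhood median.
    return median_filter(volume, footprint=ball_footprint(radius))

# Example on a small synthetic volume; a real scan would be vastly larger.
scan = np.random.rand(64, 64, 64).astype(np.float32)
cleaned = denoise_volume(scan, radius=2)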
> Any upload will probably require considerably more computing technology
> than is feasible today, but there are other paths than super nanotech.
Any upload will _certainly_ require considerably more computing technology than is feasible today; indeed, the artificial reality renderer alone is considerably beyond the state of the art. So what? Molecular circuitry of any flavour is certainly feasible (look into a mirror if you don't believe me), whether Drexlerian or not. And recently the progress in Drexlerian nanotechnology is looking very good indeed. Suddenly, vitrification of macroscopic human organs appears within reach, while research in fullerene and diamondoid autoreplication by mechanosynthesis is simultaneously making giant strides forward. Interesting times, and all.
> > If uploading is possible at all, it should be feasible to re-engineer
> > an uploaded mind just like any other piece of software. You could
> > run multiple
Difficult. An uploaded mind is not exactly 'software', just a giant blob of data, being directly 'executed' by dedicated hardware. It is far from clear how you can mutate that relatively opaque blob of data towards a certain goal. Perhaps in increments, over a population...
> > instances of your sensory-processing and motor-control software in
> > parallel, allowing you to control more than one body at once. For
> > better fault
Err, there are no clean interfaces. You would have to restructure much of your mind. While I can see how to write a vitrification/scanning artifact removal filter, I cannot see how to constructively operate at such high abstraction levels. Either we learn the tricks necessary, or we need something quite beyond the human level to analyze, and to recast us in a new shape.
> > tolerance you could even turn yourself into a collective mind, with
> > local processing in each body so that they can function if your data
> > links go down. Spread yourself across a few hundred humanoid bodies
> > and a similar
In that context, the body is just a servo. If you are talking about a cluster mind (borganism), the body is highly secondary.
> > number of useful robots and/or vehicles, and you become very hard
> > to kill. How's that for an interesting post-human mode of existence!
>
> It is interesting, but I question the assumption that uploading will
> automatically imply the ability to re-engineer minds.
Agree. One could imagine some incremental morphing, but this seems to be an invasive enough process to blow away the personality we so laboriously set out to conserve during the upload.
> Theoretically, uploading is a rather mechanical process: it is merely
> a matter of simulating a particular physical system (the brain/body)
> at a sufficient level of detail. The technical difficulties are a
> matter of sensing the object at that resolution, and having enough
> computer power to run the simulation. But perhaps these can be
> overcome.
If you don't need the quantum level, the amount of computation necessary is large, but containable.
> Re-engineering brains requires a wholly different level of understanding.
> Uploading is like painting a copy of a Rembrandt. Re-engineering is
> like being Rembrandt. It is a creative action, not a mechanical one.
I don't quite agree. In theory, you could imagine a purely mechanical process transferring an upload from some low-level encoding (say, around the compartmental-simulation level, or even MD) to some more abstract, nearer-to-hardware level in a mindless, purely Darwinian process. I could imagine it working in increments, transmuting the wetware incrementally, block by block.
> You would need detailed understanding of how the brain works in
> functional terms. You have to know what to tweak and how to tweak it.
> You would have to understand consciousness and how it is related to
> brain activity, a matter which appears intractably difficult today.
Don't think so, for above reasons.
> True, having the ability to upload may help with our understanding of
> these issues, by giving access to neural function at a fine level of
> detail (and possibly allowing experimentation on uploads). But I
> think it is going to take a long time before we are able to
> re-engineer minds, even once we have nanotech.
How "long" is long? Remember, the time base ratio is certainly about 1:1 k, maybe even as high as 1:1 M, and these ALife golems won't certainly be long to wait for, nor likely to be terribly idle once there...
> Hal
>
ciao,
'gene
"Billy Brown" <bbrown@conemsco.com> writes:
> In the short term, yes. In the long term ( >10^50 years) no - when
> the average distance between adjacent particles is measured in light
> years, the idea of building anything complex enough to be alive looks
> pretty dubious.
Well, an entity composed of multi-million light year positronium pairs and faint gravity waves, thinking thoughts over eons in a silent and cold universe, doesn't strike me as that bizarre. Much weirder things are already happening in mathematics :-) Whether this is implementable is another question; we would need something like the billiard ball computer CA example to see if it is feasible according to known laws of physics, and then of course arguments for its practical implementability.
> > Even if that is true it doesn't change the problem of indefinite
> > survival. As far as I know nobody is suggesting steady state theories
> > at least, and without them you get a Dyson or Tipler choice, so to
> > say.
>
> My point is simply that it isn't productive to speculate on the terminal
> evolution of the universe based on a theory that is almost certainly
> incorrect. It is possible that some moderate adjustment of the big
> bang/inflation model will make it fit reality, but it is just as
> likely that the entire concept will have to be thrown out.
You seem to be assuming that if you remove the big bang model, you will also need to get rid of everything associated with it - including the expansion of the universe, apparently the dynamics of spacetime and everything else done in cosmology. It is a bit like saying "because we don't have a correct formulation of quantum gravity everything we know about quantum mechanics is wrong, so it is impossible to speculate on the future of solid state circuits".
Besides, I seriously doubt that the big bang theory is in that much trouble. So far none of the alternatives seems to manage without even weirder epicycles, nor do any of them have strong observational evidence.
> If that happens, who knows what the new parameters will be?
> Meanwhile, why generate angst over purely hypothetical events that
> might happen billions of years in the future?
Hmm, why are we debating uploading and the singularity on this list?
-----------------------------------------------------------------------
Anders Sandberg
Towards Ascension!
asa@nada.kth.se
http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
------------------------------------------------------------------------------
Date: Tue, 29 Dec 1998 03:03:52 -0600
From: "Joe E. Dees" <jdees0@students.uwf.edu>
Subject: Re: Expanding forever...
You'd better hope for an eternally expanding universe, even with the
attendant dangers of a Pynchonian Callistoish entropic demise (I prefer
the frenetic complex energy of the lease-breaking party, myself).
The main alternative, an eternally cycling Big Bang / Big Crunch cosmic
heartbeat (punctuated by the eternal recurrence of the Mother of all Singularities)
does not permit pattern, the sine qua non of consciousness, to pass its
superhot/supergravity Stygian microuniformchaosoup gate.
Joe
-------------------------------------------------------------------------------
Date: 29 Dec 1998 13:57:37 +0100
From: Anders Sandberg <asa@nada.kth.se>
Subject: Re: Expanding forever...
"Joe E. Dees" <jdees0@students.uwf.edu> writes:
> You'd better hope for an eternally expanding universe, even with the
> attendant dangers of a Pynchonian Callistoish entropic demise (I
> prefer the frenetic complex energy of the lease-breaking party,
> myself). The main alternative, an eternally cycling Big Bang / Big
> Crunch cosmic heartbeat (punctuated by the eternal recurrence of the
> Mother of all Singularities) does not permit pattern, the sine qua
> non of consciousness, to pass its superhot/supergravity Stygian
> microuniformchaosoup gate. Joe
Nice prose. But as far as I know the eternal cycle model has no support other than aesthetics; it is unclear if inflation or weird quantum gravity effects can reverse the contraction. The problem is also that you cannot avoid an Eternal Return in the cycling model, since the amount of information that can be brought through the near-singularities is finite.
However, at our present technological level we cannot do much about the expansion/contraction of spacetime (just you wait until I and Mitch get our hands on the Higgs fields, topology change and inflation! :-), so we can only analyse possible strategies in the different scenarios. Which is a useful exercise for studying the limits of intelligent systems anyway.
-----------------------------------------------------------------------
Anders Sandberg
Towards Ascension!
asa@nada.kth.se
http://www.nada.kth.se/~asa/
GCS/M/S
------------------------------------------------------------------------------
Transhuman Mailing List
Mr. Kitchen,
Thank you for your insights, which I find very helpful in more fully understanding the concept of superintelligence and brain uploading. However, I still question the value of "abandoning our biological bodies", as you so aptly described it.
Is it not true that nanotechnology will create a new type of hardware which will emulate biological systems in design and function? If this is true, then the natural electrochemical processes occurring within the human brain will be enhanced. The speed of natural electrochemical processes occurring in the human brain will be equivalent to the speed of a supercomputer (with the assistance of nanotechnology based on quantum computers).
Nanotechnology will allow us to create synthetic cells. Synthetic cells with built-in quantum computers will allow us to think "a million times faster than we do now" without complete abandonment of the biological human body. In terms of superintelligence, it is obvious that synthetic cells with built-in quantum computers integrated into human brain tissue will allow the input and output of the future internet directly into the human brain.
I fully agree that we need to accelerate and improve the learning curve. We should not have to "spend 1/3 of our life in school in order to be productive for the last 2/3", as you so aptly pointed out, Mr. Kitchen.
As we all know, the economic systems and institutions of mankind are currently experiencing FAILURE. I believe failure of the economic systems and institutions of mankind will be the impetus to push civilization rapidly toward TRANSHUMANISM early in the next century. Advancement in technology is already creating job displacement and unemployment. The more technological advancement that occurs, the greater the gulf between the rich and poor. Low-paying labor jobs that inadequately meet the needs of people are plentiful. The choice of mankind is either to live and grow OR to lay down and die.
Humans can learn at an accelerated rate with the help of synthetic cells with built-in quantum computers. Once more, I do not see a need to completely abandon the human body.
I do like the concept of possessing a universal body and having the ability to transfer into other types of systems -- systems that can adapt better to other types of environments. Obviously this ability will be extremely helpful for mankind to conquer space.
However, is there not a danger of becoming diluted and losing oneself? It is for this reason that I believe it will be necessary to retain one's biological human body. The biological human body is a constant and should never be replaced completely.
Mark
------------------------------------------------------------------------------
Mark,
I have a strong inclination to think that rather than sudden uploading into an entirely different physical substrate, the transition to superior hardware will be somewhat smoother, based on the fact that nanotech will probably rely on carbon as its atom of choice, as our own bodies and brains do. Please see my page on this topic at:
http://www.aci.net/planetp/biotech.html
Paul Hughes
-------------------------------------------------------------------------------
Date: Mon, 28 Dec 1998 21:11:25 +0100
From: christophe delriviere <darkmichet@bigfoot.com>
Subject: Re: >H Trouble in Digital heaven ??
Transhuman Mailing List
Mark wrote:
> However, is there not a danger of becoming diluted and losing oneself?
> It is for this reason that I believe it will be necessary to retain
> one's biological human body. The biological human body is a constant
> and should never be replaced completely.
Why be so fond of the current state of the human body? Why be so afraid of losing oneself? The human body is by definition loosely defined ;). It is something in constant evolution and change; there is nothing particular or magical in it. We consider the human body important only because we are "human bodies"... For myself, I consider that transhumanism is a doctrine about the vastening of the self by rational means, but it should also be a doctrine about the vastening of the global system (I repeat, I engage only myself in that). Why be locked into suboptimization?
Nanotechnology could certainly enhance human bodies in a rather "conservative" way, but it could very certainly provide a lot more "interesting" topologies more easily. You can imagine the automatic generation of a several-kilometer-wide optimized computation megastructure. Every brain-sized part of such a structure could probably be far better at thinking than a human brain. When you build such a structure and use it only to enhance a rather "primitive" human brain, that is obviously a suboptimization of the system.
About losing yourself, well... can you rationally define a concept of identity? I believe our bodies are changing at every moment... every moment we are something else. There is certainly some kind of continuity in our transformation, but does this continuity really matter? Is not the concept of identity some kind of illusion?
So for me, transhumanism as an optimization of the self is OK, but essentially because I want both local and global optimization according to the purposes "we" will dynamically generate. For me, a self is something defined by an abstract and arbitrary boundary. If I can't have a global optimization... well... too bad ;( I don't want to coerce people to act as I would. But I certainly would like to apply my principle of global/local optimization to my arbitrarily defined self, to the extent I'm willing to.
Of course I *believe* most transhumanists are as fond as you are of their human bodies and don't care for or want such a drastic optimization of the system. I understand that pretty well, because I also have some important fears about what I assert here, and that is obviously natural, because we as beings evolved to enhance our survival capabilities. It is very strange and paradoxical that the people most able, at this stage, to make me doubt my interest in global and drastic optimization are transhumanists. Probably because they have thought a lot about the subject ;). Before knowing about transhumanism and reading the discussions here and on the extropian mailing list, I was certainly far less open-minded on the subject ;). When I was a child, with very little SciFi and scientific knowledge, I dreamed of building huge robotic colonies with artificial intelligence in other solar systems... Of course now I know the terminology, Von Neumann probes and replicators, and I've read about people like Hans Moravec (I think his wife is Christian, if my memory is good) and Hugo de Garis....
So, a deep question here... what should prevail? Local or global optimization? Could the list members help me to see it more clearly within my thoughts and my belief system?
Delriviere
Christophe
-------------------------------------------------------------------------------
Dr. K. Eric Drexler (he prefers "Eric") is now with the Foresight Institute:
http://www.foresight.org
You may wish to look over his newer work. "Engines of Creation" was
a remarkable work, but the much newer "Unbounding the Future" is more realistic.
Both are available online for free at that site,
but it may be cheaper to purchase the paperback if you pay connect
time charges.
To answer your question, many transhumanists feel that there are many possible technologies that could be used to achieve major augmentation of human intelligence. Different transhumanists are knowledgeable about or interested in different technologies, and therefore investigate them. Some of us feel that whatever augmentation occurs first will very rapidly lead to a superintelligence that will then be able to advance the remaining technologies (the "singularity" scenario), while others believe in a more traditional and more gradual scenario. A bunch of us are software professionals and are therefore comfortable with the uploading scenario.
I personally feel that some of the nanotech
problems will require a major increase in computing capacity, and
that this increase is likely to lead to an SI based on human-computer collaboration
before nanotech is achieved.
***
I also feel that the resulting SI will immediately complete the design and implementation of nanotech in order to build even more intelligence. I refer to myself as a radical singularitarian. Others may think of me as a whacko :-)
*** This is a classic description of how mind uploading, the technological singularity and superintelligence are connected in Transhumanist thinking.
------------------------------------------------------------------------------
Date: Sun, 17 Jan 1999 08:53:03 -0800
From: hal@rain.org
Subject: Re: Singularity: Just Say No!
Anders Sandberg, <asa@nada.kth.se>, writes:
> It is a bit ironic that borganisms are so often suggested, since they
> appear to be harder to implement than individuals. In a very complex
> individual the concept of self will likely be rather complex (in some
> sense we are borganisms already, collective minds made up of
> semi-independent brain systems), but connecting minds evolved to be
> individual in a useful way is likely rather complex; it is likely
> easier to extend them instead.
This is a good way to look at the mind: separate systems, some evolved earlier (the "reptilian mind"), some added later (the mammalian cortex).
Imagine reptilian minds looking forward with horror to a future where they were taken over, suppressed, and dominated by a higher-level mammalian mind. They might see this as an oppressive future in which they would lose their reptilian individuality. But actually, from our perspective as integrated minds, we see that the added layers give us more capability, more understanding, and a fuller experience.
Calls to "fight the future" have something of the same flavor. Are we so sure that our present minds have reached the peak of perfection that we should see any extensions to them as threats? What is the point of going forward if we have to keep our minds static?
I see Extropian philosophy as adopting the "embrace and extend" perspective towards mind enhancements. Yes, it may leave us with a new kind of mentality in which our current minds are just part of a much larger and richer whole. Some people view such a change as being so major that the person they are today has effectively died. But I think we have to accept that the future will bring major changes, and that we should adapt to them and welcome the new opportunities.
Hal