The Fabric of Reality
David Deutsch


particles, such a computation could easily, as I have explained, become
‘intractable’. Yet since we could readily obtain its result just by performing
this experiment, it is not really intractable after all. We must now be more
careful with our terminology. Evidently there are computational tasks that are
‘intractable’ if we attempt to perform them using any existing computer, but
which would be tractable if we were to use quantum-mechanical objects as
special-purpose computers. (Notice that the fact that quantum phenomena
can be used to perform computations in this way depends on their not being
subject to chaos. If the outcome of computations were an inordinately
sensitive function of the initial state, ‘programming’ the device by setting it in
a suitable initial state would be an impossibly difficult task.)
Using a quantum auxiliary device in this way might be considered cheating,
since any environment is obviously much easier to render if one has access
to a spare copy of it to measure during the rendering! However, Feynman
conjectured that it would not be necessary to use a literal copy of the
environment being rendered: that it would be possible to find a much more
easily constructed auxiliary device whose interference properties were
nevertheless analogous to those of the target environment. Then a normal
computer could do the rest of the rendering, working through the analogy
between the auxiliary device and the target environment. And, Feynman
expected, that would be a tractable task. Furthermore, he conjectured,
correctly as it turned out, that all the quantum-mechanical properties of any
target environment could be simulated by auxiliary devices of a particular
type that he specified (namely an array of spinning atoms, each interacting
with its neighbours). He called the whole class of such devices a universal
quantum simulator.
But it was not a single machine, as it would have to be in order to qualify as
a universal computer. The interactions that the simulator’s atoms would have
to undergo could not be fixed once and for all, as in a universal computer,
but would have to be re-engineered for the simulation of each target
environment. But the point of universality is that it should be possible to
program a single machine, specified once and for all, to perform any possible
computation, or render any physically possible environment. In 1985 I
proved that under quantum physics there is a universal quantum computer.
The proof was fairly straightforward. All I had to do was mimic Turing’s
constructions, but using quantum theory to define the underlying physics
instead of the classical mechanics that Turing had implicitly assumed. A
universal quantum computer could perform any computation that any other
quantum computer (or any Turing-type computer) could perform, and it could
render any finite physically possible environment in virtual reality. Moreover,
it has since been shown that the time and other resources that it would need
to do these things would not increase exponentially with the size or detail of
the environment being rendered, so the relevant computations would be
tractable by the standards of complexity theory.
The classical theory of computation, which was the unchallenged foundation
of computing for half a century, is now obsolete except, like the rest of
classical physics, as an approximation scheme. 
The theory of computation is
now the quantum theory of computation. I said that Turing had implicitly
used ‘classical mechanics’ in his construction. But with the benefit of
hindsight we can now see that even the classical theory of computation did
not fully conform to classical physics, and contained strong adumbrations of
quantum theory. It is no coincidence that the word 
bit, meaning the smallest
possible amount of information that a computer can manipulate, means
essentially the same as 
quantum, a discrete chunk. Discrete variables
(variables that cannot take a continuous range of values) are alien to
classical physics. For example, if a variable has only two possible values,
say 0 and 1, how does it ever get from 0 to 1? (I asked this question in
Chapter 2.) In classical physics it would have to jump discontinuously, which
is incompatible with how forces and motions work in classical mechanics. In
quantum physics, no discontinuous change is necessary — even though all
measurable quantities are discrete. It works as follows.
Let us start by imagining some parallel universes stacked like a pack of
cards, with the pack as a whole representing the multiverse. (Such a model,
in which the universes are arranged in a sequence, greatly understates the
complexity of the multiverse, but it suffices to illustrate my point here.) Now
let us alter the model to take account of the fact that the multiverse is not a
discrete set of universes but a continuum, and that not all the universes are
different. In fact, for each universe that is present there is also a continuum
of identical universes present, comprising a certain tiny but non-zero
proportion of the multiverse. In our model, this proportion may be
represented by the thickness of a card, where each card now represents all
the universes of a given type. However, unlike the thickness of a card, the
proportion of each type of universe changes with time under quantum-
mechanical laws of motion. Consequently, the proportion of universes having
a given property also changes, and it changes continuously. In the case of a
discrete variable changing from 0 to 1, suppose that the variable has the
value 0 in all universes before the change begins, and that after the change,
it has the value 1 in all universes. During the change, the proportion of
universes in which the value is 0 falls smoothly from 100 per cent to zero,
and the proportion in which the value is 1 rises correspondingly from zero to
100 per cent. Figure 9.4 shows a multiverse view of such a change.
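In standard quantum notation (my illustration, not the book's), such a smooth
transition can be written as a state

\[
|\psi(t)\rangle \;=\; \cos\!\Big(\frac{\pi t}{2T}\Big)\,|0\rangle \;+\;
\sin\!\Big(\frac{\pi t}{2T}\Big)\,|1\rangle, \qquad 0 \le t \le T,
\]

where the squared amplitudes \(\cos^2(\pi t/2T)\) and \(\sin^2(\pi t/2T)\) play
the role of the proportions of universes in which the bit has the value 0 and
the value 1 respectively: the first falls smoothly from 1 to 0 while the second
rises from 0 to 1, with no discontinuous jump at any instant.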
It might seem from Figure 9.4 that, although the transition from 0 to 1 is
objectively continuous from the multiverse perspective, it remains
subjectively discontinuous from the point of view of any individual universe
— as represented, say, by a horizontal line halfway up Figure 9.4. However,
that is merely a limitation of the diagram, and not a real feature of what is
happening. Although the diagram makes it seem that there is at each instant
a particular universe that ‘has just changed’ from 0 to 1 because it has just
‘crossed the boundary’, that is not really so. It cannot be, because such a
universe is strictly identical with every other universe in which the bit has
value 1 at that time. So if the inhabitants of one of them were experiencing a
discontinuous change, then so would the inhabitants of all the others.
Therefore none of them can have such an experience. Note also that, as I
shall explain in Chapter 11, the idea of anything 
moving across a diagram
such as Figure 9.4, in which time is already represented, is simply a mistake.
At each instant the bit has value 1 in a certain proportion of universes and 0
in another. All those universes, at all those times, are already shown in
Figure 9.4. They are not moving anywhere!
FIGURE 9.4 Multiverse view of how a bit changes continuously from 0 to 1.
Another way in which quantum physics is implicit in classical computation is
that all practical implementations of Turing-type computers rely on such
things as solid matter or magnetized materials, which could not exist in the
absence of quantum-mechanical effects. For example, any solid body
consists of an array of atoms, which are themselves composed of electrically
charged particles (electrons, and protons in the nuclei). But because of
classical chaos, no array of charged particles could be stable under classical
laws of motion. The positively and negatively charged particles would simply
move out of position and crash into each other, and the structure would
disintegrate. It is only the strong quantum interference between the various
paths taken by charged particles in parallel universes that prevents such
catastrophes and makes solid matter possible.
Building a universal quantum computer is well beyond present technology.
As I have said, detecting an interference phenomenon always involves
setting up an appropriate interaction between 
all the variables that have
been different in the universes that contribute to the interference. The more
interacting particles are involved, therefore, the harder it tends to be to
engineer the interaction that would display the interference — that is, the
result of the computation. Among the many technical difficulties of working at
the level of a single atom or single electron, one of the most important is that
of preventing the environment from being affected by the different interfering
sub-computations. For if a group of atoms is undergoing an interference
phenomenon, and they differentially affect other atoms in the environment,
then the interference can no longer be detected by measurements of the
original group alone, and the group is no longer performing any useful
quantum computation. This is called 
decoherence. I must add that this
problem is often presented the wrong way round: we are told that ‘quantum
interference is a very delicate process, and must be shielded from all outside
influences’. This is wrong. Outside influences could cause minor
imperfections, but it is the effect of the quantum computation on the outside
world that causes decoherence.
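To make this concrete, here is a minimal numerical sketch; it is my own
illustration in standard quantum formalism, not anything specified in the text.
A qubit in an equal superposition of 0 and 1 has interference terms (the
off-diagonal entries of its density matrix); the moment an interaction copies
the qubit's value into even one environment bit, those terms vanish:

```python
# Minimal sketch of decoherence: a qubit's interference terms disappear
# once its two branches imprint themselves on an environment qubit.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Qubit in superposition (|0> + |1>)/sqrt(2); environment starts in |0>.
qubit = (ket0 + ket1) / np.sqrt(2)
env = ket0

# Joint state before any interaction: a product state, qubit (x) env.
before = np.kron(qubit, env)

# Interaction that copies the qubit's value into the environment
# (a CNOT with the qubit as control), so the branches now differ
# in the environment as well as in the qubit.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
after = cnot @ before

def reduced_density_matrix(joint):
    """Trace out the environment, leaving the qubit's 2x2 density matrix."""
    rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

print(reduced_density_matrix(before))  # [[0.5, 0.5], [0.5, 0.5]]
print(reduced_density_matrix(after))   # [[0.5, 0. ], [0. , 0.5]]
```

The off-diagonal entries are 0.5 while the two branches differ only in the
qubit itself, and drop to zero once the branches also differ in the
environment: measurements on the original qubit alone can then no longer
reveal any interference.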
Thus the race is on to engineer sub-microscopic systems in which
information-carrying variables interact among themselves, but affect their
environment as little as possible. Another novel simplification, unique to the
quantum theory of computation, partly offsets the difficulties caused by
decoherence. It turns out that, unlike classical computation, where one
needs to engineer specific classical logic elements such as 
and, or and not,
the precise form of the interactions hardly matters in the quantum case.
Virtually any atomic-scale system of interacting bits, so long as it does not
decohere, could be made to perform useful quantum computations.
Interference phenomena involving vast numbers of particles, such as
superconductivity and superfluidity, are known, but it seems that none of
them can be used to perform any interesting computations. At the time of
writing, only single-bit quantum computations can be easily performed in the
laboratory. Experimentalists are confident, however, that two- and higher-bit
quantum gates (the quantum equivalent of the classical logical elements) will
be constructed within the next few years. These are the basic components of
quantum computers. Some physicists, notably Rolf Landauer of IBM
Research, are pessimistic about the prospects for further advances after
that. They believe that decoherence will never be reduced to the point where
more than a few consecutive quantum-computational steps can be
performed. Most researchers in the field are much more optimistic (though
perhaps that is because only optimistic researchers choose to work on
quantum computation!). Some special-purpose quantum computers have
already been built (see below), and my own opinion is that more complex
ones will appear in a matter of years rather than decades. As for the
universal quantum computer, I expect that its construction too is only a
matter of time, though I should not like to predict whether that time will be
decades or centuries.
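It is easy to show numerically what a single-bit quantum gate does; the
following sketch is my example, not the book's. The Hadamard gate, a standard
single-bit gate, sends a bit into an equal superposition of 0 and 1, and
applying it twice makes the two computational paths interfere so that the bit
returns to 0 with certainty:

```python
# A single-bit quantum gate in action: one Hadamard creates superposition,
# a second one makes the paths interfere and restores the definite value 0.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # Hadamard gate

ket0 = np.array([1.0, 0.0])

once = H @ ket0   # amplitudes [0.707, 0.707]: the bit is 0 and 1 in parallel
twice = H @ once  # amplitudes [1, 0]: the two paths to |1> cancel exactly

print(np.round(once ** 2, 3))   # probabilities [0.5, 0.5]
print(np.round(twice ** 2, 3))  # probabilities [1.0, 0.0] (interference)
```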
The fact that the repertoire of the universal quantum computer contains
environments whose rendering is classically intractable implies that new
classes of purely mathematical computations must have become tractable
too. For the laws of physics are, as Galileo said, expressed in mathematical
language, and rendering an environment is tantamount to evaluating certain
mathematical functions. And indeed, many mathematical tasks have now
been discovered which could be efficiently performed by quantum
computation where all known classical methods are intractable. The most
spectacular of these is the task of factorizing large numbers. The method,
known as 
Shor’s algorithm, was discovered in 1994 by Peter Shor of Bell
Laboratories. (While this book was in proof further spectacular quantum
algorithms have been discovered, including 
Grover’s algorithm for searching
long lists very rapidly.)
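To convey the structure of Shor's algorithm (though not, of course, its
speed), here is a sketch in which the one step that requires a quantum
computer, finding the period of a^x mod N, is replaced by a classical
brute-force search; everything else is ordinary classical arithmetic. The code
and names are my own illustration, not anything specified in the text.

```python
# Structural sketch of Shor's algorithm. Only find_period needs a quantum
# computer (which does it in polynomial time via interference); here it is
# faked classically, so this shows the reduction, not the speed-up.
from math import gcd
from random import randrange

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (the quantum step, faked here)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Return a non-trivial factor of an odd composite n."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d            # lucky guess: a already shares a factor
        r = find_period(a, n)
        if r % 2 == 1:
            continue            # need an even period; try another a
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue            # trivial square root; try another a
        return gcd(y - 1, n)    # gcd(a**(r/2) - 1, n) divides n

print(shor_factor(15))  # prints 3 or 5
```

The quantum speed-up lives entirely inside find_period; the surrounding
reduction from factoring to period-finding is classical number theory.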
Shor’s algorithm is extraordinarily simple and requires far more modest
hardware than would be needed for a universal quantum computer. It is
likely, therefore, that a 
quantum factorization engine will be built long before
the full range of quantum computations is technologically feasible. This is a
prospect of great significance for 
cryptography (the science of securely
communicating and authenticating information). Realistic communication
networks may be global and have large, constantly changing sets of