Minds and Computers : An Introduction to the Philosophy of Artificial Intelligence


particular content, as the causal determinants of action will be held
to be complexly interrelated with the values of many variables (the
contents of many beliefs, desires, etc.). We will have a lot more to say
about the idea of mental content in Chapter 18.
This line of argument does not speak directly to the fact that minds
differ also in terms of capacities. For instance, some minds are more
amenable to formal reasoning, some minds are more skilled at
employing language, some minds are skilled in working with
engineered artefacts, and some minds are capable of producing
extraordinary works of art. This leads us to our next objection.
10.4 LEARNING
Minds can learn. Consequently, different minds can do different
things. In other words, some minds can perform functions which
other minds cannot (or can perform certain functions better than
most other minds). How can computationalism account for this?
We have seen how computationalism can account for variation
between minds with respect to their content; however, minds also vary
with respect to capacities. This presents the computationalist with
two further challenges. Firstly, she must give a computational account
of the acquisition of new capacities, i.e. specify algorithms which
govern learning.
Ideally, we would like a specification of the algorithm(s) (or
varieties of algorithms) which actually do govern learning in human
minds. However, for the purposes of defending computationalism
against this objection, it suffices to show that learning is in principle
effective, in which case the specification of any correctly functioning
algorithm(s) will suffice.
Secondly, the computationalist needs to explain how there can be
functional equivalence between two systems with different capacities,
i.e. how it is that two isomorphisms of [MIND] can do different
things.
The latter challenge is the more fundamental so I will respond to
this first. Let's reflect on what it is for two formal systems to be
isomorphic to each other. In section 7.6 we said that a formal system [A]
is isomorphic to a formal system [B] iff we can derive [B] from [A]
through uniform substitution of symbols. So if, for instance, we take
a chess board in some configuration and replace all the pawns with
identical coins, then the result is another isomorphism of the same
formal system.
Further, if we have two chess boards and we play one for a number
of moves, then we will have two isomorphisms of the same formal
system in different states. A move might then be available on one
board – e.g. castling – which is not available on the other board.
The claim that two formal systems [A] and [B] are isomorphic
amounts to the following: for all states x and y and for all rules R,
if the application of R to ([A] in state x) generates state y, then the
application of R to ([B] in state x) will generate state y.
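This definition can be sketched in a few lines of Python. The representation of a system's state as a tuple of symbols, the chess-style symbols, and the particular rewrite rule are all illustrative assumptions, not anything from the text – the point is only that applying a rule and then substituting symbols agrees with substituting first and then applying the corresponding rule.

```python
# Sketch: isomorphism as uniform substitution of symbols.
# A "state" is a tuple of symbols; a "rule" rewrites one symbol.
# The symbols and the rule below are illustrative assumptions.

def substitute(state, mapping):
    """Uniformly replace each symbol in a state via the mapping."""
    return tuple(mapping[s] for s in state)

def make_rule(old, new):
    """A rule that rewrites every occurrence of `old` to `new`."""
    return lambda state: tuple(new if s == old else s for s in state)

# System [A] in state x, using chess-style symbols.
x_in_A = ("pawn", "rook", "pawn")

# Derive [B] by uniform substitution (pawns become coins, etc.).
mapping = {"pawn": "coin", "rook": "castle", "queen": "crown"}
x_in_B = substitute(x_in_A, mapping)

# A rule R of [A], and its counterpart in [B] under the same mapping.
R_in_A = make_rule("pawn", "queen")
R_in_B = make_rule("coin", "crown")

# Applying R to [A] in state x, then translating the result, agrees
# with applying the corresponding rule to [B] in the corresponding
# state: the two systems march in step.
y_in_A = R_in_A(x_in_A)
y_in_B = R_in_B(x_in_B)
assert substitute(y_in_A, mapping) == y_in_B
```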
Understanding that isomorphic formal systems in different states
may yield different outputs given the same sequence of rule
applications allows for a straightforward explanation of variation among
minds with respect to their response to a particular situation. If two
minds have different beliefs, desires, etc., then their [MIND]s are in
different states, hence we should not expect the same rule application
in both [MIND]s to result in the same output.
We have also gone some way towards understanding variation
among minds with respect to capacities – different rules might be
applicable to isomorphisms of [MIND] in different states. There is
more to be said here, however.
Consider the system [UM] from Chapter 9. Recall that [UM]
operates by decoding the value in R1 and running the program that the
value codes (if it codes a program at all). Now let's consider a similar
register machine – we'll call it [OS].
[OS] will have a large number of registers set aside in which to store
its own values. It will also have a large number of registers available
in which to do computations on these values. Some of the values
stored by [OS] will be codes of algorithms (programs). These can be
executed, using other values of [OS] as program input, in the space set
aside for computations and their output can then be stored as further
values of [OS].
So, for instance, one of the registers of [OS] might contain #[ADD]
(the code of the program [ADD]). At some point in its operation, the
program [OS] might address this register and instantiate [ADD] using
the values of two other registers as input, and store the output in
another register.
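A minimal sketch of this arrangement may help. The register layout, the toy scheme for coding programs as small integers, and the `decode` table below are all assumptions made for illustration; they stand in for the Gödel-style coding the text develops for [UM].

```python
# Sketch of [OS]-style storage and execution: one register holds the
# code of a program; [OS] decodes it and runs it on two other
# registers, storing the output in a fourth. The register layout and
# the coding scheme are illustrative assumptions.

def ADD(a, b):
    return a + b

def MULT(a, b):
    return a * b

# Toy coding: small integers code whole programs.
decode = {1: ADD, 2: MULT}

registers = {
    "r1": 1,    # contains #[ADD], the code of the program [ADD]
    "r2": 5,    # input value
    "r3": 7,    # input value
    "r4": 0,    # output register
}

# [OS] addresses r1, instantiates the program it codes using the
# values of r2 and r3 as input, and stores the output in r4.
program = decode[registers["r1"]]
registers["r4"] = program(registers["r2"], registers["r3"])
# registers["r4"] now holds 12
```

The output stored in r4 could itself then serve as input to a further coded program, which is the interrelation the next paragraph describes.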
Now, let us suppose that the program [OS] governs the operations
of many algorithms (stored as values in its registers), that many of
these may be in operation at any given time, that the output of algo-
rithms can be employed as the input of other algorithms, and that the
output of algorithms can determine which algorithm will be executed
next (and with which values).
The program [OS], while it may sound more complicated than
[UM], is clearly effective. When we speak of many algorithms being
in operation at the same time, what we mean is that many algorithms
are in process; however, only one step of one algorithm will be carried
out at each time step. [OS] is therefore a register machine program like
any other and, hence, is computable by [UM].
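The interleaving just described can be sketched with Python generators standing in for algorithms-in-process. The two toy algorithms and the round-robin scheduler are illustrative assumptions; the point is only that many algorithms can be "in process" while exactly one step of one algorithm is carried out at each time step.

```python
from collections import deque

# Sketch: many algorithms are in process at once, but only one step
# of one algorithm is carried out per time step. Each `yield` marks
# one step. Both algorithms here are illustrative assumptions.

def count_up(n):
    total = 0
    for i in range(1, n + 1):
        total += i
        yield ("count_up", total)   # one step

def repeat(symbol, times):
    s = ""
    for _ in range(times):
        s += symbol
        yield ("repeat", s)         # one step

def run(processes):
    """Round-robin: exactly one step of one algorithm per time step."""
    queue = deque(processes)
    trace = []
    while queue:
        proc = queue.popleft()
        try:
            trace.append(next(proc))
            queue.append(proc)       # still in process; requeue it
        except StopIteration:
            pass                     # this algorithm has halted
    return trace

trace = run([count_up(3), repeat("a", 2)])
# The trace alternates steps of the two algorithms until each halts,
# yet at any instant only one step of one algorithm is executed.
```

Because the whole thing is a single sequential procedure, it is a register machine program like any other, which is why [OS] remains computable by [UM].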
The system [MIND] can be understood as a very complex version
of [OS]. It functions by governing the operations of a large number
of algorithms which individually perform mental functions – algo-
rithms which transform sensory data into perceptual representations,
algorithms which govern bodily movements, algorithms which govern
linguistic production and comprehension, algorithms which deter-
mine actions based on beliefs, desires and the like, and so on.
To employ the software analogy, [MIND] is best understood as a
kind of operating system which manages the highly interrelated
operations of a large number of applications and controls the hard-
ware in which it is instantiated.
Some of the algorithms which [MIND] employs serve as learning
