Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence


If representations are distributed, such that they can be partially tokened, then the categories they represent can admit of
imprecise borders and internal structure. Furthermore, if the content
of representations is contextually modulated, then the extension of
the category will be contextually sensitive.
In other words, if representations are distributed and contextually
modulated, then the categories they represent are such that there can
be borderline cases of membership, the borders can shift contextually
and there can be graded membership admitting of better and worse
cases.
To recap, advocates of distributed representation take the content
conferring mechanism on representations to be essentially mediated
by relations with other representations, the categories they represent
to be contextually sensitive – allowing imprecise and shifting borders
and internal structure – and the composition of mental representa-
tion to be the complex, contextually modulated interaction of pat-
terns of activation in a highly interconnected network.
18.4 COGNITIVE ARCHITECTURE
So far in this chapter I’ve discussed two distinct views of mental rep-
resentation and used this distinction as an entryway into understand-
ing the competing symbolic and connectionist paradigms in artificial
intelligence research.
These differing views concerning mental representation are of central importance in distinguishing between the two paradigms but they do not exhaust the differences between them. Connectionists also differ from their symbolic counterparts with respect to views
concerning cognitive architecture.
The term cognitive architecture refers to the structure and nature of
the information processing systems of a cognitive agent. In other
words, the term refers to the organisational and implementational
features of the computational hardware which facilitates cognition.
The symbolic tradition in artificial intelligence research sees the
cognitive architecture of the human mind as a physical symbol system.
Connectionists, on the other hand, view human cognitive architecture
in terms of connectionist networks which facilitate parallel distributed
processing.
In previous chapters we’ve seen numerous examples of how we might
implement cognitive functions with symbol systems. Connectionist net-
works, as we will see in the following chapter, are particularly well suited
to carrying out functions that are notoriously difficult to implement in
symbol systems architecture.
To the extent that connectionist architecture is readily amenable to
implementing functions which we take to be importantly constitutive
of cognition and which prove problematic to implement with symbol
systems, we have at least one reason for preferring a connectionist
approach over a symbolic approach to artificial intelligence.
The following chapter will be devoted to making clear the concepts
which have so far only been mentioned with little in the way of expla-
nation. After explaining these concepts and exemplifying the oper-
ations of connectionist networks with numerous examples, we will
then return to further discuss the relation between the symbolic and
the connectionist paradigms.
CHAPTER 19
ARTIFICIAL NEURAL NETWORKS
The connectionist paradigm in artificial intelligence research rose to
prominence in the last two decades of the twentieth century. Artificial
neural networks were shown to be quite efficacious in modelling
certain cognitive phenomena that had been problematic to implement
with symbolic computational architecture.
The operations of artificial neural networks are designed to mimic
the neural circuitry of the brain – they are often referred to as imple-
menting ‘brain style’ processing. As such, it may aid your under-
standing of this chapter to first revisit the discussion of the operations
of neurons in Chapter 4.
In this chapter we are going to develop a sound understanding of
the operations of artificial neural networks and their utility in mod-
elling cognitive functions. We’ll begin by describing the basic connec-
tionist architecture and explaining how this differs from symbolic
computational architecture.
19.1 CONNECTIONIST ARCHITECTURE
Classical symbolic computational architecture – which we described
at length in Chapters 7 to 9 and have seen many examples of since –
admits of the following essential features.
Firstly, there is only one processor in the architecture – a central pro-
cessing unit (CPU) which processes program instructions. Secondly,
the CPU carries out these instructions serially – one after the other.
Thirdly, the CPU addresses and operates on localised register contents.
Connectionist architecture, on the other hand, is crucially distinct
with respect to each of these features. Connectionist networks are
composed of a (typically large) number of simple processing units
(nodes) which operate in parallel rather than serially. Content in con-
nectionist networks is not local and addressable, but distributed
across numerous nodes and encoded as a pattern of connections.
The basic elements of an artificial neural network are simple pro-
cessing units which are designed to emulate the operations of indi-
vidual neurons. These units are functionally organised in layers –
there will be an input layer of nodes and an output layer of nodes.
There will typically also be a ‘hidden’ layer of nodes – these are
neither input nor output units but serve to mediate between these
layers.
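As a rough illustration of this layered organisation – not a rendering of any particular network discussed in the text – the units of a small network might be grouped as follows (a Python sketch with arbitrarily chosen layer sizes):

```python
# Units are identified by index and grouped functionally into layers.
# Three input units, four hidden units and two output units are chosen
# purely for illustration.
layers = {
    "input":  [0, 1, 2],
    "hidden": [3, 4, 5, 6],
    "output": [7, 8],
}

# The hidden units are neither input nor output units; they mediate
# between the input layer and the output layer.
```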
As you have no doubt determined, nodes are connected to each
other. Precisely how they are interconnected defines various architec-
tural variations which needn’t concern us much here. In networks of
interesting complexity, each node will be connected to a large number
of other nodes – just as individual neurons are connected to large
numbers of other neurons. The simplest type of connectionist archi-
tecture (or the most complex depending on how you look at it) is such
that every node is connected to every other node in the network.
Information processing in artificial neural networks is achieved
through the propagation of activation along the connections through
the network. Each node in the network has a level of activation which
is influenced by the activation it receives from other nodes which are
connected to it.
We’re going to make some simplifying assumptions here about
activation. Firstly, we’re going to assume that at each time step, the
activation of a node is entirely determined by the activation it
receives along its incoming (afferent) connections (rather than
consider a more complicated function which also takes into account
the antecedent level of activation of the node from the previous
time step).
Connections between nodes can be either excitatory or inhibitory
and this is represented by assigning a weight – a positive or negative
numerical value – to each connection. Excitatory connections – which
are positively weighted – will excite (increase the level of activation
of) the node they are connected to. Inhibitory connections – which are
negatively weighted – will inhibit (decrease the level of activation of)
the node to which they are connected.
Each node in the network, you will recall, is a simple processing
unit. These nodes implement two functions – an activation function
and a transfer function.
The activation function determines whether or not a node will fire
based on its level of activation at that time step. We’re only going to
consider the simplest of activation functions – a threshold function.
Nodes with a threshold activation function will fire iff their level of
activation at that time step is above some threshold value assigned to
the node. If a node fires, it passes activation along each of its outgoing (efferent) connections to other nodes, otherwise no activation
propagates through that node.
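Stated as code, the threshold activation function is a one-line rule. The following Python sketch is only an illustrative rendering of it; the threshold is a parameter rather than any value fixed by the text.

```python
def fires(activation, threshold):
    """Threshold activation function: a node fires iff its level of
    activation at this time step is above its assigned threshold."""
    return activation > threshold

# A node with threshold 1 fires on activation 2, but not on activation 1
# (1 is not above 1).
assert fires(2, 1)
assert not fires(1, 1)
```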
The transfer function determines how a node updates its level of
activation based on the activation it receives along its afferent connections. Again, we're only going to consider the simplest of transfer
functions – a weighted sum function. To determine the level of acti-
vation of a node with a weighted sum transfer function, we simply
take the sum of the weights on those afferent connections along which activation was received – that is, the weights on connections from nodes that fired.
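Putting the two functions together, a single time step of propagation can be sketched as below. This is only an illustrative Python rendering under the simplifying assumptions above: connections are held in a weight matrix, a node's activation is recomputed from scratch at each time step, and only connections from nodes that fired contribute to the weighted sum.

```python
def step(weights, thresholds, fired):
    """Propagate activation through the network for one time step.

    weights[i][j] -- weight on the connection from node i to node j
                     (0 where there is no connection); positive weights
                     are excitatory, negative weights inhibitory.
    thresholds[j] -- threshold value assigned to node j.
    fired[i]      -- True if node i fired on the previous time step.

    Returns a list saying which nodes fire on this time step.
    """
    n = len(thresholds)
    next_fired = []
    for j in range(n):
        # Weighted sum transfer function: add up the weights on the
        # afferent connections along which activation arrived.
        activation = sum(weights[i][j] for i in range(n) if fired[i])
        # Threshold activation function: fire iff the activation is
        # above the node's threshold.
        next_fired.append(activation > thresholds[j])
    return next_fired
```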
19.2 SIMPLE ARTIFICIAL NEURAL NETWORKS
Let’s take a look at some basic examples to illustrate these operations. To keep things simple, I’m going to use integers for connection
weights and threshold values. Figure 19.1 depicts the simplest artifi-
cial neural network that does something interesting.
This network has two input nodes (A and B) and one output node
(C). We’re interested in whether or not the output node will fire
(although its efferent connection is not afferent to any other node).
The input nodes we can imagine as detectors of some kind. They are
set to fire if some environmental condition is met – perhaps if a light
is on or if a switch is in a particular position.
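Figure 19.1 is not reproduced here, but we can get a feel for how such a network behaves by running the step function sketched in the previous section over it. The particular values below – integer weights of +1 on each connection and a threshold of 0 on C – are illustrative assumptions rather than values read off the figure.

```python
# Nodes indexed 0 = A, 1 = B, 2 = C. Each input node excites C with an
# assumed weight of +1; C has no efferent connections that matter here.
weights = [
    [0, 0, 1],   # connections from A
    [0, 0, 1],   # connections from B
    [0, 0, 0],   # connections from C
]
# Assumed thresholds. Only C's threshold matters: the input nodes are set
# to fire by their environmental conditions, not by afferent activation.
thresholds = [0, 0, 0]

# Suppose detector A fires and detector B does not:
print(step(weights, thresholds, fired=[True, False, False]))
# -> [False, False, True], i.e. C fires.
# Raising C's threshold to 1 would instead require both A and B to fire.
```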
The two connections in the network are both excitatory and
equally weighted. If A fires it excites C and if B fires it excites C. The
  