Communicating in Words
Semiotics (also
known as semiology) is the study of signs and their meanings.
Semioticians define a relationship between a perceptible
element, known as the signifier, and the signified—the
meaning given to this signifier within a code. Semioticians
also distinguish between signs and indexes.
For instance, smoke is an index of fire, and not a sign,
because it is simply a natural consequence of the fire. A
sign, in contrast, is something used intentionally to convey
some meaning.
In addition to indexes, Charles Sanders Peirce, one of the fathers
of semiotics, defines two types of signs. Icons refer
to the objects that they signify through the resemblance that
they bear to them (for example, a photo or drawing of an object
is an icon for that object). Symbols refer
to the objects that they signify through cultural conventions
(for example, scales as a symbol of justice).
It was long believed that Neanderthal man could not communicate
verbally—that though Neanderthals must have had some primitive
form of language, they could not produce the complete range
of sounds of human language. According to a hypothesis advanced
by American linguist Philip Lieberman, Neanderthals’ larynxes
had not descended as low as those of Homo sapiens,
so they would have had a great deal of difficulty in pronouncing
the three main vowels present in the majority of the world’s
languages (ee as in “beet”, oo as in “boot”, and
a as in “aha!”).
However, some authors argue that to speak a rudimentary language,
one need not master all of the vowels, so long as the language
has a sufficient number of consonants.
Moreover, recent research has raised
questions about Lieberman’s hypothesis. Many researchers
find it hard to believe that Neanderthals, who produced sophisticated
tools, adorned their bodies with bracelets and necklaces,
buried their dead, and produced works of art, had little
or no ability to communicate verbally.
Some authors even believe that the
skull on which Lieberman based his work was not truly representative
of Neanderthal man. Contrary to his findings, reconstructions
of other Neanderthal skulls have shown that their base would
have allowed the existence of a vocal tract very similar
to that of modern humans. For example, the discovery in 1989
of the 60 000-year-old skeleton of a male Neanderthal with
a hyoid bone (the bone that supports the larynx) even led
some researchers to say that he had probably been able to
speak.
One thing is certain: Neanderthals disappeared about 28 000
years ago, leaving the Earth to their rivals, Homo sapiens
sapiens, who had everything they needed to use an articulate
symbolic language with elaborate syntax.
There
are many theories about the origins of language, and the
dates cited for its first appearance vary greatly from one author
to another. They range from the time of Cro-Magnon man, about
40 000 years ago, to the time of Homo habilis, about
2 million years ago. Another highly controversial issue is whether
language emerged at
several different locations in the world (the theory of polygenism)
or at only one (the theory of monogenism).
Among the theorists of monogenism, two major
schools of thought can be distinguished. The first, influenced
by Chomskyan theories in the broad sense, starts from the premise that the human
species as we know it arose from an unlikely genetic mutation that
occurred about 100 000 years ago, in which certain of the brain’s
circuits were reorganized. This reorganization would have given
rise to the human “language instinct”, thus paving
the way for the explosive growth in all the cognitive abilities
that the powerful communication tool of language provides. In this
view, language is an innate component of human life, which is why
it should be possible to identify and describe a “universal
grammar”, and why it is so hard to imagine an intermediate
form of language that could function without all the grammatical
structures found in languages today.
This view of the origins of language has
been criticized as anti-evolutionist, but several renowned scholars
of evolution have lent it their support. Paleoanthropologist Ian
Tattersall, for example, writes that Homo sapiens
sapiens “is not simply an improved version of its ancestors—it’s
a new concept, qualitatively distinct from them”. For Tattersall
and many other scientists, the mechanism that gave rise to language
involved the relatively sudden combination of pre-existing elements
that had not been selected specifically to produce this attribute
but that, together, made it possible. Such a mechanism is thought
to have come into play many times in the course of evolution; the
paleontologist Stephen Jay Gould calls it exaptation,
and the features that result from it, such as language, he calls “spandrels”.
Like Noam Chomsky, Gould
also believes that human language is so different from anything
else in the animal kingdom that he does not see how it could have
developed from ancestral cries or gestures, but he can imagine
its having emerged as a side effect of the explosive growth of
human cognitive abilities.
The second major school of monogenism posits
a concept of the evolution of Homo sapiens in which language
developed from cognitive faculties that were already well established.
In this view, the birth of language was triggered not by a random
mutation, but simply by the availability of an increasingly powerful
cognitive tool. Bit by bit, those groups of hominids who developed
an articulate language that let them discuss past and imaginary
events would thereby have supplanted those groups that as yet had
only a proto-language.
This second school of monogenism, often associated with connectionist
theories in cognitive science, holds that the brain has a general
capacity for language rather than a dedicated innate grammar. One
of its best-known proponents, the linguist Steven Pinker, believes
that language may very well have been the target that evolution
was aiming for. Pinker invokes the Baldwin effect, for example,
as a major evolutionary force that could have led to modern language
(see box below). The ability to learn language would therefore
have become a target of natural selection, thus permitting the
selection of language-acquisition devices that were genetically
pre-wired into the brain’s circuits.

The Tower of Babel (1604),
by Abel Grimmer (1570-1619).
According to the Biblical story of the Tower of Babel, everyone
originally spoke the same language, but then God changed things
so that everyone spoke different languages. As a result, the
tower, which was supposed to reach to the heavens, was never
completed, because the people building it could no longer understand
one another.
This second school of monogenism also
implies intermediate forms of language that eventually led to our
own. For example, Derek Bickerton, a linguist
renowned for his work on the evolution of language, suggests that
human language abilities evolved in two stages. In the first, humans
would have used a proto-language of symbolic representations that
took the concrete form of vocal and/or gestural signs. This stage
might have lasted nearly 2 million years. Then, about 50 000
years ago, humans would have developed a more formal syntax that
let them exchange ideas with significantly more precision and clarity.
With syntax, people could not only label things (“leopard
tracks”, “danger”, etc.), but also join several
labels together to express even more meaning (“When you see
leopard tracks, watch out!”).
Thus, if symbolic representations,
already present in the proto-languages, made the construction of
the first mental models of reality possible, it was the emergence
of syntax that gave human language the great richness that it has
today. To give some idea of how the transition from symbolic representations
to syntax may have occurred, Bickerton cites the example of the
pidgin languages of the colonial period. These rudimentary languages
were developed by people of different cultural origins who needed
to communicate (see box below). Though the pidgin languages themselves
had almost no grammar, when they were learned by a second generation,
they became what are known as creoles: new, grammatical languages
derived from multiple mother tongues.
Another important scholar of
the origins of language, anthropologist Terrence Deacon,
takes exception to the primacy of grammar, believing instead that
the essential feature of language is its use of symbols. According
to Deacon, the so-called symbols that some authors say animals
use are actually only indexes (see sidebar). He says that people
who try to teach language to chimpanzees always ensure that the
things designated by the words or icons being taught are present
in the animal’s environment, which makes these words or icons
mere indexes. Deacon associates this inferior level of language,
based on indexes and icons, with that
used by children in their earliest years. In contrast, says
Deacon, articulate adult language depends on the specificity of
the symbols, which in turn depends on the logical connections that
each symbol in a language has with the others. For Deacon, it is
this network of relationships, far more than the mere occurrence
of arbitrary signs, that characterizes the symbols used by human
beings.
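
To make this contrast more concrete, here is a toy sketch in Python; the mini-lexicon and the relations in it are invented for this illustration and are not drawn from Deacon’s work. An index is modelled as a direct pairing between a sign and a co-present object, while the “meaning” of a symbol is computed from the web of other symbols it is connected to.

    # Toy contrast between an index and a symbol. The lexicon and the
    # relations below are invented for this illustration only.

    # An indexical "lexicon": each sign is simply paired with a
    # co-present object, as in the chimpanzee experiments described above.
    indexes = {
        "banana-lexigram": "banana present in the room",
        "juice-lexigram": "juice present in the room",
    }

    # A symbolic lexicon: each word is characterized only by its
    # logical connections to other words in the system.
    symbol_relations = {
        "leopard": {"danger", "animal", "flee"},
        "danger": {"leopard", "flee", "warn"},
        "flee": {"danger"},
        "warn": {"danger"},
        "animal": {"leopard"},
    }

    def indexical_meaning(sign):
        # The meaning of an index is exhausted by the one thing it points at.
        return indexes.get(sign)

    def symbolic_meaning(word, depth=2):
        """The 'meaning' of a symbol here is the web of other symbols
        reachable from it, i.e. its position in the network of relationships."""
        reached, frontier = {word}, {word}
        for _ in range(depth):
            frontier = {n for w in frontier
                        for n in symbol_relations.get(w, ())} - reached
            reached |= frontier
        return reached - {word}

    print(indexical_meaning("banana-lexigram"))  # one paired object
    print(symbolic_meaning("leopard"))           # {'danger', 'animal', 'flee', 'warn'}

Strip away the relations and each word collapses into a mere index: a label tied to a single thing, which is exactly the inferior level of language Deacon ascribes to the trained chimpanzees.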
Deacon therefore thinks that we must try to understand the evolution
of language not in terms of innate grammatical functions, but rather
in terms of the manipulation of symbols and of relationships among
symbols. There is certainly a human predisposition for language,
but this predisposition would be the result of the co-evolution
of the brain and of language. What is innate, according to Deacon,
is a set of mental abilities that give us certain natural tendencies,
which are expressed in the same universal language structures.
Thus Deacon offers a different concept from Chomsky, who associates
the origins of universal grammar with a language-specific innovation
in the brain.
Deacon sees this co-evolution
of the brain and language as being rooted in the complexity of
humans’ social lives, which involved not only a high degree
of co-operation between the men and women of a community to acquire
resources, but also exclusive monogamous relationships to ensure
proper care for very young children who were greatly dependent
on adults. This highly explosive mixture is not found in any other
species (the great apes, for example, gather their food individually).
To ensure the stability of the group, rituals and restrictions
were required: in other words, abstractions that could be comprehended
only if the individuals involved could understand and use symbols.
A pidgin is
a language created spontaneously from a mixture of several
languages, so that the people who speak those languages can communicate.
The people who develop a pidgin language agree on a limited
vocabulary and employ only a rudimentary grammar. For example,
in Franco-Vietnamese pidgin, this results in sentences such
as “Moi faim. Moi tasse. Lui aver permission repos.
Demain moi retour campagne.” [Me hunger. Me lie down.
He have permission rest. Tomorrow me return country.]
The first documented pidgin, the Lingua Franca, was used by
Mediterranean merchants in the Middle Ages. Another well known
pidgin was developed from a mixture of Chinese, English, and
Portuguese to facilitate trade in Canton, China during the
18th and 19th centuries. Another classic example is the pidgin
developed by slaves in the Caribbean, whose cultural origins
were too diverse for their own languages to survive after their
forced transplantation.
Children who grow up together and learn a pidgin tend to spontaneously
impose a grammatical structure on it to create a creole: a true
language whose vocabulary comes from other languages. But this
does not happen with all pidgins, and some are lost or become
obsolete.
According to researchers such as Derek Bickerton, people who
find themselves in the particular circumstances described above
revert to an older form of communication, what Bickerton calls
a proto-language, of which pidgin would be the modern manifestation.
In 1896, American psychologist
James Mark Baldwin proposed an evolutionary mechanism that
soon came to be known as the “Baldwin effect”.
It is a process whereby a behaviour that originally had to
be learned can eventually become innate, that is, fixed in
the genetic programming of the species concerned. The effectiveness
of the learning plays a key role in the Baldwin effect, and
distinguishes it from Lamarckian inheritance of acquired
characteristics: nothing that is learned is passed on directly;
instead, natural selection favours genetic variations that make
the learning more effective.
The idea behind the Baldwin effect is that individuals who
are able to learn a given kind of behaviour more effectively
may over the course of their lives acquire advantages that
individuals whose brains are less plastic will not. Natural
selection will therefore tend to favour those who always learn
faster until, at some point in evolution, the behaviour will
no longer need to be learned at all: it will have become instinctive.
It should be noted that the Baldwin
effect assumes that the environment remains relatively stable:
if the environment changed too much, plasticity would remain
an important adaptive factor, and there would be no selection
against it. But if the environment remains stable for a long
time, natural selection may favour a mutation that makes
the behaviour innate and hence more robust and efficient.
The Baldwin effect, as an evolutionary
mechanism that targets learning abilities, has been successfully
simulated with many computer programs. Many scientists believe
that it may have played a decisive role in the evolution
of language.
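
A classic demonstration of this kind is the simulation published by Geoffrey Hinton and Steven Nowlan in 1987. The sketch below is a simplified Python version of that experiment, with illustrative parameter values rather than the figures from their paper. Each gene is either innately correct (“1”), innately wrong (“0”), or plastic (“?”, settable by learning); individuals who find the target behaviour earlier in life reproduce more, and over the generations the plastic alleles are gradually replaced by innately correct ones.

    import random

    GENOME_LEN = 10    # loci; the target behaviour is "all loci set correctly"
    POP_SIZE = 200
    TRIALS = 1000      # learning attempts per individual per lifetime
    GENERATIONS = 30

    def random_genome():
        # Initial allele mix, roughly as in Hinton & Nowlan:
        # 25% innately correct ('1'), 25% innately wrong ('0'), 50% plastic ('?')
        return [random.choice(['1', '0', '?', '?']) for _ in range(GENOME_LEN)]

    def fitness(genome):
        """Reward finding the target behaviour early in life.

        A '0' allele makes the target unreachable, so learning never succeeds;
        '?' alleles must all be guessed correctly within a single trial; a
        pure-'1' genome needs no learning at all and scores highest.
        """
        if '0' in genome:
            return 1.0
        n_plastic = genome.count('?')
        for trial in range(TRIALS):
            # One learning trial: each plastic locus is guessed correctly
            # with probability 1/2, independently.
            if all(random.random() < 0.5 for _ in range(n_plastic)):
                return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
        return 1.0

    def next_generation(population):
        scores = [fitness(g) for g in population]
        total = sum(scores)

        def pick():
            # Fitness-proportional (roulette-wheel) selection.
            r = random.uniform(0, total)
            acc = 0.0
            for genome, score in zip(population, scores):
                acc += score
                if acc >= r:
                    return genome
            return population[-1]

        children = []
        for _ in range(POP_SIZE):
            mother, father = pick(), pick()
            cut = random.randrange(1, GENOME_LEN)   # single-point crossover
            children.append(mother[:cut] + father[cut:])
        return children

    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        alleles = [a for g in population for a in g]
        n = len(alleles)
        print(f"gen {gen:2d}: innate '1' {alleles.count('1')/n:.2f}  "
              f"plastic '?' {alleles.count('?')/n:.2f}  "
              f"wrong '0' {alleles.count('0')/n:.2f}")
        population = next_generation(population)

As in the original experiment, the proportion of innately wrong alleles collapses quickly, while the plastic alleles decline more slowly and need not disappear entirely: once learning is fast enough, there is little further selection pressure against the remaining plasticity.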