When you hear a sound, your brain
first automatically determines whether it comes from a human voice or from some
other source. Next, if the source is a human voice, your brain decides whether
the sound is a syllable or not, and lastly, whether it is a real word or a pseudo-word
(a group of sounds that has no meaning). By capturing functional brain images
while subjects listened to these various sounds, researchers have been able to
distinguish between those areas of the brain that are involved in simply listening
to sounds and those areas that are involved in understanding them.
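The sequence of decisions described above amounts to a simple classification cascade, which the toy Python sketch below makes explicit. The three yes/no questions come from the text; the function, its boolean inputs, and the discrete "steps" are illustrative simplifications, not a claim about how the brain actually computes.

```python
# A toy sketch of the successive decisions described above, written as a
# classification cascade. Purely illustrative: the brain does not take
# clean boolean inputs or return labels.

def classify_sound(is_voice: bool, is_syllable: bool, is_known_word: bool) -> str:
    if not is_voice:
        return "non-vocal sound"              # step 1: human voice or not?
    if not is_syllable:
        return "vocal but non-speech sound"   # step 2: syllable or not?
    if not is_known_word:
        return "pseudo-word (no meaning)"     # step 3: real word or pseudo-word?
    return "real word"

print(classify_sound(True, True, False))  # -> pseudo-word (no meaning)
```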
Though the terms Broca’s area
and Wernicke’s area are in common use, it is important to remember that
the boundaries of these areas are not clearly defined and may vary from one individual
to another. It should also be noted that these areas may be involved in functions
other than language.
Originally thought of as a “centre for language”,
Broca’s area is now regarded more as one part of a complex network involved
in semantic, syntactic, and phonological processing, and even in tasks not related
to language (such as movement).
This new conception suggests that subdivisions of this area may exist, or even that the area can be defined only
in more abstract terms. Thus Broca’s area may eventually come to be seen
as a historical concept that has no true anatomical or functional correlate.
While performing brain operations
without general anaesthesia, neurosurgeons such as Wilder Penfield and George
Ojemann discovered that applying electrical stimuli directly to the areas of the
cortex that are involved in language can disrupt language functions in certain
very specific ways. If, for example, while a subject is speaking, a weak stimulus
is applied to a part of the left hemisphere that corresponds to Broca’s
area, the subject will begin to show hesitations in his or her speech. But if
a strong enough stimulus is applied to the same area, the person becomes unable
to speak at all.
Curiously, stimulating sites that are close to Broca’s
area sometimes produces different effects, while stimulating sites that are farther
away may produce similar ones. This observation has led many researchers to assert
that the areas of the brain that are involved in language are probably far
more complex than the Wernicke-Geschwind model proposes.
BROCA’S AREA, WERNICKE’S AREA, AND OTHER
LANGUAGE-PROCESSING AREAS IN THE BRAIN
Another region that plays a key role in language is the inferior parietal lobule, also known
as “Geschwind’s territory”, in honour of the American neurologist
Norman Geschwind, who foresaw its importance as early as the 1960s. Brain imaging
studies have now shown that the inferior parietal lobule (angular gyrus and supramarginal
gyrus) is connected by large bundles of nerve fibres to both Broca’s area
and Wernicke’s area. Information might therefore travel between these last
two areas either directly, via the arcuate fasciculus, or by a second, parallel
route that passes through the inferior parietal lobule.
The
inferior parietal lobule of the left hemisphere lies at a key location in the
brain, at the junction of the auditory, visual, and somatosensory cortices, with which it is massively connected. In addition, the neurons in this lobule are multimodal, meaning that they can process different kinds of stimuli (auditory, visual, sensorimotor, etc.) simultaneously. This combination of traits makes the inferior parietal lobule an ideal candidate for grasping the multiple properties of spoken and written words: their sound, their appearance,
their function, etc. This lobule may thus help the brain to classify and label
things, which is a prerequisite for forming concepts and thinking abstractly.
The
inferior parietal lobule is one of the last structures of the human brain to have
developed in the course of evolution.
This structure appears to exist in rudimentary form in the brains of other primates,
which indicates that language
may have evolved through changes in existing neural networks, rather than
through the emergence of completely new structures in the brain.
The
inferior parietal lobule is also one of the last structures to mature in human
children, and there are reasons to believe that it may play a key role in the
acquisition of language. The late maturation of this structure would explain,
among other things, why children cannot begin to read
and write until they are 5 or 6 years old.
In most people, the right hemisphere
is not the dominant hemisphere for language. Even so, it is still involved
in understanding simple words, short phrases, metaphorical language, and prosody.
When the left hemisphere is damaged, the right hemisphere
can play an even more important role in language. One such case involves young
children who suffer from frequent epileptic seizures that do not respond well
to medication and that compromise their cognitive development.
If the focus of these seizures lies in only one hemisphere but the seizures spread to both, an operation called a hemispherectomy, in which a large portion of the diseased hemisphere is removed, may be performed to control them. If the hemisphere removed is the one that was dominant for language
in this child (usually the left), and the child is very young, the right hemisphere
will take over its language functions almost perfectly.
This phenomenon
suggests that the right hemisphere has what it takes to handle the main functions
of language. These operations have also demonstrated that the plasticity
of the right hemisphere persists beyond what is generally considered the critical
period for language acquisition.
The model first proposed by Geschwind has undergone many modifications since then. For example, we now know that when someone reads a word, their brain does not have to convert it into a pseudo-auditory representation before they can pronounce it. In fact, the visual information from the printed
word seems to be transmitted from the visual cortex to Broca’s area without
having to pass through the angular gyrus.
Blind people use their fingers to read the raised-dot characters of Braille. We now know that in
people who are blind from birth, it is the visual cortex that takes charge of
the information received through the fingers when they are reading Braille, despite
the total absence of any visual input.
While reading
requires the co-ordinated operation of the brain’s visual
system and its linguistic system, writing depends on co-ordination between its linguistic system and its motor system, which controls
the precise
gestures involved in writing.
The terms “alexia”
and “agraphia”
refer to the inability to read and the inability to write, respectively. These two
language
disorders seem to depend on two specialized systems that are
distinct and independent.
Another condition, called
“anomia”,
consists of difficulty in retrieving the names of objects. It is often observed
following damage to the angular gyrus.
MODELS OF SPOKEN AND WRITTEN LANGUAGE FUNCTIONS IN THE
BRAIN
The beginnings of spoken language in human beings go back perhaps 2 million
years and were accompanied by changes in the brain that enabled it to process
such language more effectively. This was not the case for written
language, which goes back scarcely 5000 years. We may therefore say that humans
are biologically designed to speak, but not to read and write.
As with
so many other functions, the development of language in children follows the same
pattern as the development of language in their species: children learn to read
and write several years after they have mastered spoken language. The system of
graphic language symbols is thus added onto the system of phonological language
symbols that is already in place.
According to the
Wernicke-Geschwind model, when one person hears another speak a word, it is
perceived first in the auditory cortex, then passed on to Wernicke’s area.
In contrast, according to this model, when someone reads a word,
it reaches the brain via the eyes rather than the ears. Consequently, it is perceived
first, as a graphic pattern, by the primary
visual cortex, which passes it on to the angular
gyrus. There, at the junction of the temporal, occipital, and parietal lobes,
the spelling of the word is deciphered. The neurons of the angular gyrus are also
especially well placed to categorize, conceptualize, and draw connections among
various characteristics of an object. The angular gyrus is thus thought to be directly involved when you assign a name to an object or when you read its name. The angular
gyrus is also more active when you are retrieving the meaning of a word or holding
it in memory for a short time.
From the angular gyrus, the information is then
passed on to the adjacent region, Wernicke’s area, where it is recognized
as a word associated with its corresponding auditory form. From there, regardless
of whether the word has been heard or read, the message travels to Broca’s
area, which adds a syntactic structure and an articulation plan. This rich, complex
information is then transferred to the motor
cortex, adjacent to Broca’s area. The pyramidal neurons of the motor
cortex then send their signals to the muscles of the mouth and larynx that produce
the spoken word.
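To make the flow of the model easier to follow, here is a toy Python sketch that traces a word through the stations just described. The station names are the real anatomical areas of the model, but the functions and the strings they pass along are invented placeholders, not claims about how neurons actually encode anything.

```python
# A toy trace of the Wernicke-Geschwind information flow described above.
# Each "station" is a placeholder function that just relabels its input.

def auditory_cortex(sound):
    return f"auditory pattern of '{sound}'"

def primary_visual_cortex(text):
    return f"graphic pattern of '{text}'"

def angular_gyrus(graphic_pattern):
    # Deciphers the spelling and maps it onto an auditory-like form.
    return graphic_pattern.replace("graphic", "auditory")

def wernickes_area(auditory_pattern):
    # Recognizes the pattern as a word with its corresponding auditory form.
    return f"word recognized from {auditory_pattern}"

def brocas_area(recognized_word):
    # Adds a syntactic structure and an articulation plan.
    return f"articulation plan for {recognized_word}"

def motor_cortex(articulation_plan):
    # Drives the muscles of the mouth and larynx.
    return f"spoken output from {articulation_plan}"

def repeat_heard_word(sound):
    # Hearing route: auditory cortex -> Wernicke -> Broca -> motor cortex.
    return motor_cortex(brocas_area(wernickes_area(auditory_cortex(sound))))

def read_word_aloud(text):
    # Reading route adds two stations (visual cortex, angular gyrus)
    # in front of the same Wernicke -> Broca -> motor pathway.
    graphic = primary_visual_cortex(text)
    return motor_cortex(brocas_area(wernickes_area(angular_gyrus(graphic))))

print(repeat_heard_word("dog"))
print(read_word_aloud("dog"))
```

Note how, in this model, reading merely prepends two stations to the same Wernicke-Broca-motor pathway that heard words follow.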
Recent studies suggest that at least two distinct neural systems may be involved in reading. According to this view, the brain
reads mainly by translating written characters into the corresponding phonological
elements of spoken language, but also by drawing connections between the complete
images of written words and their meanings. This latter recall process may thus
in a sense short-circuit the process of drawing connections between words’
phonological signatures and their meanings.
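This dual-route idea lends itself to a small illustration. In the sketch below, a whole-word (lexical) route short-circuits a slower rule-based (phonological) route whenever the word is already known. The mini-lexicon and letter-to-sound rules are invented for illustration; richer computational versions of this idea exist, such as Coltheart's dual-route cascaded (DRC) model.

```python
# A minimal sketch of dual-route reading. Everything concrete here
# (lexicon entries, letter-to-sound rules) is a toy assumption.

# Lexical route: stored whole-word pronunciations.
LEXICON = {"yacht": "/jɒt/", "cat": "/kat/"}

# Phonological route: naive one-letter-to-one-sound rules.
LETTER_TO_SOUND = {"c": "k", "a": "a", "t": "t", "y": "j",
                   "h": "h", "o": "ɒ"}

def read_aloud(word: str) -> str:
    # The lexical route, when it succeeds, short-circuits the
    # slower rule-based conversion.
    if word in LEXICON:
        return LEXICON[word]
    # Otherwise, assemble a pronunciation sound by sound.
    sounds = "".join(LETTER_TO_SOUND.get(ch, "?") for ch in word)
    return f"/{sounds}/"

print(read_aloud("cat"))    # regular word: either route would work
print(read_aloud("yacht"))  # irregular word: only the lexical route is right
print(read_aloud("tac"))    # pseudo-word: only the rule route applies -> /tak/
```

The pseudo-word case shows why both routes are needed: rules alone mispronounce irregular words like "yacht", while the lexicon alone cannot read a string it has never stored.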
Be that as it may, reading
is a very rapid process. The brain is estimated to have only a few tenths of a
second to translate every symbol into sound. The speed of this processing is so
crucial that even very slight disorders in processing the image, its colour, or
its contrast may suffice to make reading an arduous task.
Major
lesions in the left parieto-occipital area can make someone unable to read and/or
write while leaving their spoken-language abilities intact. In contrast, lesions
in auditory associative areas such as Wernicke’s area will prevent someone
both from understanding spoken language and from reading.
Increasingly,
however, results from brain-imaging studies are raising questions about the classic
model of localized language functions as proposed by Geschwind. These findings
argue instead for zones
of convergence and a more distributed concept of language areas, one that
implies parallel coding and processing of information.
The middle portion of the inferior
temporal gyrus (Brodmann area 37) is another associative area of convergence for
language. It is located between the visual cortex and the anterior temporal cortex
and becomes more active during various language-related tasks, such as reading,
or pronouncing the name of an object or a letter. As with the angular gyrus, a lesion
in the middle portion of the inferior temporal gyrus can cause deficits in reading
and in identifying objects.
Teachers used to believe that left-handed
children would necessarily have poor handwriting, and therefore forced them to
write with their right hands. Scientists long believed that left-handers were
more susceptible than right-handers to stuttering,
dyslexia,
and allergies. But recent studies have shown that left-handers are actually no
more subject to physical or psychological disorders than right-handers are.
Among older people, the proportion
of left-handers is surprisingly low: only 1 to 2% among people in their eighties,
compared with 13% among those in their twenties. In the debate over the reasons
for this difference, some argue that left-handers are more likely to die from
accidents and pathological conditions. But others say that being left-handed is
more socially acceptable nowadays, whereas many older people who were naturally
left-handed were forced to learn to use their right hand instead. The weight of
the evidence suggests that this latter, sociological explanation is more likely.
Both handedness and the grammatical,
sequential aspect of language seem to have been acquired
gradually in the course of human evolution. For example, certain
evidence suggests that 2 to 3 million years ago, 60 to 70% of all Australopithecus
were right-handed. In Homo habilis, who lived 1.5 million years ago,
an estimated 80% of the population was right-handed. Then, about 150 000 years
ago, approximately 90% of the first Homo sapiens were right-handed—roughly
the same percentage as today.
HANDEDNESS,
LANGUAGE, AND BRAIN LATERALIZATION
Brain lateralization
is the phenomenon whereby a given function is preferentially controlled by one
side of the brain rather than the other. The brain’s left and right hemispheres
are thus the site of distinct cognitive functions whose complementarity is ensured
by the corpus callosum, the main bundle of nerve fibres connecting the two hemispheres.
Lateralization seems to be an ingenious strategy that developed over
the course of human evolution to get the most out of the space available in the
brain. For instance, lateralization increases processing speed by avoiding the
long pathways that would otherwise be needed to connect regions on opposite sides
of the brain. Also, when two symmetrical areas on opposite sides of the brain
perform two different functions, the brain’s cognitive capacities are in
a sense doubled.
Handedness
and language are two highly lateralized functions. Though there is no direct causal relationship between the two, the lateralization of the skilful hand is nevertheless correlated with the lateralization of language. Many studies, using tests such as the Wada test, have shown that in 92 to 96% of
right-handed people, it is the left hemisphere that is specialized
for language.
For left-handed people,
things are a bit more complicated, and the results of studies are less consistent.
Some authors say that about 70% of left-handed people are left-lateralized for
language, 15% are right-lateralized, and 15% are ambilateral (their language functions
are more evenly distributed between their two hemispheres). But other studies
have found that only 15% of left-handed people are left-lateralized for language
while 15% are right-lateralized and 70% are ambilateral to varying extents.
The difference in the linguistic functioning of the two hemispheres raises
the possibility of anatomical differences between the language areas on the left
and right sides of the brain, because though the two hemispheres are similar in
overall appearance, they are not exact replicas of each other. The first descriptions
of asymmetries between language-related areas on the two sides of the brain date
back to the 19th century, when brain autopsies revealed that the lateral
sulcus was longer and shallower in the left hemisphere than in the right.
More recently, researchers using tools such as magnetic resonance imaging (MRI) have discovered other left/right asymmetries in language-processing areas of the brain. One of the most
significant asymmetries is observed in the temporal planum,
located on the superior surface of the temporal lobe. This triangular region,
which penetrates deep into the lateral sulcus, forms the heart of Wernicke’s
area, one of the most important functional areas for language.
The left temporal planum appears to be more developed
in 65% of all individuals, and the right temporal planum in only 10%. In some
people’s brains, the temporal planum is more than five times larger on the
left than on the right, making it the most asymmetrical structure in the brain.
This greater size of the left temporal planum compared with the right
is already present in
the fetus, where it can be observed starting from the 31st week of gestation.
This observation strengthens the hypothesis of a genetic
predisposition for brain asymmetry.
Because such an overwhelming majority
of people are right-handed, left-handed people quickly come to realize that they
are living in a world where the objects are designed for right-handed people.
From coffee pots to power tools to guitars, any items that are not perfectly symmetrical
are designed for “righties”. The results may range from minor inconvenience
to serious injuries and even death—statistics show, for instance, that the
frequency of highway and workplace accidents is higher among left-handed people
than among right-handed ones.
But being left-handed can also have some
advantages. For instance, in the sport of fencing, the majority of the opponents
that any contestant faces are right-handed. Hence, left-handed fencers will have
far more experience in parrying attacks from their right-handed opponents than
these opponents will have in repelling their left-handed attacks.
Certain stimuli that seem similar
can preferentially activate one hemisphere or the other, depending on the life
experiences of the individual concerned. For instance, for the average person,
recognizing
the position of the pieces on a chessboard is
a spatial task and hence is performed chiefly by the right hemisphere. But for
a chess master, it is more like interpreting a language that has its own grammar
and hence tends to be performed by the left hemisphere instead.
Music
can also activate the left hemisphere more than the right, depending on whether
the individual has musical training or not. This finding contradicts the earliest
hypotheses, which held that the musical functions of the brain were located exclusively
in the right hemisphere.
A greater overall contribution by
the right hemisphere among left-handed people might explain why the proportion
of left-handers is higher among mathematicians and musicians than among the general
population. Both of these professions rely on visualization abilities that are normally located in this hemisphere.
THE RIGHT HEMISPHERE’S CONTRIBUTION TO LANGUAGE
Our social
interactions are based largely on our developing social intelligence regarding
other people—what some authors call a “theory
of mind”. The reason we need such intelligence is that in addition to
understanding the information conveyed explicitly by people’s words, we
must also constantly decipher their beliefs, intentions, knowledge, and affective
states if we are going to interact with these people effectively.
Consequently,
if we want to really understand a conversation with someone else, we need more
than a simple mastery of the basic elements of spoken human language, because
that person will also be expressing information non-verbally that alters the meaning
of what he or she is saying.
The phonological, syntactic, and lexical
aspects of this discourse are controlled by the left hemisphere, which is why
it was long considered the dominant
hemisphere for language.
The contributions of the right hemisphere
to language behaviour are more subtle and nuanced and were not recognized until
much later on. The right hemisphere provides the ability to go beyond the literal
meanings of words and employs multiple processes to do so. The science that studies communication from the perspective of this “minor hemisphere” for language is called pragmatics.
The pragmatic
function is the ability to understand things that are implicitly signified in
discourse—for example, the meanings of metaphors, or of questions like “Do
you have a light?” When
right-handed people suffer damage to the right hemisphere of the brain, this
pragmatic function is affected, and they tend to interpret such metaphors and
questions literally. In fact, these people react exactly as if they were dealing
with idioms in a foreign language: their grammar and phonology may be correct,
but they do not understand the verbal humour or metaphors that native speakers
of that language use every day. Thus, by contributing to the emotional and tonal
components of language, the right hemisphere infuses verbal communication with
additional meanings.
Where does the pragmatic function reside in the
brains of left-handed people? Researchers are not sure. There are many possibilities,
depending on where the person’s other, conventional language abilities are
located: in the left hemisphere, the right hemisphere, or both.
More generally speaking,
the right hemisphere seems to show a predilection for spatial tasks and for recognizing
faces and music, while the left hemisphere appears more active in dealing with
language and with computational and logical tasks. But these are, of course, only
generalizations: in normal individuals, the two hemispheres work together, exchanging
information via the corpus callosum.
This interaction between the two hemispheres exemplifies the extensive co-operation that takes place within the central nervous system and shows just how much the two hemispheres complement each other in most functions, including language.
In attempting to compare the size
of certain homologous cortical areas in the left and right hemispheres, researchers
face two major obstacles. First, the variations
among individuals are often greater than the variations between hemispheres.
Second, the parts of the brain that display activity in functional brain images
do not necessarily coincide with precise cytoarchitectural regions of the brain.