A large proportion of the studies
done on the brain’s language functions since the 19th century have involved
establishing correlations between a particular language
deficit and the locations of lesions in the brains of autopsy subjects.
But a single lesion can sometimes cause damage to several brain structures at
once, which makes interpreting such findings difficult.
Modern brain
imaging techniques have made it possible to study the activation of the brain
areas associated with language in healthy subjects while they perform specified
language activities. These studies have confirmed the importance of Broca’s
and Wernicke’s areas for language while also identifying them as part of
a wider network of interconnected
areas of the brain that contribute to language. This concept has now
replaced the historical notion of language “centres”.
In bilingual people, the earlier in life the second language was acquired, the more similar the areas of the brain involved in understanding and producing the two languages tend to be. In contrast, brain-imaging
studies have shown that when people learn a second language later in life, the
areas of the cortex involved in understanding the two languages are not always
the same. Interestingly, when bilingual people lose
the use of one of their languages as the result of a brain injury,
the language that they retain is not necessarily their mother tongue.
Indeed, bilingualism is a complex phenomenon, and much about its functional
bases remains unknown. For example, because Italian uses phonemes and syntax that
are much closer to French than to Chinese, will the brain of someone who is bilingual
in French and Italian operate differently from that of someone who is bilingual
in French and Chinese? Among people who are bilingual in French and Chinese, are
there differences between the brains of those whose mother tongue is Chinese and
those whose mother tongue is French? How does the frequency with which a person
uses a language affect the corresponding structures in their brains? Clearly,
the number of factors that may influence the language-processing areas of the
brains of bilingual persons is quite considerable.
BROCA’S AREA, WERNICKE’S AREA, AND OTHER LANGUAGE-PROCESSING
AREAS IN THE BRAIN
The process of identifying
the parts of the brain that are involved in language began in 1861, when Paul
Broca, a French neurosurgeon, examined the brain of a recently deceased patient
who had had an unusual disorder. Though he had been able to understand spoken
language and did not have any motor impairments of the mouth or tongue that might
have affected his ability to speak, he could neither speak a complete sentence
nor express his thoughts in writing. The only articulate sound he could make was
the syllable “tan”, which had come to be used as his name.
When
Broca autopsied Tan’s brain, he found a sizable lesion in the left inferior
frontal cortex. Subsequently, Broca studied eight other patients, all of whom
had similar language deficits along with lesions in the frontal lobe of the left hemisphere.
This led him to make his famous statement that “we speak with the left hemisphere”
and to identify, for the first time, the existence of a “language centre”
in the posterior portion of the frontal lobe of this hemisphere. Now known as
Broca’s area, this was in fact the first area of the brain to be associated
with a specific function—in this case, language.
Ten years later,
Carl Wernicke, a German neurologist, discovered another part of the brain, this
one involved in understanding language, in the posterior portion of the left temporal
lobe. People who had a lesion
at this location could speak, but their speech was often incoherent and made
no sense.
Wernicke's observations
have been confirmed many times since. Neuroscientists now agree that running around
the lateral sulcus (also known as the fissure of Sylvius) in
the left hemisphere of the brain, there is a sort of neural loop that is involved
both in understanding and in producing spoken language. At the frontal end of
this loop lies Broca's area, which is usually associated with
the production of language, or language outputs. At the other end (more specifically,
in the superior posterior temporal lobe), lies Wernicke's area,
which is associated with the processing of words that we hear being spoken, or
language inputs. Broca's area and Wernicke's area are connected by a large bundle
of nerve fibres called the arcuate fasciculus.
This
language loop is found in the left hemisphere
in about 90% of right-handed persons and 70% of left-handed persons, language
being one of the functions that is performed asymmetrically in the brain. Surprisingly,
this loop is also found at the same location in deaf persons who use sign language.
This loop would therefore not appear to be specific to heard or spoken language,
but rather to be more broadly associated with whatever the individual’s
primary language modality happens to be.
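Combined with the figure cited later in this text (about 9 out of 10 adults are right-handed), these two percentages imply that the loop sits in the left hemisphere for the large majority of the population. Here is a quick back-of-the-envelope check, sketched in Python using only the approximate figures quoted in this text:

```python
# Figures quoted in the text (approximate):
p_right_handed = 0.90          # about 9 out of 10 adults are right-handed
p_loop_left_if_right = 0.90    # loop on the left in ~90% of right-handers
p_loop_left_if_left = 0.70     # loop on the left in ~70% of left-handers

overall = (p_right_handed * p_loop_left_if_right
           + (1 - p_right_handed) * p_loop_left_if_left)
print(f"Loop in the left hemisphere for about {overall:.0%} of people")  # ~88%
```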
A general problem encountered in
any attempt to determine the locations of brain functions is that every brain
is unique. Just as every normal human hand has five fingers, but everyone’s
hands are different, all human brains have the same major structures, but the
size and shape of these structures can vary from one individual to another—by
as much as several millimetres. Average measurements can be used, of course, in
studying the brain, but the fact remains that the same type of lesion will not
always cause exactly the same type of deficit in several different individuals.
Functional brain maps that have been standardized for brain size therefore provide a useful reference, but one that does not correspond exactly to the brain of any particular individual.
Often, when you have a word on the
tip of your tongue, you can remember what letter it starts with, or what sound
it ends with, or how many syllables it has, though you have not yet recalled the
word itself. This shows that accessing a word when you are preparing to speak
is not an all-or-nothing process: you can retrieve its various characteristics
independently of one another.
In general, women’s overall
reading abilities are better than men’s, and this gender difference often
makes itself apparent when children are still in primary school. Researchers are
still investigating how much of this difference is inborn and how much is acquired,
but part of the answer would seem to be that girls have a greater taste for reading.
Thus at least part of the reason that girls are better readers might simply be
that they spend more time reading, while boys often spend more time playing sports.
Some experts believe that increasing the time that boys spend reading and writing,
and offering them content that interests them, could reduce this gap between boys
and girls substantially.
Girls also seem to be better at spelling. The
explanation here might be that females use both
hemispheres of the brain in processing sounds, while males tend to
use mainly the left side. If girls are therefore better at isolating the various
sounds in a word, it would make sense that they would also be better at decoding
it and spelling it.
MODELS OF SPOKEN AND WRITTEN LANGUAGE FUNCTIONS IN THE BRAIN
A first model of the
general organization of language functions in the brain was proposed by American
neurologist Norman Geschwind in the 1960s and 1970s. This “connectionist”
model drew on the lesion studies done by Wernicke and his successors and is now
known as the Wernicke-Geschwind model. According to this model, each of the various
characteristics of language (perception, comprehension, production, etc.) is managed
by a distinct functional module in the brain, and each of these modules is linked
to the others by a very specific set of serial connections. The central hypothesis
of this model is that language disorders arise from breakdowns in this network
of connections between these modules.
According to this model,
when you hear a word spoken, this auditory signal is processed
first in your brain’s primary auditory cortex, which then sends it on to
the neighbouring Wernicke’s area.
Wernicke’s area associates the structure of this signal with the representation
of a word stored in your memory, thus enabling you to retrieve the meaning of
the particular word.
In contrast, when you read
a word out loud, the information is perceived first by your visual
cortex, which then transfers it to the angular
gyrus, from which it is sent on to Wernicke’s area.
Whether
you hear someone else speak a word or you read the word yourself, it is the mental
lexicon in Wernicke’s area that recognizes this word and correctly interprets
it according to the context. For you then to pronounce this word yourself,
this information must be transmitted via the arcuate fasciculus to a destination
in Broca’s area, which plans the pronunciation process. Lastly, this information
is routed to the motor cortex, which controls the muscles that you use to pronounce
the word.
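To make these serial pathways concrete, the following minimal sketch (in Python, with hypothetical module names, offered as an illustration rather than a neuroscientific tool) represents the Wernicke-Geschwind model as a directed graph. Tracing a route through the graph mirrors the steps just described, and removing a connection illustrates the model's central hypothesis that language disorders arise from breakdowns in the connections between modules.

```python
# A toy rendering of the Wernicke-Geschwind model as a directed graph.
# The module names are illustrative labels, not precise anatomical claims.
PATHWAYS = {
    "primary_auditory_cortex": ["wernickes_area"],  # hearing a spoken word
    "visual_cortex": ["angular_gyrus"],             # reading a written word
    "angular_gyrus": ["wernickes_area"],
    "wernickes_area": ["brocas_area"],              # via the arcuate fasciculus
    "brocas_area": ["motor_cortex"],                # pronunciation is planned...
    "motor_cortex": [],                             # ...then articulated
}

def route(start, goal, graph):
    """Depth-first search for a serial processing route from start to goal."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None  # no route: in the model, a language disorder

# Speaking a word you hear versus a word you read:
print(route("primary_auditory_cortex", "motor_cortex", PATHWAYS))
print(route("visual_cortex", "motor_cortex", PATHWAYS))

# Simulate a lesion of the arcuate fasciculus by cutting that connection:
lesioned = dict(PATHWAYS, wernickes_area=[])
print(route("primary_auditory_cortex", "motor_cortex", lesioned))  # None
```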
The Wernicke-Geschwind model is thus based on the anatomical
location of areas of the brain that have distinct functions. On the whole, this
model provides a good understanding of the primary language disorders, such as
Broca’s
aphasia or Wernicke’s aphasia. But it also has its limitations. For
one thing, its assumption that the various areas involved in processing speech
are connected in series implies that one step must be completed before the next
one can begin, which is not always actually the case. Because this model also
fails to explain certain partial language disorders, other
models have been proposed to address these shortcomings.
In addition to semantic
memory, which lets us retain the various meanings of words, we must
use other specialized forms of memory in order to speak. For example, to pronounce
any given phoneme of a language that you know how to speak, you must place your
tongue and mouth in a particular position. They assume this position unconsciously,
but obviously you must have stored it in memory somewhere in your brain.
In some languages, such as Spanish, the relationship between spelling and
pronunciation is fairly straightforward, so it is fairly easy to retrieve the
pronunciation of a word when you read it. But in other languages, the exact same
string of letters may be pronounced very different ways in different words—for
instance, the “ough” in “thought”, “tough”,
“through” and “though”, in English, or the “ars”
in “jars”, “mars”, and “gars”, in French.
These arbitrary variations must be memorized as such, with no logical rules to
help.
The brain hemisphere in which the
main language abilities reside has often been referred to as the “dominant”
hemisphere for language. But since we now know that the
other hemisphere also contributes to language, it would be more
accurate to describe the two hemispheres as sharing responsibility for the many
aspects of language, rather than one hemisphere’s somehow exercising dominance
over the other.
Anthropologists have been able to
investigate handedness in ancient cultures by examining their tools. For example,
by examining the marks left on a flint ax by the blows struck to make it, researchers
can tell whether the person who did this work was right-handed or left-handed.
Researchers have also examined ancient art to see what proportion of people are
depicted using their right hand and what proportion using their left.
Researchers can generally estimate
how right-handed or left-handed someone is by asking him or her a simple set of
questions, such as “What hand do you write with?”, “What hand
do you use to throw a ball?”, and “What hand do you use to brush your
teeth?”
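Formal versions of this kind of questionnaire, such as the Edinburgh Handedness Inventory, convert the answers into a single laterality quotient. The sketch below (a simplified illustration in Python, not the full clinical instrument) scores each answer as right or left and computes LQ = 100 × (R − L) / (R + L), which runs from −100 (strongly left-handed) to +100 (strongly right-handed).

```python
# A simplified handedness score, loosely based on the laterality quotient
# used by instruments like the Edinburgh Handedness Inventory (illustration
# only, not the full clinical questionnaire).
QUESTIONS = [
    "What hand do you write with?",
    "What hand do you use to throw a ball?",
    "What hand do you use to brush your teeth?",
]

def laterality_quotient(answers):
    """LQ = 100 * (R - L) / (R + L), for answers of "right" or "left":
    +100 is strongly right-handed, -100 strongly left-handed, and values
    near 0 suggest ambidexterity."""
    right = sum(a == "right" for a in answers)
    left = sum(a == "left" for a in answers)
    return 100 * (right - left) / (right + left)

print(laterality_quotient(["right", "right", "right"]))  # 100.0
print(laterality_quotient(["right", "left", "right"]))   # 33.3...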
HANDEDNESS, LANGUAGE, AND BRAIN LATERALIZATION
Perhaps the most striking
anatomical characteristic of the human brain is that it is divided into two
hemispheres, so that it has two of almost every structure: one on the left
side and one on the right. But these paired structures are not exactly symmetrical
and often differ in their size, form, and function. This phenomenon is called
brain lateralization.
The two most lateralized functions
in the human brain are motor control and language. When a function is lateralized,
this often means that one side of the brain exerts more control over this function
than the other does. The side that exerts more control is often called the “dominant
hemisphere” for this function, but this expression can be somewhat misleading
(see sidebar).
Lateralization
of motor control is what determines whether someone is right-handed or left-handed.
When someone is ambidextrous—when they can use either hand as easily as
the other—it means that their brain is only partly lateralized or not at
all lateralized for motor control.
About
9 out of 10 adults are right-handed. This proportion seems to have remained stable
over many thousands of years and in all cultures in which it has been studied
(see sidebar).
Now, what about language—what is its “dominant”
hemisphere? And is there any correlation between handedness and language lateralization?
Considering how easily we can determine whether someone is right-handed or left-handed,
if there were such a correlation, it might prove very useful for research. And
indeed, this correlation does exist, but it is not perfect. In the vast majority
of right-handed people, language abilities are localized in the left hemisphere.
But contrary to what you might expect, the opposite is not true among left-handed
people, for whom the picture is less clear. Many “lefties” show a
specialization for language in the left hemisphere, but some show one in the right,
while for still others, both hemispheres contribute just about equally to language.
Though handedness does influence the brain hemisphere that people use
to speak, the left hemisphere does seem to have a natural predisposition for language,
and this predisposition
is reflected anatomically.
Even though language has a sort
of “music” to it, from a neurological standpoint, music and language
are distinct functions, because the sounds of music and the sounds of language
are processed in different parts of the brain. Here are two famous cases demonstrating
that language functions and musical functions are independent.
The first
case is the French composer Maurice Ravel. After suffering an injury to the left
side of his brain, he became aphasic. As a result, he could no longer transcribe
melodies, but he could still recognize them, which showed that his ability to
perceive music (as opposed to writing or performing it) had been preserved.
The other example is Ernesto “Che” Guevara. Though Che was a
brilliant speaker, he suffered from congenital amusia, which
made him completely unable to perceive music. Some nasty wits might say that the
only reason Che made revolution was to take out his frustration at not being able
to tell a salsa from a tango. But of course, they’d be wrong.
THE RIGHT HEMISPHERE’S CONTRIBUTION TO LANGUAGE
Verbal language is not the
only way that two people communicate with each other. Even before they open their
mouths, they are already communicating through various non-verbal mechanisms.
First of all, their physical appearance, the way they
dress, the way they carry themselves, and their general attitude all form a context
that lends a particular coloration to their verbal messages. Next, the particular
position of their bodies during conversation, the way their eyes move, the gestures
they make, and the ways they mimic each other will also impart a certain emotional
charge to what they say. There is also what is often called the music of language—the
variations in tone, rhythm, and inflection that alter the meanings of words.
When we are talking about language, it is therefore useful to distinguish
between verbal language—the literal meaning of the words—and everything
that surrounds these words and gives them a particular connotation. That is the
big difference between denoting and connoting: the message that is perceived never
depends solely on what is said, but always on how it is said
as well.
For example, if you ask
someone who has right-hemisphere damage to tell you which of
the two pictures here best portrays the expression “She has a heavy heart”,
that person will point to the woman with the big heart on her sweater rather than
to the woman in tears. Similarly, if you remarked in a sarcastic tone that someone
was a really nice guy, a person with right-hemisphere damage would think you really
meant it.
When scientists first
began to investigate what functions are performed by the parts of the right hemisphere
that are homologous to the language areas of the left hemisphere, most of their
initial findings came from studying people who had lesions
in these parts of the right hemisphere.
Because the sign language used by
the deaf involves so many visual and spatial tasks, you might expect it to be
controlled by the right hemisphere. But in fact, the proportion of people who
are left-lateralized for language is just as high among deaf people who use sign
language as it is among people with normal hearing.