Author manuscript; available in PMC: 2013 Apr 3.
Published in final edited form as: Cortex. 2010 Jun 16;48(2):242–254. doi: 10.1016/j.cortex.2010.06.001

What do brain lesions tell us about theories of embodied semantics and the human mirror neuron system?

Analia L Arévalo a,*, Juliana V Baldo a, Nina F Dronkers a,b,c
PMCID: PMC3615255  NIHMSID: NIHMS449181  PMID: 20621292

Abstract

Recent work has been mixed with respect to the notion of embodied semantics, which suggests that processing linguistic stimuli referring to motor-related concepts recruits the same sensorimotor regions of cortex involved in the execution and observation of motor acts or the objects associated with those acts. In this study, we asked whether lesions to key sensorimotor regions would preferentially impact the comprehension of stimuli associated with the use of the hand, mouth or foot. Twenty-seven patients with left-hemisphere strokes and 10 age- and education-matched controls were presented with pictures and words representing objects and actions typically associated with the use of the hand, mouth, foot or no body part at all (i.e., neutral). Picture/sound pairs were presented simultaneously, and participants were required to press a space bar only when the item pairs matched (i.e., congruent trials). We conducted two different analyses: 1) we compared task performance of patients with and without lesions in several key areas previously implicated in the putative human mirror neuron system (i.e., Brodmann areas 4/6, 1/2/3, 21 and 44/45), and 2) we conducted Voxel-based Lesion-Symptom Mapping analyses (VLSM; Bates et al., 2003) to identify additional regions associated with the processing of effector-related versus neutral stimuli. Processing of effector-related stimuli was associated with several regions across the left hemisphere, and not solely with premotor/motor or somatosensory regions. We also did not find support for a somatotopically-organized distribution of effector-specific regions. We suggest that, rather than following the strict interpretation of homuncular somatotopy for embodied semantics, these findings support theories proposing the presence of a greater motor-language network which is associated with, but not restricted to, the network responsible for action execution and observation.

Keywords: Embodiment, Mirror neurons, VLSM, Stroke, Embodied semantics

1. Introduction

Groundbreaking work from the early 1990s reported that a distinct group of neurons in area F5 of the macaque premotor cortex fire both when the monkey performs an action (i.e., execution) and when it observes someone else performing it (i.e., observation) (di Pellegrino et al., 1992). Much work since then has attempted to discover a similar ‘mirror neuron system’ in humans (HMNS). According to one version of ‘embodiment theory’, the sensorimotor regions of cortex normally involved in action execution are also recruited during action observation, planning and mental imagery (e.g., Rizzolatti and Craighero, 2004; Rizzolatti and Luppino, 2001; Rizzolatti et al., 2001, 1996; Gallese et al., 1996). A related theory – embodied semantics – is specific to humans, and suggests that the conceptual representations accessed during linguistic processing may also recruit those same sensorimotor regions (e.g., Kemmerer and Gonzalez-Castillo, 2010; Gallese and Lakoff, 2005; Pulvermüller, 2005; MacWhinney, 1999).

In a strict interpretation of this view, the same regions of premotor cortex should be recruited in all three conditions: 1) executing actions, 2) observing actions, and 3) processing/comprehending words referring to actions. More lenient versions of the theory might predict partially overlapping – but not identical – regions comprising a general motor-language network used across a number of linguistic tasks. Support for these more ‘lenient’ or ‘inclusive’ interpretations comes from reports of MNS-related activity in regions outside the motor and premotor cortices, including Broca’s area, somatosensory cortex, supplementary motor cortex, middle cingulate, temporal cortex, cerebellum, and the inferior/superior parietal lobule (e.g., Gazzola and Keysers, 2009; Kemmerer and Gonzalez-Castillo, 2010; de Zubicaray et al., 2008; Postle et al., 2008; Pobric and Hamilton, 2006; Möttönen et al., 2004; Tremblay et al., 2004; Tranel et al., 2003; Nishitani and Hari, 2002, 2000; Buccino et al., 2001; Rizzolatti et al., 1996).

While the macaque and human research to date has granted considerable credibility to the mirror properties of the first two conditions (i.e., executing and observing), the third condition – linguistic comprehension, or embodied semantics – has been more controversial. This is understandable, since this relatively recent addition to the theory requires experimentation in humans using various functional imaging methods. This type of work is arguably more complex and open to interpretation than work done with non-human primates.

Several recent fMRI, ERP and TMS studies testing embodied semantics presented healthy participants with lexical stimuli (either single words or sentences) referring to different body parts (mostly the hand, mouth and foot) (e.g., Pulvermüller et al., 2009; Esopenko et al., 2008; Kemmerer et al., 2008; Aziz-Zadeh et al., 2006; Buccino et al., 2005; Tettamanti et al., 2005; Hauk et al., 2004; Shtyrov et al., 2004). Activations for these stimuli were reported in somatotopically-organized regions. In other words, the pattern of peak activations for each effector followed, to varying degrees, the somatotopic organization of the motor homunculus first described by Penfield and Boldrey (1937). That is, moving along the motor strip in a ventral to dorsal direction, the mouth region is located ventrally, followed by the hand region, and then the foot region at the most dorsal end (Catani and Stuss, 2012).

However, critics of these findings suggest that the somatotopically-distributed activations reported in many of these studies are less than exact, and that a good match for all three effectors across tasks is rarely reported (Fernandino and Iacoboni, 2010; Turella et al., 2009). Many have also argued that the neurobiological boundaries of the primary and premotor cortices, as well as these regions’ somatotopic organization, have yet to be sharply defined (Kemmerer and Gonzalez-Castillo, 2010; Schubotz and von Cramon, 2003; Sanes and Schieber, 2001). Another major criticism is the noticeable lack of studies testing all three conditions in the same set of participants (Kemmerer and Gonzalez-Castillo, 2010; Dinstein et al., 2008). In response to this criticism, two recent studies that indeed tested all three conditions on one set of participants found that activations elicited by action word meaning representations did not match the activations observed for execution and observation (de Zubicaray et al., 2008; Postle et al., 2008).

Another critical consideration is that the evidence thus far only suggests that the putative human MNS may participate in and enhance language comprehension, but it does not confirm whether this system is necessary or sufficient to support such processing (Fischer and Zwaan, 2008). In other words, fMRI, PET and TMS studies can only tell us which brain regions participate in carrying out a given task. Lesion data, on the other hand, can help us understand which regions are in fact necessary for the task to be completed (Kemmerer and Gonzalez-Castillo, 2010; Mahon and Caramazza, 2009).

To date, very few studies have used lesion data to explore this area of inquiry. One set of studies with apraxic patients found that being unable to appropriately manipulate objects is not accompanied by an inability to recognize those same objects (Mahon et al., 2007; Negri et al., 2007; Rosci et al., 2003). In other words, at least for one group of patients, comprehension and execution are not subserved by the same brain regions. In our work with aphasic patients, we have chosen to focus on the comprehension component of the putative HMNS. In a previous study from our group, patients and controls showed a double dissociation on the ability to name and repeat action/object stimuli associated with manipulation; namely, patients were less accurate on manipulation-associated relative to neutral stimuli, while controls were relatively more accurate on the manipulation-associated stimuli (Arévalo et al., 2007). In that study, 60% of the patients who showed this significant ‘manipulability effect’ had a lesion in motor cortex or the nearby white matter.

In the current study, we tested the notion that a simple lexical semantic comprehension task would recruit the semantic embodiment component of the HMNS. We predicted that lesions due to stroke in key sensorimotor regions would impact patients’ accuracy in matching pictures and words associated with body parts, and extended our earlier work on manipulability by including stimuli associated with the mouth and foot.

2. Methods

2.1. Participants

Thirty-seven participants took part in this study: 27 (21 men, 6 women) with a history of a single, left-hemisphere stroke and 10 healthy control participants (6 men, 4 women). Patients and controls did not differ with respect to age, t(9) = −.45, p = .65, or education, t(9) = 1.17, p = .25.

All participants had normal, or corrected to normal, vision and hearing. They were all right-handed native English speakers, with no prior history of psychiatric or neurologic disorders. Only patients with a single, identifiable infarct confined to the left hemisphere were included (as assessed by a board-certified neurologist from each patient’s MRI and/or CT scan). Patients varied in their lesion location within the left hemisphere and in their degree of motor and language impairments (see Table 1 for patient information). The left-hemisphere patients were recruited from the VA Northern California Health Care System (Martinez, California, USA), and the age- and education-matched controls were recruited from the surrounding community. All participants were paid for their participation. Testing took place at the Center for Aphasia and Related Disorders, on the VA Northern California campus. Patients signed informed consent forms prior to participation, and the study was conducted in accordance with the Institutional Review Board at the VA and the Helsinki Declaration.

Table 1.

Patient information.

Patient code Age Gender TPO Edu Aphasia type 4/6 1/2/3 21 44/45 Lesion volume
120716 60 M 125 16 Anomic × 85
121021 62 M 65 11 Anomic × × 48
111058 63 F 79 18 Anomic × 104
121113 71 M 33 18 Anomic × 37
121274 45 M 19 14 Anomic × × × × 3
121029 57 M 145 16 Anomic 136
120979 50 M 75 16 Broca 158
120854 84 M 160 12 Broca 228
121032 68 M 192 16 Broca 229
121063 60 M 54 14 Conduction × × 101
111015 58 F 141 18 Conduction 182
111133 55 F 131 14 Conduction × 118
121138 73 M 31 17 Conduction × × 95
121137 64 M 23 12 Conduction × 96
121060 63 M 50 12 Wernicke 258
120806 82 M 189 14 Wernicke 220
120951 70 M 90 20 Wernicke × 104
120743 61 M 98 11 WNL × × 37
110729 65 F 108 18 WNL × × × 38
110997 54 F 60 17 WNL × × × 21
120896 65 M 97 16 WNL × × 85
121097 55 M 71 15 WNL × × 72
111018 58 F 66 17 WNL × × × × 2
121027 71 M 60 20 WNL × × 52
121284 52 M 14 17 WNL × × 47
120892 36 M 79 13 Unclassifiable × 194
121064 84 M 161 12 Unclassifiable × × × × 1

TPO (Time Post Onset: number of months since stroke at time of testing); Edu (education in years); Aphasia type (as assessed with the Western Aphasia Battery; Kertesz, 1982); 4/6, 1/2/3, 21, 44/45: presence (✓) or absence (×) of a lesion in each of the Brodmann areas; Lesion volume: total lesion volume (in cc).

2.2. Stimuli

The picture and word stimuli were 112 two-dimensional line drawings and their corresponding recorded words taken from the International Picture-Naming Project corpus from the Center for Research in Language at the University of California, San Diego (CRL-IPNP, Bates et al., 2000). Since researchers have previously extended the embodied semantics theory to include action-associated objects (i.e., nouns) as well as actions (i.e., verbs; e.g., Arévalo et al., 2007; Siri et al., 2008), we included equal numbers of actions and objects (or verbs and nouns) in our stimulus set. As a preliminary analysis, we tested whether differences in accuracy for body-part-associated versus neutral items would differ according to word category membership. The results revealed that ‘verb’ versus ‘noun’ status did not influence the degree of ‘motor effect’. We therefore collapsed all subsequent analyses across grammatical categories, and below we present the overall body-related (nouns and verbs) versus neutral item (nouns and verbs) comparisons.

Sixty-four items were actions/objects typically associated with one of the three body parts – hand (n= 32, e.g., camera, conduct), mouth (n= 14, e.g., lips, kiss) or foot (n= 18, e.g., skateboard, kick). The association of a stimulus with a particular body part was based on data from a previous study (Arévalo et al., 2004) in which healthy college-aged participants viewed the words from the CRL-IPNP (Bates et al., 2000) and were asked to do the first thing that came to mind when thinking of that word. Items were classified as either hand-, mouth- or foot-related if at least 70% of participants produced significant movements with one of those body parts when responding to each item (see Arévalo et al., 2004 for more details).

Performance on each set of body part-associated items (i.e., ‘Hand’, ‘Mouth’ and ‘Foot’) was directly compared to performance on an equal number of items selected from the same corpus which were classified as ‘neutral’. A total of 48 items from the corpus were given the neutral classification, which meant that they were not associated with the use of a body part (e.g., lighthouse, erupt). Each set of neutral items (32 for Hand, 14 for Mouth and 18 for Foot) was equated to one of the three effector-specific sets for the following variables: word frequency, objective visual complexity, grammatical class, and item difficulty (Appendices A–F; these ratings were previously calculated for the CRL-IPNP corpus, http://crl.ucsd.edu/~aszekely/ipnp/; for more details, see Székely et al., 2005; Arévalo, 2002; Székely and Bates, 2000). In addition, we obtained imageability ratings for our items from the MRC Psycholinguistic Database (Coltheart, 1981; http://www.psy.uwa.edu.au/mrcdatabase/uwa_mrc.htm) and confirmed that there were no significant differences in imageability across any of the item sets, all ps > .05. Therefore, although participants viewed all 112 stimuli items, the analyses below only include direct comparisons between carefully-matched subsets of items with equal numbers of items in each set: ‘Hand versus Hand Neutral’, ‘Mouth versus Mouth Neutral’, and ‘Foot versus Foot Neutral’. Appendices A–F list all body-related and neutral items, as well as each word’s frequency, each picture’s objective visual complexity, and each item’s Catch trial sound (i.e., the sound paired with each picture in the catch trials).

2.3. Procedure

All picture stimuli (from Bates et al., 2000) were presented on a white background using the Presentation experiment driver (www.neurobs.com). Each picture was paired with an aurally-presented word from the same corpus, which either matched or did not match the picture. Each picture was viewed twice: once accompanied by the matching word (congruent trial) and once accompanied by a non-matching word (catch trial). Catch trial words were the same set of spoken word stimuli presented in a different pre-randomized order, and each catch sound was carefully matched to the target picture for body part (or neutral status), grammatical class, frequency, objective visual complexity and difficulty (see Appendices A–F). Lists were presented in a pseudo-random order and were counterbalanced across participants.

Participants were asked to press the space bar only when the picture/word pairs matched. We chose a bar-press response specifically to avoid requiring a linguistic response, since a large proportion of our participants were classified as aphasic. In addition, we limited responses to the congruent trials only, since a ‘forced choice’ design (indicating ‘yes’ for a match or ‘no’ for no match) can be challenging for older brain-injured patients, particularly those with more frontal lesions who may suffer from deficits in attention and working memory. Since half of the neutral and half of the body-related items were congruent, the amount of bar pressing across comparison conditions was equally balanced. Patients with hemiplegia used their ipsilesional (left) hand to press the space bar, while control participants and patients without hemiplegia responded with their right hand. This did not pose a problem, since all of our patients were at least one year post-stroke and were comfortable using their ipsilesional hand; moreover, the dependent variable of interest was accuracy, not response time, and participants were given ample time to respond. Practice trials with different items were given before the actual test was administered, and participants were given as many practice runs as needed to understand the task. Examples of trials are shown in Fig. 1.
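To make the trial structure concrete, the following is a minimal, hypothetical sketch in Python (not the Presentation script actually used) of how congruent and catch trials could be assembled and pseudo-randomized; the three example items and their catch words are taken from Appendices A, C and E, and the file names are illustrative only.

```python
import random

# Each entry pairs a picture with its matching word and its matched catch word
# (examples taken from Appendices A, C and E); file names are hypothetical.
items = [
    ("hammer.png", "hammer", "pencil"),   # hand item
    ("kiss.png",   "kiss",   "blow"),     # mouth item
    ("kick.png",   "kick",   "jump"),     # foot item
]

trials = []
for picture, word, catch in items:
    # Each picture appears twice: once with the matching word (congruent trial)
    # and once with a non-matching, matched word (catch trial).
    trials.append({"picture": picture, "sound": word,  "congruent": True})
    trials.append({"picture": picture, "sound": catch, "congruent": False})

random.seed(0)          # one fixed pseudo-random order; lists were counterbalanced
random.shuffle(trials)

for trial in trials:
    expected = "press space bar" if trial["congruent"] else "withhold response"
    print(f"{trial['picture']} + '{trial['sound']}' -> {expected}")
```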

Fig. 1. Examples of trials.

2.4. Lesion reconstructions

In most cases, patients’ lesions were visualized with high-resolution, T1-weighted structural 3D MRI scans. CT scans were acquired for patients unable to undergo MRI scanning. For cases where digital MRI images were available, lesions were traced directly onto patients’ T1 scans using MRIcro software (Rorden and Brett, 2000), and a board-certified neurologist blind to the patients’ diagnoses reviewed the reconstructions for accuracy. Using a procedure outlined by Brett et al. (2001), the scans were then non-linearly transformed into MNI space (152-MNI template) in SPM5. Lesion masks were created for each reconstruction so that the presence of the lesion would not distort the SPM normalization procedure (i.e., cost function masking).

For cases where digital MRI images were not available, lesions were reconstructed from available CT or MRI onto an 11-slice, standardized template (based on the atlas by DeArmond et al., 1976) by the same board-certified neurologist mentioned above (see Friedrich et al., 1998; Knight et al., 1988). The templates were then digitized with in-house software and non-linearly transformed into MNI space (Collins et al., 1994) using SPM5 running on Matlab software (Mathworks, Natick, MA).
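As an illustration of the masking step (a minimal sketch with hypothetical file names, not the exact MRIcro/SPM5 pipeline used here), a traced lesion can be binarized into a mask so that lesioned voxels are excluded from the normalization cost function:

```python
import nibabel as nib
import numpy as np

# Hypothetical file names: the lesion traced on the patient's T1 is binarized
# so it can serve as a cost-function mask during spatial normalization,
# i.e., lesioned voxels do not contribute to the matching cost.
lesion = nib.load("patient_lesion_trace.nii")
mask_data = (lesion.get_fdata() > 0).astype(np.uint8)

mask = nib.Nifti1Image(mask_data, affine=lesion.affine)
nib.save(mask, "patient_lesion_mask.nii")
```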

2.5. Voxel-based Lesion-Symptom Mapping analyses (VLSM)

For Analysis 2, we used VLSM (Bates et al., 2003) to visualize all implicated brain regions at once on a single map. VLSM involves running a series of t-tests at every voxel to compare behavioral performance in patients with and without a lesion in that voxel. A colorized map is then generated based on the resultant t or p value at each voxel, with hotter colors representing more significant values. Therefore, VLSM shows to what extent individual voxels play a role in a particular task, and does not require one to select regions of interest a priori, thus allowing one to identify other important regions not considered through a stricter ‘region of interest’ analysis. Fig. 6 shows an overlay of all patients’ lesions, indicating the range of affected brain regions throughout the left hemisphere.

Fig. 6. Lesion map showing the extent and overlap of all 27 patients’ lesions. The color bar indicates degree of overlap of lesions, with the green regions representing half the group.
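To make the voxelwise procedure described above concrete, the following is a minimal Python sketch of the core VLSM computation, with hypothetical array names (lesion_maps, scores); it is an illustration, not the implementation of Bates et al. (2003).

```python
import numpy as np
from scipy import stats

def vlsm_t_map(lesion_maps, scores, min_per_group=10):
    """Voxelwise t-tests comparing behavioral scores of patients with and
    without a lesion at each voxel.

    lesion_maps: (n_patients, n_voxels) binary array, 1 = lesioned voxel
    scores:      (n_patients,) behavioral scores (e.g., accuracy)
    Voxels with fewer than min_per_group patients in either group are skipped.
    """
    n_patients, n_voxels = lesion_maps.shape
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = lesion_maps[:, v] == 1
        if lesioned.sum() < min_per_group or (~lesioned).sum() < min_per_group:
            continue
        t_map[v], _ = stats.ttest_ind(scores[~lesioned], scores[lesioned])
    return t_map
```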

In the VLSM analyses, t-tests were confined to voxels where there were at least 10 patients in each group (i.e., with and without a lesion) in order to reduce spurious results. The VLSM analysis used a permutation testing procedure to determine a critical cluster size threshold (at p < .05), based on 1000 random permutations of the data (see Kimberg et al., 2007). Specifically, this analysis randomly reassigns the scores to the patients 1000 times, and for each permuted dataset, it refits the model and records the size of the largest cluster. We then generated a colorized map based on the resultant t values at each voxel. The VLSM maps show only those voxels reaching this critical threshold.
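The permutation step could be implemented roughly as follows. This sketch reuses the vlsm_t_map function from the previous example; the voxelwise cut-off used to define clusters is an assumption for illustration, not the exact procedure of Kimberg et al. (2007).

```python
import numpy as np
from scipy import ndimage, stats

def cluster_size_threshold(lesion_maps, scores, volume_shape,
                           n_perm=1000, alpha=0.05):
    # Voxelwise cut-off defining suprathreshold clusters (an assumption here;
    # any fixed voxelwise criterion could be substituted).
    t_crit = stats.t.ppf(1 - alpha / 2, df=len(scores) - 2)
    max_sizes = []
    rng = np.random.default_rng(0)
    for _ in range(n_perm):
        shuffled = rng.permutation(scores)           # randomly reassign scores
        t_map = vlsm_t_map(lesion_maps, shuffled)    # refit the voxelwise model
        supra = (np.abs(t_map) > t_crit).reshape(volume_shape)
        labels, n_clusters = ndimage.label(supra)
        if n_clusters == 0:
            max_sizes.append(0)
            continue
        sizes = ndimage.sum(supra, labels, index=range(1, n_clusters + 1))
        max_sizes.append(sizes.max())                # largest cluster this permutation
    # Only observed clusters larger than this size survive at p < alpha
    return np.percentile(max_sizes, 100 * (1 - alpha))
```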

We also computed a power map to identify those voxels in which there was enough power to detect significant differences. We based power on an alpha of .05 and an effect size of .8 (Kimberg et al., 2007; Cohen, 1992, 1988). As can be seen in Fig. 7, there was adequate power throughout much of the middle cerebral artery territory, with less power in anterior, posterior and inferior regions. Therefore, predictions for the VLSM analyses were restricted to regions with adequate power.
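A per-voxel power estimate of the kind mapped in Fig. 7 can be approximated with standard two-sample power calculations. The sketch below uses statsmodels and the same hypothetical lesion_maps array as in the sketches above; it is an approximation, not the authors' computation.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

def power_map(lesion_maps, effect_size=0.8, alpha=0.05):
    """Approximate power at each voxel for a two-sample t-test, given how
    many patients do and do not have a lesion there."""
    n_lesioned = lesion_maps.sum(axis=0)
    n_intact = lesion_maps.shape[0] - n_lesioned
    solver = TTestIndPower()
    powers = np.zeros(lesion_maps.shape[1])
    for v in range(lesion_maps.shape[1]):
        if n_lesioned[v] < 2 or n_intact[v] < 2:
            continue  # no meaningful contrast at this voxel
        powers[v] = solver.power(effect_size=effect_size,
                                 nobs1=n_lesioned[v],
                                 ratio=n_intact[v] / n_lesioned[v],
                                 alpha=alpha)
    return powers
```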

Fig. 7. Map of power distribution, ranging from .4 (grey) to .8 or above (red) (.8 is an arbitrary cut-off used in previous VLSM studies; see Kimberg et al., 2007 for more details). Due to lower relative power, predictions in this study did not include very anterior, posterior and inferior regions.

3. Results

First, we analyzed the behavioral data from the group of healthy, age-matched control participants (n = 10) to ensure our task was appropriate for older subjects and that accuracy rates did not differ for body-related versus neutral items. The control group did not differ in accuracy for any of the comparisons: Hand versus Hand Neutral, t(9) = 1.14 (100% vs 99%), Mouth versus Mouth Neutral, t(9) = 1.73 (96% vs 98%), and Foot versus Foot Neutral, t(9) = −.58 (100% vs 99%), all ps > .05. Please see Table 2 for each patient’s score on all 6 conditions.

Table 2.

Patient accuracy on the 6 conditions.

Patient code Hand Hand Neut Mouth Mouth Neut Foot Foot Neut
120716 98% 97% 93% 100% 92% 94%
121021 95% 95% 89% 96% 89% 94%
111058 94% 100% 96% 100% 100% 100%
121113 100% 92% 86% 93% 94% 97%
121274 98% 100% 93% 96% 100% 100%
121029 97% 97% 93% 96% 92% 97%
120979 98% 100% 93% 100% 97% 100%
120854 97% 97% 93% 93% 94% 97%
121032 98% 97% 93% 96% 92% 100%
121063 91% 86% 79% 86% 89% 92%
111015 95% 94% 89% 86% 83% 97%
111133 92% 95% 89% 82% 97% 94%
121138 95% 91% 86% 82% 83% 94%
121137 97% 97% 86% 93% 94% 100%
121060 70% 80% 71% 82% 61% 81%
120806 81% 78% 86% 82% 78% 92%
120951 91% 94% 89% 75% 81% 92%
120743 89% 95% 89% 93% 94% 97%
110729 100% 95% 93% 96% 100% 97%
110997 100% 100% 96% 100% 97% 97%
120896 100% 100% 93% 100% 97% 100%
121097 100% 100% 96% 100% 100% 100%
111018 100% 100% 100% 100% 100% 100%
121027 100% 98% 96% 93% 100% 97%
121284 98% 100% 96% 93% 100% 97%
120892 98% 97% 93% 96% 97% 97%
121064 97% 94% 93% 89% 100% 97%

3.1. Analysis 1: effects of specific lesions on accuracy for body-related versus neutral stimuli

For the first set of analyses, we assessed the contribution of four different regions that have previously been implicated in the putative HMNS: Brodmann area (BA) 4/6 (primary motor cortex and premotor cortex), BA 1/2/3 (somatosensory cortex), BA 21 (middle temporal gyrus) and BA 44/45 (posterior inferior frontal gyrus or Broca’s area). Patients were divided into groups according to whether or not they had a lesion in that region, which was determined using the voxel-based BA maps accessible via MRIcro (www.mricro.com). Although there was some overlap of patients across the four different comparisons, no two groups were exactly the same (see Table 1 for details).
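As an illustration of this grouping step (a minimal sketch with hypothetical file names, not the exact MRIcro workflow), lesion presence in each Brodmann area can be determined by intersecting a patient's normalized lesion mask with a voxel-based BA atlas:

```python
import nibabel as nib
import numpy as np

# Brodmann areas tested in Analysis 1, keyed to their atlas codes.
REGIONS = {"BA 4/6": [4, 6], "BA 1/2/3": [1, 2, 3],
           "BA 21": [21], "BA 44/45": [44, 45]}

def lesion_in_regions(lesion_path, atlas_path, min_voxels=1):
    """Return, for each region, whether the normalized lesion mask overlaps
    the voxel-based Brodmann atlas by at least min_voxels voxels.

    File names are hypothetical; both images are assumed to be in the same
    (MNI) space and on the same voxel grid.
    """
    lesion = nib.load(lesion_path).get_fdata() > 0
    atlas = nib.load(atlas_path).get_fdata()
    return {name: int((lesion & np.isin(atlas, codes)).sum()) >= min_voxels
            for name, codes in REGIONS.items()}

print(lesion_in_regions("patient_lesion_mni.nii", "brodmann_atlas.nii"))
```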

3.1.1. BA 4/6 (primary motor cortex and premotor cortex)

Patients with lesions in BA 4/6 (n = 18) did not differ in accuracy for Hand versus Hand Neutral, t(17) = −.63, p = .53 (94% vs 95%), or Mouth versus Mouth Neutral, t(17) = 1.10, p = .27 (90% vs 92%), but did differ significantly on Foot versus Foot Neutral, t(17) = 3.72, p = .0002 (91% vs 96%), with Foot items being matched less accurately than their matched neutral items. Like the healthy control participants, patients without lesions in BA 4/6 (n = 9) did not differ significantly on any of the comparisons: Hand versus Hand Neutral, t(8) = 1.55, p = .12 (98% vs 96%), Mouth versus Mouth Neutral, t(8) = .69, p = .49 (92% vs 94%), and Foot versus Foot Neutral, t(8) = .91, p = .36 (96% vs 98%). Fig. 2 displays performance on all comparisons.
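The by-participant comparisons reported here and in the sections that follow are paired t-tests of the form sketched below. The values shown are simply the Hand and Hand Neutral accuracies of the first ten patients in Table 2, used to illustrate the computation; this subset does not reproduce any statistic reported in the text.

```python
from scipy import stats

# Hand and Hand Neutral accuracies for the first ten patients in Table 2.
hand         = [0.98, 0.95, 0.94, 1.00, 0.98, 0.97, 0.98, 0.97, 0.98, 0.91]
hand_neutral = [0.97, 0.95, 1.00, 0.92, 1.00, 0.97, 1.00, 0.97, 0.97, 0.86]

# Paired (within-participant) t-test comparing the two matched item sets.
t, p = stats.ttest_rel(hand, hand_neutral)
print(f"t({len(hand) - 1}) = {t:.2f}, p = {p:.3f}")
```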

Fig. 2. Patients with versus patients without lesions in BA 4/6. Patients with lesions in BA 4/6 (n = 18) were significantly less accurate at matching Foot relative to Foot Neutral items [91% vs 96%, respectively; t(17) = 3.72, p = .0002]. Patients without lesions in BA 4/6 (n = 9) did not show any discrepancies in performance on any of the comparisons. Bar pairs with stars indicate significant comparisons.

3.1.2. BA 1/2/3 (somatosensory cortex)

Patients with lesions in BA 1/2/3 (n = 17) did not differ in accuracy for Hand versus Hand Neutral, t(16) = .47, p = .64 (95% vs 94%). There was a trend for reduced accuracy on the Mouth versus Mouth Neutral items, t(16) = 1.83, p = .07 (89% vs 93%), and they performed significantly worse on Foot versus Foot Neutral items, t(16) = 3.86, p = .0001 (91% vs 96%). Patients without lesions in BA 1/2/3 (n = 10), on the other hand, did not differ significantly on any of the comparisons: Hand versus Hand Neutral, t(9) = −.46, p = .65 (96% vs 97%), Mouth versus Mouth Neutral, t(9) = −.32, p = .75 (93% vs 92%), and Foot versus Foot Neutral, t(9) = .80, p = .43 (96% vs 97%; see Fig. 3).

Fig. 3. Patients with versus patients without lesions in BA 1/2/3. Patients with lesions in BA 1/2/3 (n = 17) were significantly less accurate at matching Foot relative to Foot Neutral items [91% vs 96%, respectively; t(16) = 3.86, p = .0001]. Patients without lesions in BA 1/2/3 (n = 10) did not show any discrepancies in performance on any of the comparisons. Bar pairs with stars indicate significant comparisons.

3.1.3. BA 21 (middle temporal cortex)

Patients with lesions in BA 21 (n = 15) did not differ significantly on Hand versus Hand Neutral items, t(14) = .09, p = .93 (93% vs 93%), or on Mouth versus Mouth Neutral items, t(14) = .44, p = .66 (89% vs 90%), but differed on Foot versus Foot Neutral items, t(14) = 3.79, p = .0002 (89% vs 95%), with Foot items being matched less accurately. Patients without lesions in BA 21 (n = 12), on the other hand, did not differ significantly on any of the comparisons: Hand versus Hand Neutral, t(11) = .16, p = .87 (98% vs 97%), Mouth versus Mouth Neutral, t(11) = 1.76, p = .08 (93% vs 96%), and Foot versus Foot Neutral, t(11) = .86, p = .39 (97% vs 98%; see Fig. 4).

Fig. 4. Patients with versus patients without lesions in BA 21. Patients with lesions in BA 21 (n = 15) were significantly less accurate at matching Foot relative to Foot Neutral items [89% vs 95%, respectively; t(14) = 3.79, p = .0002]. Patients without lesions in BA 21 (n = 12) did not show any discrepancies in performance on any of the comparisons. Bar pairs with stars indicate significant comparisons.

3.1.4. BA 44/45 (inferior frontal gyrus)

Patients with lesions in BA 44/45 (n = 17) did not differ significantly on Hand versus Hand Neutral, t(16) = −.64, p = .52 (94% vs 94%) or Mouth versus Mouth Neutral, t(16) = .91, p = .36 (90% vs 92%), but did differ on Foot versus Foot Neutral, t(16) = 3.63, p = .0003 (91% vs 96%), with Foot items being matched less accurately than the neutral items. Patients without lesions in BA 44/45 (n = 10), on the other hand, did not differ significantly for any of the comparisons: Hand versus Hand Neutral, t(9) = 1.46, p = .14 (98% vs 96%), Mouth versus Mouth Neutral, t(9) = .96, p = .34 (91% vs 94%), and Foot versus Foot Neutral, t(9) = 1.11, p = .27 (96% vs 98%; see Fig. 5).

Fig. 5. Patients with versus patients without lesions in BA 44/45. Patients with lesions in BA 44/45 (n = 17) were significantly less accurate at matching Foot relative to Foot Neutral items [91% vs 96%, respectively; t(16) = 3.63, p = .0003]. Patients without lesions in BA 44/45 (n = 10) did not show any discrepancies in performance on any of the comparisons. Bar pairs with stars indicate significant comparisons.

Therefore, all four lesion groups (BA 4/6, BA 1/2/3, BA 21 and BA44/45) performed relatively worse on Foot versus Foot Neutral items. In addition, patients with lesions in BA 1/2/3 showed a trend for reduced accuracy on Mouth versus Mouth Neutral items. Patients in all four comparison groups (those without lesions in each region), on the other hand, did not differ significantly on any of the body part versus neutral item comparisons. This was similar to performance by the control participants.

3.2. Analysis 2: VLSM analyses

Next we generated 6 VLSM maps: Hand, Hand Neutral, Mouth, Mouth Neutral, Foot, and Foot Neutral. At a cluster threshold level of p < .05, the only two maps that revealed significant voxels were Hand and Foot (see Figs. 8 and 9). The regions which were significant for both maps (i.e., areas which when damaged resulted in processing deficits for both Hand and Foot items) included: left BA 6 (premotor cortex), BA 21 and 22 (middle and superior temporal cortex), BA 44/45 (posterior inferior frontal gyrus or Broca’s area), BA 42 (auditory association cortex), and BA 47 (inferior frontal gyrus). In addition, the Hand map included significant voxels in BA 38 (temporopolar cortex) and BA 41 (primary auditory cortex), while the Foot map included additional voxels in BA 4 (primary motor cortex).

Fig. 8. VLSM maps showing brain correlates of processing hand-related stimuli. Only significant voxels are shown. Significant areas included: BA 47, 38, 21/22, 41/42, 44/45 and 6.

Fig. 9. VLSM maps showing brain correlates of processing foot-related stimuli. Only significant voxels are shown. Significant areas included: BA 47, 21/22, 42, 44/45 and 4/6.

4. Discussion

There has been much discussion in recent years over whether an adequate counterpart to the macaque MNS exists in humans. The most ‘human’ component of this putative system – embodied semantics – has been tested extensively with a myriad of linguistic tasks designed for several types of functional neuroimaging experiments. These experiments have reported activation in response to action-associated linguistic stimuli in several regions normally involved in the execution and observation of those actions (e.g., Pulvermüller et al., 2009; Esopenko et al., 2008; Kemmerer et al., 2008; Aziz-Zadeh et al., 2006; Buccino et al., 2005; Tettamanti et al., 2005; Hauk et al., 2004; Shtyrov et al., 2004). Critics have countered such conclusions by arguing that the reported overlap of such regions across tasks is inconsistent, within as well as across different studies (Fernandino and Iacoboni, 2010; Turella et al., 2009; de Zubicaray et al., 2008; Postle et al., 2008).

While fMRI, PET and TMS studies can help identify regions which may participate in such tasks, lesion data can help us determine whether certain brain regions are necessary for such processing to take place. In the current study, 27 patients with left-hemisphere lesions due to stroke were asked to match picture and word stimuli typically associated with the use of the hand, mouth, foot, or no body part at all. In Analysis 1, we grouped patients based on the presence or absence of lesions in four key sensorimotor regions (BA 4/6, 1/2/3, 21, and 44/45). For all four comparisons, patients with lesions in each of the regions were less accurate on Foot versus Foot Neutral items. In addition, patients with lesions in BA 1/2/3 showed a trend for reduced accuracy on Mouth versus Mouth Neutral items. Therefore, this analysis revealed that lesions in a range of key areas previously associated with the putative HMNS can lead to a deficit for understanding foot-related concepts.

It is unclear why the foot stimuli elicited a stronger effect than the other two body parts tested. One possibility is that ‘Foot’ items are less salient than items referring to other body parts, or perhaps not as easy to depict in 2D drawings. However, these arguments cannot be reconciled with the fact that the ‘Foot’ items in this study did not differ from items in the other categories with respect to imageability, frequency, objective visual complexity or item difficulty.

With respect to the mouth, there was a trend for lesions in BA 1/2/3 to result in lower accuracy for Mouth relative to Mouth Neutral stimuli, but the effect was not as striking as the Foot item comparison. Some authors have distinguished between types of mouth-related actions as being either communicative or ingestive, each of which may carry its own set of evolutionary implications (Ferrari et al., 2003; Möttönen et al., 2004). Our set of stimuli did not allow us to match items based on this distinction.

Finally, what about the hand? In a previous study testing only manipulability and using stimuli from the same picture corpus, left-hemisphere patients were less accurate at naming pictures and repeating words referring to manipulable stimuli, while control participants were more accurate at naming those same items (Arévalo et al., 2007). However, that study differed from the current study in that the task required the oral production of a word (not only comprehension) and no bar pressing was involved. Previous studies have suggested that responding to stimuli referring to a specific effector (e.g., hand) by using that effector (e.g., manually pressing a space bar) can influence performance, but results have been mixed. While some authors have found an interference effect (i.e., slower RTs to hand items relative to foot items when responding with the hand; Sato et al., 2008), others have found a facilitation effect (e.g., Pulvermüller et al., 2001; Scorolli and Borghi, 2006). Although our study focused on accuracy rather than RT, we cannot entirely rule out the possibility that manually pressing a space bar could have facilitated responses to the Hand items (relative to the neutral items), thus reducing a difference in performance on Hand versus Hand Neutral items.

One possible way to avoid this problem is to have participants use the congruent effector for each body-associated set of items (i.e., respond to hand items with the hand, to mouth items with the mouth, and to foot items with the foot). However, this design choice was not possible for the current study, given that some patients had limited mobility, in addition to deficits in attention and memory. It is plausible that such inherent differences between hand, mouth and foot stimuli could be driving some of the results obtained with the current task, and future studies will consider these issues.

Clearly, dividing patients according to presence or absence of a lesion has implicit limitations. For example, it cannot tell us whether lesions of different sizes or lesions confined to specific subregions of the target area would have a greater or lesser impact on performance. Therefore, in Analysis 2, we used VLSM (Bates et al., 2003), which analyzes data on a voxel-by-voxel basis rather than restricting comparisons to specific regions (BA or otherwise). Under a strict statistical correction procedure, only the Hand and Foot maps had significant voxels. These regions included left BA 6 (premotor cortex), BA 21/22 (middle and superior temporal cortex), BA 44/45 (posterior inferior frontal gyrus or Broca’s area), BA 42 (auditory association cortex) and BA 47 (inferior frontal gyrus). In addition, the Hand map included BA 38 (temporopolar cortex) and BA 41 (primary auditory cortex), while the Foot map included BA 4 (primary motor cortex). Therefore, two of the effector-associated maps revealed significant voxels in most regions tested in Analysis 1 (with the exception of BA 1/2/3), as well as in some additional left frontal and temporal regions. There was no evidence for any type of somatotopic organization of the effectors.

It is unclear why BA 1/2/3 did not appear as an area of significant involvement in the VLSM maps. It is possible that voxels in this region did not survive the strict statistical threshold we established for our analyses. The same threshold issue might be at play when considering the third body map, Mouth. Alternatively, the difference across the three maps might reflect the quality of the stimuli or the nature of the effectors themselves, as discussed above. It is important to point out that VLSM maps are limited by the power available in certain regions of the brain relative to others. This limited our ability to identify lesions in very dorsal ‘foot’ regions, relative to more ventral ‘hand’ and ‘mouth’ regions. However, as illustrated in the power map shown in Fig. 7, there was adequate power to detect differences relating to HMNS involvement in key regions previously implicated in this system.

In sum, our results suggest that there is an interaction between motor networks and the language network in humans, and that these associations do not seem to be confined to a particular region in premotor/motor cortex. Rather, motor-language areas appear to be spread over a range of different cortical regions, which in this study can only be confirmed for the left hemisphere. Damage to certain regions of this language-motor network will not completely block patients’ ability to process motor-associated concepts, but can result in lower relative accuracy on some effector-associated stimuli. One issue to consider is that this task as it stands cannot tell us whether the degree of engagement of motor regions reflects basic semantic processing in and of itself, or whether it is due to post-comprehension cognitive operations, such as motor imagery (Boulenger et al., 2006; Kemmerer and Gonzalez-Castillo, 2010; Tomasino et al., 2008). Perhaps motor imagery is a strategy used by some people but not others, and this type of individual variation may yield different results and hence different interpretations, especially when relying on group results. Future work could focus on disentangling such possible processing strategies with specially designed tasks sensitive to such distinctions.

Our current results have implications for rehabilitation work as well. Motor imagery, along with linguistic and motor tasks, has been of interest to investigators who are working on developing rehabilitation therapies for stroke patients. These groups’ aim is to facilitate language through motor tasks (or vice versa), by taking advantage of the connections between the language and action systems in the brain (e.g., Sharma et al., 2009, 2006; Pulvermüller and Berthier, 2008; Buxbaum et al., 2005; Johnson-Frey, 2004; Catani et al., 2012).

Although there is abundant evidence for the existence of an MNS in the macaque and a similar counterpart in humans, we suggest that the third component proposed for the human MNS – embodied semantics – is not controlled by the same regions that subserve the execution and observation of actions. In agreement with several studies, our lesion data suggest that a number of regions in premotor and motor cortex, as well as additional regions in frontal and temporal cortex, play a complementary rather than central role in processing words referring to motor-related concepts.

Acknowledgments

Support during the preparation of this work was provided by NIH/NIDCD 3 R01 DC00216, by the Department of Veterans Affairs Medical Research, the National Institute of Neurological Disorders and Stroke (NS040813), and the National Institute on Deafness and other Communication Disorders (DC00216). We would like to thank all the patients who participated in the study.

Appendix

Appendix A.

Hand items. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 Book Object Easy 6.08 8619 Typewriter
2 Camera Object Easy 3.61 16,408 Book
3 Comb Object Easy 1.80 28,324 Ax
4 Hammer Object Easy 2.49 9533 Pencil
5 Key Object Easy 4.47 7493 Knife
6 Knife Object Easy 3.81 8773 Hammer
7 Pencil Object Easy 3.00 7899 Key
8 Typewriter Object Easy 2.49 28,850 Camera
9 Ax Object Hard 2.30 7849 Paintbrush
10 Drill Object Hard 2.20 16,254 Pencil sharpener
11 Lock Object Hard 2.77 9706 Yoyo
12 Paintbrush Object Hard .69 7932 Comb
13 Pencil sharpener Object Hard .00 19,617 Screw driver
14 Plug Object Hard 2.30 11,385 Lock
15 Screwdriver Object Hard 1.39 9051 Drill
16 Yoyo Object Hard .00 8066 Plug
17 Cut Action Easy 5.25 15,235 Operate
18 Paint Action Easy 4.29 22,022 Fold
19 Rake Action Easy 1.95 15,121 Write
20 Scoop Action Easy 2.08 24,485 Squeeze
21 Squeeze Action Easy 3.37 17,216 Tie
22 Tie Action Easy 4.13 23,682 Conduct
23 Type Action Easy 2.89 19,194 Rake
24 Zip Action Easy 1.10 24,128 Scoop
25 Break Action Hard 5.44 21,546 Cut
26 Conduct Action Hard 3.66 13,067 Dust
27 Dust Action Hard 2.20 13,403 Zip
28 Fold Action Hard 3.66 24,426 Sew
29 Operate Action Hard 4.42 21,850 Paint
30 Sew Action Hard 2.49 23,884 Unlock
31 Unlock Action Hard 2.77 13,709 Type
32 Write Action Hard 6.14 16,774 Break

Appendix B.

Hand control items. These are the neutral (not body-related) items matched and compared to the Hand items in Appendix A. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 Airplane Object Easy 1.95 16,810 Rain
2 Bridge Object Easy 4.21 27,543 Fence
3 Castle Object Easy 3.33 22,746 Helicopter
4 Fence Object Easy 3.43 17,349 Castle
5 House Object Easy 6.41 18,069 Moon
6 Lightning Object Easy 2.71 30,782 Airplane
7 Moon Object Easy 4.09 3730 Lightning
8 Rain Object Easy 4.29 20,795 Pool
9 Chimney Object Hard 2.40 9730 Pillar
10 Hinge Object Hard 1.61 6973 Fire hydrant
11 Igloo Object Hard .69 9673 Bathtub
12 Pillar Object Hard 2.83 11,413 Statue
13 Statue Object Hard 3.18 7359 Submarine
14 Submarine Object Hard 2.89 12,481 Tractor
15 Tractor Object Hard 2.49 9518 Windmill
16 Windmill Object Hard 2.30 12,430 Chimney
17 Bow Action Easy 2.83 15,564 Crawl
18 Dive Action Easy 2.64 16,005 Fly
19 Fly Action Easy 4.58 13,178 Sail
20 Hug Action Easy 2.49 16,095 Think
21 Sail Action Easy 3.05 18,904 Bow
22 Sit Action Easy 6.22 18,449 Sleep
23 Sleep Action Easy 4.87 33,733 Watch
24 Surf Action Easy .00 20,492 Snow
25 Crash Action Hard 3.00 8351 Surf
26 Curtsey Action Hard .69 14,133 Wag
27 Drip Action Hard 2.40 15,971 Melt
28 Melt Action Hard 3.22 19,825 Sweat
29 Snow Action Hard 1.61 44,104 Curtsey
30 Sweat Action Hard 2.89 16,947 Erupt
31 Think Action Hard 7.60 25,052 Stand
32 Wag Action Hard 1.61 19,445 Hug

Appendix C.

Mouth items. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 Lips Object Easy .00 6586 Teeth
2 Teeth Object Hard 1.39 8898 Lips
3 Bite Action Easy 2.49 18,076 Lick
4 Blow Action Easy 3.33 24,562 Sing
5 Chew Action Easy 4.09 31,961 Suck
6 Cry Action Easy 4.37 23,644 Kiss
7 Kiss Action Easy 4.44 19,790 Blow
8 Laugh Action Easy 4.80 22,897 Talk
9 Lick Action Easy 5.09 40,153 Laugh
10 Sing Action Easy 5.14 39,099 Cry
11 Smile Action Hard 3.05 21,375 Bite
12 Suck Action Hard 3.14 27,347 Yell
13 Talk Action Hard 3.61 32,379 Smile
14 Yell Action Hard 6.24 15,863 Chew

Appendix D.

Mouth control items. These are the neutral (not body-related) items matched and compared to the Mouth items in Appendix C. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 Hinge Object Hard 1.61 6973 Fire hydrant
2 Igloo Object Hard .69 9673 Bathtub
3 Burn Action Easy 4.49 31,906 Crash
4 Sail Action Easy 3.05 18,904 Bow
5 Sleep Action Easy 4.87 33,733 Watch
6 Surf Action Easy .00 20,492 Snow
7 Watch Action Easy 5.53 25,732 Burn
8 Erupt Action Hard 1.95 27,002 Wait
9 Hide Action Hard 4.63 25,967 Drip
10 Melt Action Hard 3.22 19,825 Sweat
11 Snow Action Hard 1.61 44,104 Curtsey
12 Stand Action Hard 6.15 19,300 Sit
13 Think Action Hard 7.60 25,052 Stand
14 Wait Action Hard 5.77 21,443 Hide

Appendix E.

Foot items. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 Boot Object Easy 3.69 8857 Roller skate
2 Foot Object Easy 5.79 7638 Shoe
3 Roller skate Object Easy .00 16,620 Skateboard
4 Shoe Object Easy 4.38 14,105 Slipper
5 Skateboard Object Easy .69 14,225 Boot
6 Heel Object Hard 3.40 14,448 Stairs
7 Slipper Object Hard 2.30 13,837 Unicycle
8 Stairs Object Hard 3.81 11,221 Toe
9 Toe Object Hard 3.40 27,602 Heel
10 Unicycle Object Hard .00 15,263 Foot
11 Chase Action Easy 3.05 20,541 March
12 Kick Action Easy 3.76 17,222 Jump
13 Skate Action Easy 1.39 17,040 Trip
14 Walk Action Easy 5.74 14,385 Skate
15 Jump Action Hard 4.22 15,496 Walk
16 March Action Hard 3.43 33,014 Slip
17 Slip Action Hard 4.13 27,692 Chase
18 Trip Action Hard 2.08 20,799 Kick

Appendix F.

Foot control items. These are the neutral (not body-related) items matched and compared to the Foot items in Appendix E. Each picture was presented twice to each participant, once with a matching sound and once with a ‘Catch Sound’. Freq: frequency of the target word; OVC: objective visual complexity of the picture representing each word.

Item Picture Object/action Easy/hard Freq OVC Catch Sound
1 House Object Easy 6.41 18,069 Moon
2 Lightning Object Easy 2.71 30,782 Airplane
3 Moon Object Easy 4.09 3730 Lightning
4 Hinge Object Hard 1.61 6973 Fire hydrant
5 Igloo Object Hard .69 9673 Bathtub
6 Lighthouse Object Hard 1.39 31,692 Hinge
7 Statue Object Hard 3.18 7359 Submarine
8 Submarine Object Hard 2.89 12,481 Tractor
9 Tractor Object Hard 2.49 9518 Windmill
10 Windmill Object Hard 2.30 12,430 Chimney
11 Bow Action Easy 2.83 15,564 Crawl
12 Burn Action Easy 4.49 31,906 Crash
13 Fly Action Easy 4.58 13,178 Sail
14 Crash Action Hard 3.00 8351 Surf
15 Curtsey Action Hard .69 14,133 Wag
16 Drip Action Hard 2.40 15,971 Melt
17 Snow Action Hard 1.61 44,104 Curtsey
18 Think Action Hard 7.60 25,052 Stand

References

  1. Arévalo A. Teasing apart actions and objects: A picture naming study. CRL Newsletter. 2002 May;14(2). [Google Scholar]
  2. Arévalo A, Perani D, Cappa SF, Butler A, Bates E, Dronkers N. Action and object processing in aphasia: From nouns and verbs to the effect of manipulability. Brain and Language. 2007;100(1):79–94. doi: 10.1016/j.bandl.2006.06.012. [DOI] [PubMed] [Google Scholar]
  3. Arévalo A, Butler A, Perani D, Cappa S, Bates E. Technical Report CRL-0401. La Jolla: University of California, San Diego, Center for Research in Language; 2004. Introducing the Gesture Norming Study: A Tool for Understanding On-line Word and Picture Processing. [Google Scholar]
  4. Aziz-Zadeh L, Wilson SM, Rizzolatti G, Iacoboni M. Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Current Biology. 2006;16(1):1818–1823. doi: 10.1016/j.cub.2006.07.060. [DOI] [PubMed] [Google Scholar]
  5. Bates E, Andonova E, D’Amico S, Jacobsen T, Kohnert K, Lu C-C, et al. Introducing the CRL International Picture-Naming Project (CRL-IPNP). Center for Research in Language Newsletter. 2000;12(1). La Jolla: University of California San Diego. [Google Scholar]
  6. Bates E, Wilson S, Saygin AP, Dick F, Sereno M, Knight RT, et al. Voxel-based lesion-symptom mapping. Nature Neuroscience. 2003;6(1):448–450. doi: 10.1038/nn1050. [DOI] [PubMed] [Google Scholar]
  7. Boulenger V, Roy AC, Paulignan Y, Deprez V, Jeannerod M, Nazir TA. Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Journal of Cognitive Neuroscience. 2006;18(10):1607–1615. doi: 10.1162/jocn.2006.18.10.1607. [DOI] [PubMed] [Google Scholar]
  8. Brett M, Leff AP, Rorden C, Ashburner J. Spatial normalization of brain images with focal lesions using cost function masking. NeuroImage. 2001;14(1):486–500. doi: 10.1006/nimg.2001.0845. [DOI] [PubMed] [Google Scholar]
  9. Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, Gallese V, et al. Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience. 2001;13(1):400–404. [PubMed] [Google Scholar]
  10. Buccino G, Riggio L, Melli G, Binkofsi F, Gallese V, Rizzolatti G. Listening to action-related sentences modulates the activity of the motor system: A combined TMS and behavioral study. Cognitive Brain Research. 2005;24(1):355–363. doi: 10.1016/j.cogbrainres.2005.02.020. [DOI] [PubMed] [Google Scholar]
  11. Buxbaum LJ, Kyle KM, Menon R. On beyond mirror neurons: Internal representations subserving imitation and recognition of skilled object-related actions in humans. Cognitive Brain Research. 2005;25(1):226–239. doi: 10.1016/j.cogbrainres.2005.05.014. [DOI] [PubMed] [Google Scholar]
  12. Catani M, Stuss DT. At the forefront of clinical neuroscience. Cortex. 2012;48(1):1–6. doi: 10.1016/j.cortex.2011.11.001. [DOI] [PubMed] [Google Scholar]
  13. Catani M, Dell’Acqua F, Vergani F, Malik F, Hodge H, Roy P, et al. Short frontal lobe connections of the human brain. Cortex. 2012;48(2):273–291. doi: 10.1016/j.cortex.2011.12.001. [DOI] [PubMed] [Google Scholar]
  14. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988. [Google Scholar]
  15. Cohen J. A power primer. Psychological Bulletin. 1992;112(1):155–159. doi: 10.1037//0033-2909.112.1.155. [DOI] [PubMed] [Google Scholar]
  16. Coltheart M. The MRC psycholinguistic database. The Quarterly Journal of Experimental Psychology. 1981;33(4):497–505. [Google Scholar]
  17. Collins DL, Neelin P, Peters TM, Evans AC. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography. 1994;18(1):192–205. [PubMed] [Google Scholar]
  18. DeArmond SJ, Fusco MM, Dewey MM. Structure of the Human Brain: A Photographic Atlas. 2nd ed. New York: Oxford University Press; 1976. [Google Scholar]
  19. de Zubicaray G, Postle N, McMahon K, Meredith M, Ashton R. Mirror neurons, the representation of word meaning, and the foot of the third left frontal convolution. Brain and Language. 2008 doi: 10.1016/j.bandl.2008.09.011. [DOI] [PubMed] [Google Scholar]
  20. Dinstein I, Thomas C, Behrmann M, Heeger DJ. A mirror up to nature. Current Biology. 2008;18(1):R13–R18. doi: 10.1016/j.cub.2007.11.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Understanding motor events: A neurophysiological study. Experimental Brain Research. 1992;91(1):176–180. doi: 10.1007/BF00230027. [DOI] [PubMed] [Google Scholar]
  22. Esopenko C, Borowsky R, Cummine J, Sarty G. Mapping the semantic homunculus: A functional and behavioural analysis of overt semantic generation. Brain Topography. 2008;21(1):22–35. doi: 10.1007/s10548-008-0043-8. [DOI] [PubMed] [Google Scholar]
  23. Fernandino L, Iacoboni M. Are cortical motor maps based on body parts or coordinated actions? Implications for embodied semantics. Brain and Language. 2010;112(1):44–53. doi: 10.1016/j.bandl.2009.02.003. [DOI] [PubMed] [Google Scholar]
  24. Ferrari PF, Gallese V, Rizzolatti G, Fogassi L. Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. European Journal of Neuroscience. 2003;17(1):1703–1714. doi: 10.1046/j.1460-9568.2003.02601.x. [DOI] [PubMed] [Google Scholar]
  25. Fischer MH, Zwaan RA. Embodied language: A review of the role of the motor system in language comprehension. The Quarterly Journal of Experimental Psychology. 2008;61(6):825–850. doi: 10.1080/17470210701623605. [DOI] [PubMed] [Google Scholar]
  26. Friedrich FJ, Egly R, Rafal RD, Beck D. Spatial attention deficits in humans: A comparison of superior parietal and temporal–parietal junction lesions. Neuropsychology. 1998;12(1):193–207. doi: 10.1037//0894-4105.12.2.193. [DOI] [PubMed] [Google Scholar]
  27. Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex. Brain. 1996;119(1):593–609. doi: 10.1093/brain/119.2.593. [DOI] [PubMed] [Google Scholar]
  28. Gallese V, Lakoff G. The brain’s concepts: The role of the sensory-motor system in reason and language. Cognitive Neuropsychology. 2005;22(1):455–479. doi: 10.1080/02643290442000310. [DOI] [PubMed] [Google Scholar]
  29. Gazzola V, Keysers C. The observation and execution of actions share motor and somatosensory voxels in all tested subjects: Single-subject analyses of unsmoothed fMRI data. Cerebral Cortex. 2009;19(1):1239–1255. doi: 10.1093/cercor/bhn181. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Hauk O, Johnsrude I, Pulvermüller F. Somatotopic representation of action words in human motor and premotor cortex. Neuron. 2004;41(1):301–307. doi: 10.1016/s0896-6273(03)00838-9. [DOI] [PubMed] [Google Scholar]
  31. Johnson-Frey SH. Stimulation through simulation? Motor imagery and functional reorganization in hemiplegic stroke patients. Brain and Cognition. 2004;55(1):328–331. doi: 10.1016/j.bandc.2004.02.032. [DOI] [PubMed] [Google Scholar]
  32. Kemmerer D, Gonzalez-Castillo J. The two-level theory of verb meaning: An approach to integrating the semantics of action with the mirror neuron system. Brain and Language. 2010;112(1):54–76. doi: 10.1016/j.bandl.2008.09.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Kemmerer D, Gonzalez Castillo J, Talavage T, Patterson S, Wiley C. Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain and Language. 2008;107(1):16–43. doi: 10.1016/j.bandl.2007.09.003. [DOI] [PubMed] [Google Scholar]
  34. Kertesz A. Western Aphasia Battery. New York: Grune and Stratton; 1982. [Google Scholar]
  35. Kimberg DY, Coslett HB, Schwartz MF. Power in voxel-based lesion-symptom mapping. Journal of Cognitive Neuroscience. 2007;19(1):1067–1080. doi: 10.1162/jocn.2007.19.7.1067. [DOI] [PubMed] [Google Scholar]
  36. Knight RT, Scabini D, Woods DL, Clayworth C. The effects of lesions of superior temporal gyrus and inferior parietal lobe on temporal and vertex components of the human AEP. Electroencephalography and Clinical Neurophysiology. 1988;70(1):499–509. doi: 10.1016/0013-4694(88)90148-4. [DOI] [PubMed] [Google Scholar]
  37. MacWhinney B. The emergence of grammar from embodiment. In: MacWhinney B, editor. The Emergence of Language. Mahwah, NJ: Lawrence Erlbaum; 1999. pp. 213–256. [Google Scholar]
  38. Mahon BZ, Caramazza A. Concepts and categories: A cognitive neuropsychological perspective. Annual Review of Psychology. 2009;60(1):27–51. doi: 10.1146/annurev.psych.60.110707.163532. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Mahon BZ, Milleville S, Negri GAL, Rumiati RI, Martin A, Caramazza A. Action-related properties of objects shape object representations in the ventral stream. Neuron. 2007;55(1):507–520. doi: 10.1016/j.neuron.2007.07.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Möttönen R, Järveläinen J, Sams M, Hari R. Viewing speech modulates activity in the left SI mouth cortex. NeuroImage. 2004;24(1):731–737. doi: 10.1016/j.neuroimage.2004.10.011. [DOI] [PubMed] [Google Scholar]
  41. Negri GAL, Rumiati RI, Zadini A, Ukmar M, Mahon BZ, Caramazza A. What is the role of motor simulation in action and object recognition? Evidence from Apraxia. Cognitive Neuropsychology. 2007;24(8):795–816. doi: 10.1080/02643290701707412. [DOI] [PubMed] [Google Scholar]
  42. Nishitani N, Hari R. Temporal dynamics of cortical representation for action. Proceedings of the National Academy of Sciences USA. 2000;97(1):913–918. doi: 10.1073/pnas.97.2.913. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Nishitani N, Hari R. Viewing lip forms: Cortical dynamics. Neuron. 2002;36(1):1211–1220. doi: 10.1016/s0896-6273(02)01089-9. [DOI] [PubMed] [Google Scholar]
  44. Penfield W, Boldrey E. Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain. 1937;60:389–443. [Google Scholar]
  45. Pobric G, Hamilton AFde C. Action understanding requires the left inferior frontal cortex. Current Biology. 2006;16(1):524–529. doi: 10.1016/j.cub.2006.01.033. [DOI] [PubMed] [Google Scholar]
  46. Postle N, McMahon KL, Ashton R, Meredith M, de Zubicaray GI. Action word meaning representations in cytoarchitectonically defined primary and premotor cortices. NeuroImage. 2008;43(1):634–644. doi: 10.1016/j.neuroimage.2008.08.006. [DOI] [PubMed] [Google Scholar]
  47. Pulvermüller F. Brain mechanisms linking language and action. Nature Reviews Neuroscience. 2005;6(1):576–582. doi: 10.1038/nrn1706. [DOI] [PubMed] [Google Scholar]
  48. Pulvermüller F, Berthier ML. Aphasia therapy on a neuroscience basis. Aphasiology. 2008;22(6):563–599. doi: 10.1080/02687030701612213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Pulvermüller F, Harle M, Hummel F. Walking or talking: Behavioral and neurophysiological correlates of action verb processing. Brain and Language. 2001;78(1):143–168. doi: 10.1006/brln.2000.2390. [DOI] [PubMed] [Google Scholar]
  50. Pulvermüller F, Kherif F, Hauk O, Mohr B, Nimmo-Smith I. Distributed cell assemblies for general lexical and category-specific semantic processing as revealed by fMRI cluster analysis. Human Brain Mapping. 2009;30(12):3837–3850. doi: 10.1002/hbm.20811. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Rizzolatti G, Craighero L. The mirror-neuron system. Annual Review of Neuroscience. 2004;27(1):169–192. doi: 10.1146/annurev.neuro.27.070203.144230. [DOI] [PubMed] [Google Scholar]
  52. Rizzolatti G, Fadiga L, Matelli M, Bettinardi V, Paulesu E, Perani D, et al. Localization of grasp representations in humans by PET: 1. Observation versus execution. Experimental Brain Research. 1996;111(1):246–252. doi: 10.1007/BF00227301. [DOI] [PubMed] [Google Scholar]
  53. Rizzolatti G, Fogassi L, Gallese V. Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience. 2001;2(1):661–670. doi: 10.1038/35090060. [DOI] [PubMed] [Google Scholar]
  54. Rizzolatti G, Luppino G. The cortical motor system. Neuron. 2001;31(1):889–901. doi: 10.1016/s0896-6273(01)00423-8. [DOI] [PubMed] [Google Scholar]
  55. Rorden C, Brett M. Stereotaxic display of brain lesions. Behavioural Neurology. 2000;12(1):191–200. doi: 10.1155/2000/421719. [DOI] [PubMed] [Google Scholar]
  56. Rosci C, Chiesa V, Laiacona M, Capitani E. Apraxia is not associated to a disproportionate naming impairment for manipulable objects. Brain and Cognition. 2003;53(1):412–415. doi: 10.1016/s0278-2626(03)00156-8. [DOI] [PubMed] [Google Scholar]
  57. Sanes JN, Schieber MH. Orderly somatotopy in primary motor cortex: Does it exist? NeuroImage. 2001;13(1):968–974. doi: 10.1006/nimg.2000.0733. [DOI] [PubMed] [Google Scholar]
  58. Sato M, Mengarelli M, Riggio L, Gallese V, Buccino G. Task related modulation of the motor system during language processing. Brain and Language. 2008;105(2):83–90. doi: 10.1016/j.bandl.2007.10.001. [DOI] [PubMed] [Google Scholar]
  59. Scorolli C, Borghi AM. Sentence comprehension and action: Effector specific modulation of the motor system. Brain Research. 2006;1130(1):119–124. doi: 10.1016/j.brainres.2006.10.033. [DOI] [PubMed] [Google Scholar]
  60. Schubotz RI, von Cramon DY. Functional–anatomical concepts of human premotor cortex: Evidence from fMRI and PET studies. NeuroImage. 2003;20(1):S120–S131. doi: 10.1016/j.neuroimage.2003.09.014. [DOI] [PubMed] [Google Scholar]
  61. Sharma N, Pomeroy VM, Baron J-C. Motor imagery. A backdoor to the motor system after stroke? Stroke. 2006;37(1):1941–1952. doi: 10.1161/01.STR.0000226902.43357.fc. [DOI] [PubMed] [Google Scholar]
  62. Sharma N, Simmons LH, Jones S, Day DJ, Carpenter A, Pomeroy VM, et al. Motor imagery after subcortical stroke: A functional magnetic resonance imaging study. Stroke. 2009;40(1):1315–1324. doi: 10.1161/STROKEAHA.108.525766. [DOI] [PubMed] [Google Scholar]
  63. Shtyrov Y, Hauk O, Pulvermüller F. Distributed neuronal networks for encoding category-specific semantic information: The mismatch negativity to action words. European Journal of Neuroscience. 2004;19(4):1083–1092. doi: 10.1111/j.0953-816x.2004.03126.x. [DOI] [PubMed] [Google Scholar]
  64. Siri S, Tettamanti M, Cappa SF, Della Rosa P, Saccuman C, Scifo P, et al. The neural substrate of naming events: Effects of processing demands but not of grammatical class. Cerebral Cortex. 2008;18(1):171–177. doi: 10.1093/cercor/bhm043. [DOI] [PubMed] [Google Scholar]
  65. Székely A, Bates E. Objective visual complexity as a variable in studies of picture naming. CRL Newsletter. 2000 Jul;12(2). [Google Scholar]
  66. Székely A, D’Amico S, Devescovi A, Federmeier K, Herron D, Iyer G, et al. Timed action and object naming. Cortex. 2005;41(1):7–26. doi: 10.1016/s0010-9452(08)70174-6. [DOI] [PubMed] [Google Scholar]
  67. Tettamanti M, Buccino G, Saccuman MC, Gallese V, Danna M, Scifo P, et al. Listening to action-related sentences activated fronto-parietal motor circuits. Journal of Cognitive Neuroscience. 2005;17(2):273–281. doi: 10.1162/0898929053124965. [DOI] [PubMed] [Google Scholar]
  68. Tomasino B, Fink GR, Sparing R, Dafotakis M, Weiss PH. Action verbs and the primary motor cortex: A comparative TMS study of silent reading, frequency judgments, and motor imagery. Neuropsychologia. 2008;46(1):1915–1926. doi: 10.1016/j.neuropsychologia.2008.01.015. [DOI] [PubMed] [Google Scholar]
  69. Tranel D, Kemmerer D, Adolphs R, Damasio H, Damasio AR. Neural correlates of conceptual knowledge for actions. Cognitive Neuropsychology. 2003;20(1):409–432. doi: 10.1080/02643290244000248. [DOI] [PubMed] [Google Scholar]
  70. Tremblay C, Robert M, Pascual-Leone A, Lepore F, Nguyen DK, Carmant L, et al. Action observation and execution: Intracranial recordings in a human subject. Neurology. 2004;63(1):937–938. doi: 10.1212/01.wnl.0000137111.16767.c6. [DOI] [PubMed] [Google Scholar]
  71. Turella L, Pierno AC, Tubaldi F, Castiello U. Mirror neurons in humans: Consisting or confounding evidence? Brain and Language. 2009;108(1):10–21. doi: 10.1016/j.bandl.2007.11.002. [DOI] [PubMed] [Google Scholar]
