Changing stroke rehab and research worldwide now. Time is Brain! Trillions of neurons DIE each day because there are NO effective hyperacute therapies besides tPA (only 12% effective). I have 523 posts on hyperacute therapy, enough for researchers to spend decades proving them out. These are my personal ideas and blog on stroke rehabilitation and stroke research. Do not attempt any of these without checking with your medical provider. Unless you join me in agitating, when you need these therapies they won't be there.

What this blog is for:

My blog is not to help survivors recover; it is to have the 10 million yearly stroke survivors light fires underneath their doctors, stroke hospitals and stroke researchers to get stroke solved. 100% recovery. The stroke medical world is completely failing at that goal; they don't even have it as a goal. Shortly after getting out of the hospital, having received NO information on the process or protocols of stroke rehabilitation and recovery, I started searching the internet and found that no other survivor received useful information either. This is an attempt to cover all the stroke rehabilitation information that should be readily available to survivors so they can talk with informed knowledge to their medical staff. It lays out what needs to be done to get stroke survivors closer to 100% recovery. It's quite disgusting that this information is not available from every stroke association and doctors' group.


Thursday, January 27, 2022

Multiple representations of the body schema for the same body part

With different representations, ask your doctor EXACTLY how this can be used to recover movement.

Multiple representations of the body schema for the same body part


Edited by Ranulfo Romo, Instituto de Fisiologia Celular, Universidad Nacional Autonoma de Mexico, Mexico City, Mexico; received July 5, 2021; accepted December 1, 2021

Significance

Accurate motor control depends on maps of the body in the brain, called the body schema. Disorders of the body schema cause motor deficits. Although we often execute actions with different motor systems such as the eye and hand, how the body schema operates during such actions is unknown. In this study, participants simultaneously directed eye and hand movements to the same body part. These two movements were found to be guided by different body maps. This finding demonstrates multiple motor system–specific representations of the body schema, suggesting that the choice of motor system toward one’s body can determine which of the brain’s body maps is observed. This may offer a new way to visualize patients’ body schema.

Abstract

Purposeful motor actions depend on the brain’s representation of the body, called the body schema, and disorders of the body schema have been reported to show motor deficits. The body schema has been assumed for almost a century to be a common body representation supporting all types of motor actions, and previous studies have considered only a single motor action. Although we often execute multiple motor actions, how the body schema operates during such actions is unknown. To address this issue, I developed a technique to measure the body schema during multiple motor actions. Participants made simultaneous eye and reach movements to the same location of 10 landmarks on their hand. By analyzing the internal configuration of the locations of these points for each of the eye and reach movements, I produced maps of the mental representation of hand shape. Despite these two movements being simultaneously directed to the same bodily location, the resulting hand map (i.e., a part of the body schema) was much more distorted for reach movements than for eye movements. Furthermore, the weighting of visual and proprioceptive bodily cues to build up this part of the body schema differed for each effector. These results demonstrate that the body schema is organized as multiple effector-specific body representations. I propose that the choice of effector toward one’s body can determine which body representation in the brain is observed and that this visualization approach may offer a new way to understand patients’ body schema.

Since the classic work of Head and Holmes in the early 1900s (1), it has been widely accepted that purposeful motor actions rely on a spatial representation of the body in the brain, called the body schema (2–6). Without the body schema, we would be unable to accurately and safely control our body parts. Indeed, impairment of the body schema leads to a variety of disorders ranging from motor dysfunction to delusions that the affected body part belongs to another person (7, 8). The classic notion of the body schema had long been used to describe a representation of the location of body parts in external space derived from information about body posture specified by afferent signals (i.e., proprioceptive signals) and from information about efferent copies of motor commands (9–11). However, current views on the body schema suggest that the ability to localize body parts in external space requires not only afferent and efferent information but also stored information about the body’s metric properties, such as body part size and shape, because no afferent or efferent signals directly inform the brain about the metric properties of body parts (12, 13). This stored body metric information is provided by an implicit body representation in the brain (12). Such a metric representation serves not only perception but also action (13). Based on these current views, the body schema can be defined as a representation of the location of body parts in space that is constructed by combining afferent and efferent information with stored information about the body metrics. Moreover, the body schema has been suggested to be involved in a global adjustment through afferent signals from various body parts (14). Indeed, afferent signals coming from all body parts, such as the eye and foot, function together in modulating motor control (15, 16). Taken together, the body schema contains the spatial configuration of the body used for the guidance of action and functions in a global way.

The accumulated research literature indicates that the body schema is constructed based on bodily signals from multiple sensory sources such as vision and proprioception (5, 6, 17–20). The spatial and temporal congruence of multisensory bodily signals from one’s body parts generates a single estimate of the body’s location (21–26). This is referred to as multisensory integration (27–29). The process underlying multisensory integration determines how much a given sensory modality contributes to the final estimate relative to a different sensory modality in a statistically optimal way (30–32). Supporting this multisensory integration model, many demonstrations show that the weighting of each sensory source of bodily signals varies with its reliability (33–35). The importance of multisensory integration for the body schema is illustrated in a cross-modal effect in which the position sense of one’s hand is influenced by vision of an artificial hand (36).
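As a rough illustration of the "statistically optimal" weighting referenced above, here is a minimal Python sketch of standard reliability-weighted (inverse-variance) cue integration. The specific numbers are invented for illustration only; the paper does not report the variances the brain assigns to vision and proprioception.

```python
def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Reliability-weighted (maximum-likelihood) fusion of two position cues."""
    # Each cue is weighted by its inverse variance (its reliability),
    # normalized so the two weights sum to 1.
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_prop
    # The fused estimate is more reliable than either cue alone.
    var_hat = 1 / (1 / var_vis + 1 / var_prop)
    return x_hat, var_hat

# Hypothetical numbers: vision locates the fingertip more reliably than
# proprioception, so the fused estimate lands closer to the visual cue.
x_hat, var_hat = integrate_cues(x_vis=10.0, var_vis=0.5, x_prop=12.0, var_prop=2.0)
print(x_hat, var_hat)  # 10.4 0.4
```

Lowering var_vis pulls the fused estimate further toward the visual cue, which is the sense in which each cue's weight tracks its reliability.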

The body schema is normally assumed to be a common body representation supporting all types of motor actions, and numerous studies have typically used a single motor task to explore aspects of the body schema (17, 19, 37–40). However, we often take multiple simultaneous actions with different effectors, such as the eye and hand, in our daily lives. In recent years, growing evidence has indicated that, when multiple actions are performed with different types of effectors, the action for each effector is guided by a different spatial representation of the outside world in the brain, such as motor responses to visual motion (41), localization of moving objects (42), and allocation of spatial attention (43, 44). These results suggest that there are multiple spatial maps of the outside world in the brain and that each of these spatial maps guides a different effector. This is surprising because it is thought that the spatial maps are represented in a common reference frame (i.e., eye-centered coordinates) regardless of whether the action is an eye movement or a reach movement (45–50). However, how the body schema operates during multiple actions with different effectors is unknown. Here, I systematically investigate the body schema mediating the spatial configuration of the hand when simultaneous eye and reach movements are made to that hand. The present results demonstrate that the body schema is organized as multiple effector-specific representations of the body.

The body schema contributes to the planning of motor actions toward one’s body parts. It seems natural that the planning of motor actions toward one’s body parts would use the same bodily information that allows us to perceive those body parts. However, bodily information has been suggested to undergo independent processing when used for ballistic motor responses as opposed to perceptual judgments (19, 51). Based on this finding, I developed a technique to measure the body schema during multiple motor actions by having participants make ballistic motor responses toward landmarks on a hand. The distance between the judged locations of two adjacent landmarks on the hand (e.g., the tip and knuckle of a single finger) depends only on the represented length of the body segment connecting them. Other sources of error, such as misjudgments of the knuckle angle, affect localization error for a single landmark (e.g., the distance between actual and judged locations at the tip of the finger) but preserve the relative positions of the landmarks. Thus, the body schema was isolated and measured by having participants make simultaneous eye and reach movements toward the location of 10 landmarks on their hand. By comparing the landing positions of eye and reach at these landmarks regardless of their true positions (Fig. 1C), I analyzed the internal spatial configuration of the hand representations for the eye and reach landing positions. The distances between these motor judgments for each effector are different from either constant or variable error of localization and allow me to estimate the internal structural representation of the body schema of the hand. This behavioral measurement of the body schema was combined with the cross-modal effect of a computer-generated hand on proprioceptive judgments (36). By using this effect, I was able to investigate whether the weighting of visual and proprioceptive cues to the location of the landmarks differed between eye and reach landing positions.
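To see why landmark-to-landmark distances isolate the represented hand structure, here is a minimal sketch with hypothetical coordinates: a constant localization error (such as a misjudged knuckle angle) shifts every judged landmark by the same vector, so it cancels when you take distances between judged positions.

```python
import numpy as np

# Hypothetical 2D landmark positions (cm) for one index finger.
knuckle_true = np.array([0.0, 0.0])
tip_true = np.array([0.0, 9.0])
bias = np.array([1.5, -0.8])  # shared misjudgment applied to both landmarks

knuckle_judged = knuckle_true + bias
tip_judged = tip_true + bias

# The shared bias drops out of the judged-to-judged distance, leaving only
# the represented segment length -- the quantity the body schema supplies.
represented_length = np.linalg.norm(tip_judged - knuckle_judged)
actual_length = np.linalg.norm(tip_true - knuckle_true)
print(represented_length, actual_length)  # 9.0 9.0
```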

Results

Experiment 1: With Vision of a Hand.

Participants placed their right hand palm down under a transparent board and wore a head-mounted display (HMD) that displayed visual stimuli in stereoscopic three dimensions (3D) (Fig. 1A). The HMD showed a realistic life-sized computer graphics (CG) hand (Fig. 1D). The CG hand overlapped the participant’s unseen right hand in the virtual environment and was configured similarly to the participant’s actual hand. While participants viewed the CG hand, they made simultaneous eye and reach (left forefinger) movements toward the location of 10 landmarks on their hidden right hand (the knuckles and tips of each finger). In such coordinated eye and reach movements, eye movement onset preceded hand movement onset (Fig. 1B; see SI Appendix, Fig. S1 for more details), as confirmed by previous studies (52–54). Based on a previous study (12), comparing the eye or reach landing position of different landmarks allowed me to build a spatial map of the mental representation of hand shape (i.e., hand map), which could then be compared with the actual hand shape. Fig. 1C shows an example: The targeted locations of the index fingertip and knuckle were used to calculate the represented index finger length (dotted red line) for comparison with its actual length (dotted black line). Before and after each block, a picture was taken to record the actual hand shape and to ensure that the hand had not moved.

To assess finger length, the distance between the average landing positions of each knuckle and fingertip was calculated from the thumb to the little finger. These distances were then averaged to estimate overall finger length. The estimated overall finger length underestimated the actual length for both eye and reach landing positions [t (11) = −5.75, P < 0.001, and t (11) = −7.49, P < 0.0001, respectively; Fig. 2A; see SI Appendix, Fig. S2 for more details], which is consistent with the results of the previous study (12). Intriguingly, the overall underestimation of finger length significantly differed between the eye and reach landing positions [paired t test, t (11) = 4.39, P < 0.01; Fig. 2A].
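A hedged sketch of the kind of analysis described here, with simulated data standing in for the real landing positions: per-participant percent over/underestimation, one-sample t tests against zero, and a paired t test between effectors. The means and SDs are invented, chosen only to mimic the reported direction of the effect.

```python
import numpy as np
from scipy import stats

def percent_overestimation(judged, actual):
    # Negative values indicate underestimation, as for finger length here.
    return 100.0 * (judged - actual) / actual

rng = np.random.default_rng(0)
actual = rng.uniform(7.0, 9.0, size=12)            # actual finger lengths (cm)
eye = actual * rng.normal(0.85, 0.05, size=12)     # eye landings: modestly short
reach = actual * rng.normal(0.70, 0.05, size=12)   # reach landings: shorter still

over_eye = percent_overestimation(eye, actual)
over_reach = percent_overestimation(reach, actual)

# One-sample t tests against zero ask whether each effector's map is
# distorted at all; the paired t test asks whether the two maps differ.
print(stats.ttest_1samp(over_eye, 0.0))
print(stats.ttest_1samp(over_reach, 0.0))
print(stats.ttest_rel(over_eye, over_reach))
```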

Fig. 2.

Percent overall overestimation of finger lengths and spacing between knuckles for experiments 1 through 4. (A) Finger lengths. The distance between the landing locations of each knuckle and fingertip was calculated to estimate the represented finger length, and the estimated finger length was averaged from the thumb to the little finger. (B) Knuckle spacings. The distance between pairs of adjacent knuckles was calculated as for finger length, and the estimated knuckle spacing was averaged among the index–thumb, the middle–index, the ring–middle, the little–ring, and the index–little knuckles. The dark green and light green symbols represent eye and reach movements, respectively. Results are the mean ± SE. *P < 0.05; **P < 0.01; ***P < 0.005; n.s.: not significant.

To assess hand width, the distance between pairs of adjacent knuckles was calculated as for finger length. These distances were then averaged to estimate overall knuckle spacing. In contrast to the overall underestimation of finger length, strong overall overestimation of knuckle spacing was observed [t (11) = 4.59, P < 0.001, for eye landing positions and t (11) = 6.62, P < 0.0001, for reach landing positions; Fig. 2B; see SI Appendix, Fig. S2 for more details], which is consistent with the results of previous work (12). Overall overestimation of knuckle spacing significantly differed between the eye and reach landing positions [paired t test, t (11) = −3.14, P < 0.01; Fig. 2B].

To assess the shape of the hand map in detail, generalized Procrustes superimposition (GPS) (55) was used to compare the actual configuration of landmarks from each participant’s right hand with the internal representation based on eye and reach landing positions (Fig. 3 A and B). GPS removes differences in location, rotation, and scale and thereby highlights differences in shape (55, 56). Analysis of these data indicated significant differences in mean shape between the actual hand and the resulting hand map for eye landing positions [Bonferroni-corrected Goodall’s F test: Goodall’s F(16, 352) = 5.50, P < 0.0001; Fig. 3A; see SI Appendix, SI Materials and Methods for further details] and reach landing positions [Goodall’s F(16, 352) = 17.75, P < 0.0001; Fig. 3B]. Although the shape of the hand map was distorted for both eye and reach landing positions, the shape was more similar to the actual shape of the hand in eye landing positions than in reach landing positions. The mean shape of the hand map significantly differed between eye and reach landing positions [Goodall’s F(16, 352) = 5.56, P < 0.0001].
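For readers who want to reproduce this style of shape comparison, SciPy ships a pairwise Procrustes alignment; generalized Procrustes superimposition iterates this pairwise step against a running mean shape until the mean converges. A minimal sketch with made-up landmark data:

```python
import numpy as np
from scipy.spatial import procrustes

# Two 10-landmark hand configurations (x, y rows): the actual hand and a
# noisily judged version of it. scipy.spatial.procrustes removes location,
# scale, and rotation and returns a disparity (sum of squared differences)
# between the aligned shapes, so only shape differences remain.
rng = np.random.default_rng(1)
actual_hand = rng.normal(size=(10, 2))
judged_hand = actual_hand + rng.normal(scale=0.2, size=(10, 2))

aligned_actual, aligned_judged, disparity = procrustes(actual_hand, judged_hand)
print(disparity)  # residual shape difference after alignment
```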

Fig. 3.

GPS of landmark positions for actual hands (black dots/solid lines) and the represented hand shape inferred from eye and reach movements (white dots/dotted lines). The solid line indicates the mean shape of the actual hand. The dotted line indicates the mean shape of the represented hand shape. (A and B) Experiment 1, (C and D) experiment 2, (E and F) experiment 3, and (G and H) experiment 4. The dark green and light green frames represent eye and reach movements, respectively.

One might argue that the difference in measured positions between eye and reach movements might reflect differential times of motor actions, rather than the difference in body representations, because eye and reach movements do not have the same speed or the same landing time (see SI Appendix, SI Results for details; SI Appendix, Fig. S3). However, this is unlikely. The initial movement end point was used to calculate the landing positions of the movement for each of the eye and reach movements. Moreover, I confirmed that there were significant differences in the overestimation of finger lengths and knuckle spacings between eye and reach, even though reach movements had latencies comparable to those of eye movements (see SI Appendix, SI Results for details; SI Appendix, Fig. S4). Thus, the difference in measured positions between eye and reach reflects a difference in the spatial representations of the body rather than the differential starting times of motor actions.

Experiment 2: Rotated Posture.

The results of experiment 1 could potentially reflect either a foreshortening of perspective in the near–far axis or motor biases in trunk-based coordinates for motor responses. To address these issues, a second experiment was conducted in which participants’ hands and the CG hand were rotated 90° counterclockwise relative to their trunk so that the fingers were pointing toward the left (Fig. 1E). If any effects independent of the hand map reproduce the results of experiment 1, these effects should be reversed in the rotated posture relative to the original posture used in experiment 1 (i.e., the fingers pointing away from the trunk): extended finger lengths and narrow hand widths. However, experiment 2 showed almost identical estimated finger lengths and hand widths in the rotated posture to those in the original posture. Finger length was underestimated overall even in the rotated posture [t (11) = −7.08, P < 0.0001, for eye landing positions and t (11) = −7.57, P < 0.0001, for reach landing positions; Fig. 2A; see SI Appendix, Fig. S2 for more details] and significantly differed between eye and reach landing positions [paired t test, t (11) = 4.64, P < 0.001; Fig. 2A]. Knuckle spacing was overestimated overall [t (11) = 3.92, P < 0.01, for eye landing positions and t (11) = 5.82, P < 0.001, for reach landing positions; Fig. 2B; see SI Appendix, Fig. S2 for more details] and significantly differed between eye and reach landing positions [paired t test, t (11) = −2.41, P < 0.05; Fig. 2B]. Analysis of GPS data revealed significant differences in the mean shape between the actual hand and the resulting hand map for eye landing positions [Goodall’s F(16, 352) = 4.57, P < 0.0001; Fig. 3C] and reach landing positions [Goodall’s F(16, 352) = 16.88, P < 0.0001; Fig. 3D]. As in experiment 1, the shape of the hand map was more similar to the actual shape of the hand in eye landing positions than in reach landing positions. The mean shape of the hand map significantly differed between eye and reach landing positions [Goodall’s F(16, 352) = 5.97, P < 0.0001]. Thus, these results demonstrate that the effects observed in the present study reflect the hand map in the brain rather than biases in head- or trunk-based coordinates for motor responses or in a foreshortening of perspective in the near–far axis.

Experiment 3: Without Vision of a Hand.

To investigate whether viewing a hand affected the shape of the hand map in experiments 1 and 2, the CG hand was not presented to participants and they were asked to make concurrent eye and reach movements to the location of 10 landmarks on their hidden right hand (Fig. 1F). Under these conditions, finger length was underestimated overall [t (17) = −17.36, P < 0.0001, for eye landing positions and t (17) = −16.06, P < 0.0001, for reach landing positions; Fig. 2A; see SI Appendix, Fig. S2 for more details] but was not significantly different between eye and reach landing positions [paired t test, t (17) = −0.30, P = 0.76; Fig. 2A]. Knuckle spacing was overestimated overall [t (17) = 15.13, P < 0.0001, for eye landing positions and t (17) = 8.13, P < 0.0001, for reach landing positions; Fig. 2B; see SI Appendix, Fig. S2 for more details] but was not significantly different between eye and reach landing positions [paired t test, t (17) = 0.058, P = 0.95; Fig. 2B]. Analysis of GPS data revealed significant differences in the mean shape between the actual hand and the hand map for eye landing positions [Goodall’s F(16, 544) = 47.28, P < 0.0001; Fig. 3E] and reach landing positions [Goodall’s F(16, 544) = 25.21, P < 0.0001; Fig. 3F]. However, there was no significant difference in the mean shape of the hand map between eye and reach landing positions [Goodall’s F(16, 544) = 1.22, P = 0.25]. These results indicate that hand viewing is required to generate the difference in the shape of the hand map between eye and reach landing positions.

Experiment 4: With a Wood-Like Rectangle.

To investigate whether the difference in the shape of the hand map between eye and reach movements is specific to vision of a hand, the CG hand was visually replaced with a computer-generated wood-like rectangle that spatially overlapped the participant’s unseen right hand, as in experiment 1 (Fig. 1G). Finger length was underestimated overall [t (11) = −7.74, P < 0.0001, for eye landing positions and t (11) = −8.70, P < 0.0001, for reach landing positions; Fig. 2A; see SI Appendix, Fig. S2 for more details] but was not significantly different between eye and reach landing positions [paired t test, t (11) = 1.37, P = 0.20; Fig. 2A]. Knuckle spacing was overestimated overall [t (11) = 7.29, P < 0.0001, for eye landing positions and t (11) = 5.26, P < 0.001, for reach landing positions; Fig. 2B; see SI Appendix, Fig. S2 for more details] but was not significantly different between eye and reach landing positions [paired t test, t (11) = −0.48, P = 0.64; Fig. 2B]. Analysis of GPS data identified significant differences in the mean shape between the actual hand and the hand map for eye landing positions [Goodall’s F(16, 352) = 11.96, P < 0.0001; Fig. 3G] and reach landing positions [Goodall’s F(16, 352) = 18.59, P < 0.0001; Fig. 3H]. However, there was no significant difference in the mean shape of the hand map between eye and reach landing positions [Goodall’s F(16, 352) = 1.29, P = 0.20]. These results indicate that the difference in the shape of the hand map between eye and reach movements is specific to vision of a hand.

Finger Lengths and Knuckle Spacings for Eye and Reach Are Different in Relation to Perception.

Since separate somatosensory processes have been proposed for action and perception (51), I investigated whether the finger lengths and knuckle spacings for eye and reach are different from those for perception. Participants were instructed to judge the location of landmarks on their invisible right hand by moving a visual pointer in the virtual environment with a trackball controlled by their left hand (perceptual localization task; SI Appendix, SI Materials and Methods). In the “with CG hand” condition, the participants saw the CG hand, as in experiment 1. In the “without CG hand” condition, they saw a gray surface alone without the CG hand, as in experiment 3. I found that the finger lengths and knuckle spacings for reach differed from those for perception [Bonferroni-corrected paired t tests, t (11) = 5.23, P < 0.01, for finger lengths and t (11) = 2.71, P < 0.05, for knuckle spacings; Fig. 4 A and B; see SI Appendix, SI Materials and Methods and SI Results for details; SI Appendix, Fig. S5]. In contrast, the finger lengths and knuckle spacings for eye were almost the same as those for perception [t (11) = 2.13, P = 0.11, for finger lengths and t (11) = 0.65, P = 0.99, for knuckle spacings; Fig. 4 A and B; see SI Appendix, SI Results for details; SI Appendix, Fig. S5]. Furthermore, participants answered questionnaire items to rate perceptual aspects of the CG hand (ownership rating task; see SI Appendix, SI Materials and Methods, Table S1, and SI Results for details; SI Appendix, Fig. S6) (25, 26, 35, 57). I found that the overestimation of the finger lengths and knuckle spacings was significantly correlated with the strength of sense of body ownership over the CG hand for eye and percept but not for reach (Fig. 4 C–H; see SI Appendix, SI Materials and Methods and SI Results for details). Note that it is suggested later that a different hand representation is used for eye movements and perception (see Correlations across Individuals between Hand Maps). Thus, these results point to a difference in the body map used to guide reach movements as opposed to that used to guide saccadic eye movements.

Fig. 4.

Comparison of finger lengths and knuckle spacings among eye, reach, and percept. (A) Percent overall overestimation of finger lengths for the “with CG hand” and “without CG hand” conditions. (B) Percent overall overestimation of knuckle spacings for the “with CG hand” and “without CG hand” conditions. The dark green, light green, and dotted black lines represent eye, reach, and percept, respectively. Results are the mean ± SE. (C–H) Regression plot (and 95% confidence bands) of the strength of the sense of body ownership and overestimation of finger lengths and knuckle spacings for the “with CG hand” condition. (C–E) Percent overall overestimation of finger lengths for eye, reach, and percept. (F–H) Percent overall overestimation of knuckle spacings for eye, reach, and percept. n.s.: not significant.

Hand Maps for Eye and Reach Are Different from the Conscious Body Image.

To investigate whether the hand maps for eye and reach are dissociated from a conscious body image, I used Napier’s shape index, which quantifies the ratio of hand width to length (58) (template-matching task; SI Appendix, SI Materials and Methods). I found differences in shape indices between the hand map for eye and the conscious body image and between the hand map for reach and the conscious body image [Bonferroni-corrected paired t tests, t (11) = 4.19, P < 0.01, for eye and t (11) = 5.55, P < 0.005, for reach; Fig. 5A; see SI Appendix, SI Results for details; SI Appendix, Fig. S7]. There was a significant difference in shape indices between eye and reach [t (11) = 3.38, P < 0.05; Fig. 5A]. Given that the hand maps for eye and reach were also different in relation to sense of body ownership as shown in Fig. 4 C–H, this sense of body ownership might involve the conscious body image (19, 59). However, I found no correlation between the shape and ownership indices (SI Appendix, Fig. S8), implying that measures based on sense of body ownership may not be measures of body image.

Fig. 5.

Shape indices (100 × width/length) quantifying the overall aspect ratio of the hand. (A) Shape indices for the actual hand, the conscious body image measured by template matching, the body map measured by eye, and the body map measured by reach. Results are the mean ± SEM. (B and C) Regression plot (and 95% confidence bands) of shape indices. (B) Eye and reach movements without the CG hand. (C) Eye movements and perception with the CG hand. *P < 0.05; **P < 0.01; ***P < 0.005; n.s.: not significant.

Correlations across Individuals between Hand Maps.

To determine whether a different hand representation is used between eye and reach, I analyzed the correlation across individuals between the shape indices, rather than the overestimation of finger lengths and knuckle spacings, for different effectors. The shape index is a better measure of the hand representation, because the overestimation of finger lengths and knuckle spacings shows the extent of distortion but does not represent the shape of the hand map itself. If the same hand representation is used between different effectors, there should be a correlation across individuals between the shape indices for these effectors (60). However, I found that there were no significant correlations between eye and reach under both the “with CG hand” and “without CG hand” conditions (rs = 0.16, n = 24, and P = 0.46 for the “with CG hand” condition and rs = 0.20, n = 30, and P = 0.30 for the “without CG hand” condition; see Fig. 5B and SI Appendix, Fig. S9 A–D and SI Results for details). In particular, I found no significant correlation between the shape indices for eye and reach movements without the CG hand (Fig. 5B), even though the average distortions of the hand map were similar between eye and reach without the CG hand (SI Appendix, Fig. S9C). Thus, these results suggest that different hand representations are used for eye and reach movements regardless of whether visual information about the hand is available. On the other hand, there was a significant correlation between the normal (experiment 1) and rotated (experiment 2) postures for each of eye and reach (rs = 0.68, n = 12, and P < 0.05 for eye and rs = 0.81, n = 12, and P < 0.001 for reach; see SI Appendix, Fig. S9 E–H and SI Results for details), suggesting that the same hand representation is used for the normal and rotated postures in the same effector. Furthermore, I found that there was no significant correlation between eye movements and perception (rs = −0.28, n = 12, and P = 0.38; Fig. 5C; see SI Appendix, Fig. S9 I and J and SI Results for details), suggesting that different hand representations are used even for eye movements and perception. This implies that the distortions of hand shape in eye movements are not indicative of distortions in the perceptual body image.
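A minimal sketch of this correlation logic, with fabricated per-participant hand measurements standing in for the real landing-position maps: compute Napier's shape index (100 × width/length) per individual and per effector, then test whether the indices covary across individuals.

```python
import numpy as np
from scipy import stats

def shape_index(width, length):
    # Napier's shape index: 100 * hand width / hand length.
    return 100.0 * width / length

rng = np.random.default_rng(3)
n = 24
# Fabricated per-participant widths/lengths (cm) for each effector's map.
idx_eye = shape_index(rng.uniform(8, 10, n), rng.uniform(17, 19, n))
idx_reach = shape_index(rng.uniform(9, 12, n), rng.uniform(15, 18, n))

# If eye and reach tapped one shared hand representation, the per-person
# indices should covary; a nonsignificant Spearman rho argues against that.
rho, p = stats.spearmanr(idx_eye, idx_reach)
print(rho, p)
```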

Discussion

The present study reveals that there are multiple representations of the body schema for the same body part. The body schema has been assumed to be a common body representation used to control all types of motor actions for more than a century (17, 19, 37–40). Although many studies have suggested that the body schema is based on the process of multisensory integration (4–6, 17–20), I believe that the current systematic investigation of this process during multiple motor actions with different effectors is unique. The present results show that the body schema is organized as multiple effector-specific body representations. These representations were measured through proprioceptive judgments of body parts. These proprioceptive judgments show remarkably large distortions of the represented body shape, which are thought to reflect the characteristics of the body schema (12, 13, 61). I found that the pattern of the distortions differs between saccadic eye movements and reach movements even when these two types of movements are simultaneously directed to the same body part location. This difference was observed when an artificial body part spatially overlapped the participant’s invisible body part but not when the artificial body part was replaced with a nonbody object. Moreover, the distortions of the represented body shape were not correlated between saccade and reach across individuals regardless of whether visual information about the artificial body part was available. These results provide clear evidence of a dissociation in body representations between different types of effectors and that these body representations differ in the weighting of visual and proprioceptive bodily cues.

What might be the mechanisms underlying such a dissociation in body representations between different types of effectors? Body representations are the spatial model of the body that the brain constructs based on the integration of information from multiple sensory modalities such as vision and proprioception (20, 62). Such body models can provide an estimate of where one’s body parts are in space (12). This estimate is generated by combining bodily cues in a way that the weights change flexibly according to the uncertainty surrounding the body’s location (31). For example, when vision is heavily weighted, an estimate of where one’s body part is in space relies more on vision than on proprioception. Recent studies examining the integration of visual and proprioceptive bodily information suggest that the weighting of each bodily cue depends on its reliability (33, 35). However, the present results demonstrate that, even though the same visual and proprioceptive bodily cues are theoretically available for saccade and reach, these cues are incorporated differently for the two movements. Thus, this finding indicates that the weighting of vision and proprioception differs for each effector even though the reliability of each bodily cue is the same across the effectors being used.

The present results suggest that there are two distinct body representations for action. One body representation is used to guide saccadic eye movements, whereas the other is used to guide reach movements. Are these body representations different from a body representation for perception? Separate somatosensory processes have been proposed for perception and action (51). For example, illusory displacement of the perceived location of a participant’s hidden hand toward a rubber hand presented in front of the participant, called proprioceptive drift, is reduced when tested with reach movements rather than perception (19). Consistent with this result, I found that the body representation for reach differed from that for perception. In contrast, the body representation for saccade was almost the same as that for perception. Furthermore, I found that the magnitude of the distortion of the body representation was significantly correlated with the strength of sense of body ownership over the CG hand for saccade and percept but not for reach. Although these results highlight the difference between the saccade and reach systems, the results indicate that both saccade and percept depend on a cross-modal effect of vision and proprioception on body ownership. However, I found that the distortions of the represented body shape were not correlated between saccade and percept across individuals. These findings suggest that dissociated processing of perception and action is a characteristic of both the saccade and reach systems.

The present results indicate that dissociable body representations are identified on motor tasks. Dissociation between body schema (a body representation used to control bodily movements) and body image (a body representation used to judge bodily properties) is well established (1, 4, 9). In the present study, participants controlled their saccade and reach movements toward their body parts. To carry out such motor tasks, the brain needs to have access to the body schema (1). For that reason, the body representations for saccade and reach should reflect the body schema. Indeed, I found that the body representations for saccade and reach are different from a conscious body image. These results suggest that the body representations for saccade and reach reflect the body schema and that the body schema can be further subdivided into at least two representations depending on the effector.

Which brain areas are responsible for the body representations for saccade and reach? Neuroimaging studies have suggested that the posterior parietal cortex (PPC), the ventral premotor cortex (PMV), and the extrastriate body area (EBA) are involved in the performance of motor actions (63–65). The PPC contains distinct cortical areas, the intraparietal sulcus (IPS) and the superior parietal lobe (SPL). The IPS and SPL are selectively recruited during saccade and reach movements, respectively (63). Like the PPC, the PMV also contains distinct cortical areas, the saccade-evoking area (PMVe) and the reach-evoking area (PMVr) (64). Like the SPL and PMVr, the EBA also responds during reach movements (65). Interestingly, the IPS, SPL, EBA, and PMV have also been suggested to be involved in the body representations (66, 67). The IPS and PMV have been reported to encode the hand position by integrating visual and proprioceptive signals (66, 68). Moreover, activity in the PMV reflects individual differences in experienced artificial hand ownership (67, 69). Unlike the IPS and PMV, the SPL and EBA have been found to encode changes in proprioceptive hand position in the dark, although these regions also responded to the position of a visible computer-generated hand (65, 67). These findings suggest that the contribution of vision and proprioception to the hand representation is different between the saccade-related IPS and PMVe and the reach-related SPL and EBA, although this contribution seems to be similar between the PMVe and the PMVr. This view is supported by the present results. Indeed, the present results indicate that the estimate of the landmark positions on the hand relies more heavily on proprioception for reach movements than for saccade movements. Taken together, these findings suggest that the IPS and PMVe may be responsible for the body representation accessed by the saccade system and that the SPL and EBA may be responsible for the body representation accessed by the reach system.

Although a common spatial representation in the brain is often believed to guide different types of effectors toward the same goal (49), some evidence suggests that different effectors aiming for the same goal can show different spatial representations. Visual motion processing is carried out independently for manual and ocular following responses to visual motion (41). The spatial localization of moving objects differs between eye and hand movements (42). During the preparation of coordinated eye–hand movements, spatial attention is allocated independently to the targets of both movements (43, 44). These previous studies propose the view that spatial maps of the outside world in the brain dissociate between different types of effectors. The present findings support this view and demonstrate that such dissociations are present not only with spatial maps of the outside world but also with spatial maps of one’s body.

What are the different roles of the body representations accessed by the reach system and the saccade system? Distortions in hand representations have been reported to extend to objects (70). Interestingly, similar distortions to the hand representation were found for the representation of manipulable objects such as a mobile phone, but the pattern of distortions for nonmanipulable objects, such as a cactus with spines, differed from that for manipulable objects. This suggests that the pattern of distortions depends on the availability of motor functions. The present study indicates that this pattern of distortions also depends on the types of motor functions. Motor functions differ between reach and saccade. The reach system would allow us to interact with objects around us, whereas the saccade system would allow us to take advantage of foveal vision to judge whether our hands can safely interact with objects. I suggest that the body representations accessed by the reach system might be involved in facilitating body−object interactions, whereas those accessed by the saccade system might be less involved in manipulability of objects.

A visual distortion in HMDs is well established (71). However, this visual distortion cannot explain the present results. The current study found a highly distorted representation of hand shape, with shortened finger lengths and widened hand widths, using an HMD. If these results were due to the effects of the HMD, the pattern should reverse (extended finger lengths and narrower hand widths) when the participant’s hand and the CG hand are rotated 90° counterclockwise relative to the trunk (experiment 2). However, experiment 2 showed that the estimated finger lengths and hand widths in the rotated posture were almost identical to those in the original posture in experiment 1. This suggests that the present results are not due to the effects of the HMD itself. Nevertheless, because the CG hand leads to a misestimation of depth perception in a virtual 3D environment, it may be better to run a similar study using mixed reality, in which participants see their real hand. Future research is needed to examine this topic.

The present study has implications for methods designed to visualize the body representations of patients with motor paralysis. The rapid population aging of many developed countries, such as Japan, will sharply increase the number of patients with motor paralysis resulting from motor dysfunction and stroke. To overcome this issue, effective rehabilitation techniques need to be developed for patients with motor paralysis. Visualization of the patient’s body representations for action will help to build such an effective rehabilitation technique. The present results suggest that the choice of effectors toward one’s body can determine which body representation in the brain is observed. This approach to the visualization of multiple body representations might be useful for understanding abnormal body schemas in patients with motor paralysis or limb amputation.

More at link.

 

Thursday, May 26, 2016

Study of firefighters shows our body schema isn't always as flexible as we need it to be

How disrupted is your body schema post-stroke, and what exactly is your doctor doing to correct it? E.g., left/right side neglect? Foot drop?
http://digest.bps.org.uk/2016/05/study-of-firefighters-shows-our.html
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Your brain has a representation of where your body extends in space. It's how you know whether you can fit through a doorway or not, among other things. This representation – the "body schema" as some scientists call it – is flexible. For example, if you're using a grabbing tool or swinging a tennis racquet, your sense of how far you can reach is updated accordingly. But there are limits to the accuracy and speed with which the body schema can be adjusted, as shown by an intriguing new study in Ecological Psychology about the inability of firefighters to adapt to their protective clothing.

Indeed, the researchers at the University of Illinois at Urbana-Champaign and the Illinois Fire Service Institute believe their findings may help explain some of the many injuries sustained by firefighters (of which there were over 65,000 in 2013 alone), and that they could have implications for training.

The participants were 24 firefighters (23 men) with an average age of 29 and an average of 6 years' experience on the job, all of whom were recruited through the University of Illinois Fire Service. The researchers, led by Matthew Petrucci, asked the participants to don the full protective kit, including a bunker-style coat, helmet and breathing apparatus. As well as the weight and bulk of the gear affecting the participants' ability to move freely, it also changed the participants' physical dimensions – for instance, the helmet added 21cm to their height, and the breathing apparatus added 21cm of depth to their body.

The researchers created three main obstacles designed to simulate situations in a real-life fire: a horizontal bar that the firefighters had to go under, a bar that they had to go over, and a vertical gap between a mock door and wall that they had to squeeze through. All of these were adjustable, and the participants' first task was to estimate what height bar they could manoeuvre over, what height they could manoeuvre under, and what width gap they could squeeze through. To elicit these judgments, the researchers adjusted the obstacles' height or width, and for each setting the firefighters said whether they thought they could safely pass the obstacle.

For the next stage, the firefighters actually attempted to manoeuvre over, under or through the different obstacles, which were adjusted to make them progressively harder to complete. The idea was to find the lowest, highest and narrowest settings that the firefighters could pass through safely and quickly. To count as a safe passage, the firefighters had to avoid knocking off the delicately balanced horizontal bar for the over and under obstacles, and avoid touching their hands to the floor, or dumping their gear.

Despite the firefighters' many years' experience wearing protective gear and breathing apparatus, the results showed little correspondence between their judgments about the dimensions of the obstacles they could safely pass under, over or through and their actual physical performance. In psychological jargon, the firefighters made repeated "affordance judgment errors", misperceiving the movements "afforded" to them by different environments.

The participants' judgments were most awry for passing under a horizontal bar – on average they thought they could pass under a bar that was 15cm lower than the height they could actually go under. Errors related to the over obstacle were a mix of over- and underestimations, and for the through obstacle 80 per cent of participants underestimated their ability by four to five cm – in other words, they thought they couldn't pass through, when actually they could. In a real life situation, this could lead to time wasting or unnecessary danger as they sought a more circuitous route.
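As a back-of-the-envelope illustration of how such affordance judgment errors can be scored (numbers hypothetical, not the study's data): the error is simply the judged passable limit minus the actually passable limit, so for the under-bar obstacle a negative error means overconfidence.

```python
import numpy as np

# Hypothetical per-firefighter limits (cm) for the under-bar obstacle:
# the lowest bar each one judged passable vs. the lowest they actually cleared.
judged_limit = np.array([160.0, 155.0, 150.0, 158.0])
actual_limit = np.array([175.0, 168.0, 167.0, 171.0])

# Judged minus actual: negative values mean overconfidence -- believing a
# lower (harder) bar was passable than really was, the ~15cm pattern above.
error = judged_limit - actual_limit
print(error, error.mean())  # all negative; mean about -14.5
```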

The results suggest that the firefighters struggled to adjust their body schemas to account for their gear, and it's easy to see how this problem could lead to accidents in a burning building. It seems strange that they hadn't learnt to take account of their gear through experience, but in fact the converse was true – the more experienced firefighters made more errors. The researchers propose several explanations for this, including that specific experiences may be needed to recalibrate the body schema to specific obstacles. Also, the firefighters' training in manoeuvring in their gear mostly comes at the start of their careers, and the benefits may have faded. Refresher training may be helpful, especially for learning one's changing capabilities with ageing.

The researchers said that their results were important because "affordance judgment errors made on a fireground could contribute to injuries attributed to contact with ceilings, doors, structural components of buildings, and other objects with slips, trips, and falls."


Petrucci, M., Horn, G., Rosengren, K., & Hsiao-Wecksler, E. (2016). Inaccuracy of affordance judgments for firefighters wearing personal protective equipment. Ecological Psychology, 28(2), 108–126. DOI: 10.1080/