松本 絵理子 (MATSUMOTO Eriko)
Graduate School of Intercultural Studies, Department of Global Cultures
Professor
Social and Natural Sciences
[Refereed]
Research paper (scholarly journal)
Observers can focus their attention on task-relevant items in visual search when they have prior knowledge about the target's properties (i.e., positive cues). However, little is known about how negative cues, which specify the features of task-irrelevant items, can be used to guide attention away from distractors, and how their effects differ from those of positive cues. It has been proposed that when a distractor color is cued, people first select the to-be-ignored items early in search and then inhibit them later. The present study investigated how the effects of positive and negative cues differ throughout the visual search process. The results showed that positive cues sped up the early stage of visual search and that negative cues led to initial selection for inhibition. We further found that visual search with negative cues was less efficient than that with positive cues even at later stages, suggesting that sustained inhibition is needed throughout the visual search process. Taken together, the results indicate that positive and negative cues have different functions: prior knowledge about target features can weight task-relevant information at early stages of visual search, whereas negative cues operate less efficiently, requiring sustained inhibition even at later stages of visual search.
October 2018, Acta Psychologica, 190, 85 - 94, English, international journal [Refereed]
Research paper (scholarly journal)
Both visual and verbal information in working memory guide visual attention toward a memory-matching object. We tested whether: (a) visual and verbal representations have different effects on the deployment of attention; and (b) both types of representations can be used equally in a top-down manner. We asked participants to maintain a visual or a verbal cue presented at the beginning of each trial; each trial ended with a memory task to ensure that the cue was actively represented in working memory. Before the memory task, participants performed a visual search task in which cue validity was manipulated (valid, neutral, or invalid). We also manipulated the probability of valid trials (20%, 50%, and 80%), which participants were informed of before the task. Consistent with earlier findings, attentional guidance by visual representations was modulated by the probability. We also found that this was true for verbal representations, and that these effects did not differ between representation types. These results suggest that both visual and verbal representations in working memory can be used strategically to control attentional guidance.
WILEY-BLACKWELL, January 2017, JAPANESE PSYCHOLOGICAL RESEARCH, 59 (1), 49 - 57, English [Refereed]
Research paper (scholarly journal)
People underestimate the numerosity of collections in which a few dots are connected in pairs by task-irrelevant lines. Such configural processing suggests that visual numerosity depends on the perceived scene segments, rather than on the perceived total area occupied by a collection. However, a methodology that uses irrelevant line connections may also introduce unnecessary distraction and variety, or obscure the perception of task-relevant items, given the saliency of the lines. To avoid such potentially confounding variables, we conducted four experiments where the line-connected dots were replaced with collinear inducers of Kanizsa-type illusory contours. Our participants had to compare two simultaneously presented collections and choose the more numerous one. Displays comprised c-shaped inducers and disks (Experiment 1), c-shaped inducers only (Experiments 2 and 4), or closed inducers (Experiment 3). One display always showed a 12- (Experiments 1-3) or 48-item reference pattern (Experiment 4); the other was a test pattern with numerosity varying between 9 and 15 (Experiments 1-3) or 36-60 items (Experiment 4). By manipulating the number of illusory contours in the test patterns, we increased or decreased the level of connectedness. The fitted psychometric functions revealed an underestimation that increased with the number of illusory contours in Experiments 1 and 2, but was absent in Experiments 3 and 4, where illusory contours were more difficult to perceive or larger numerosities were used. Results corroborate claims that visual numerosity estimation depends on segmented inputs, but only within moderate numerical ranges.
PERGAMON-ELSEVIER SCIENCE LTD, May 2016, Vision Research, 122, 34 - 42, English, international journal [Refereed]
Research paper (scholarly journal)
Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that, when searching for an object, this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than that in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite qualitatively equal working memory representations.
LIPPINCOTT WILLIAMS & WILKINS, March 23, 2016, Neuroreport, 27 (5), 345 - 9, English, international journal [Refereed]
Research paper (scholarly journal)
Research paper (scholarly journal)
Several physiological studies in cats and monkeys have reported that the spatial frequency (SF) tuning of visual neurons varies depending on the luminance contrast and size of the stimulus. However, comparatively little is known about the effect of changing the stimulus contrast and size on SF tuning in human perception. In the present study, we investigated the effects of stimulus size and luminance contrast on human SF tuning using the subspace reverse-correlation method. Measuring SF tunings at six different stimulus sizes and three different luminance contrast conditions (90%, 10%, and 1%), we found that human perception exhibits significant stimulus-size-dependent SF tunings. At 90% and 10% contrast, participants exhibited relative SF tuning (cycles/image) rather than absolute SF tuning (cycles/°) at response peak latency. On the other hand, at 1% contrast, the magnitude of the size-dependent peak-SF shift was too small for strictly relative SF tuning. These results show that human SF tuning is not fixed, but varies depending on the stimulus size and contrast. This dependency may contribute to size-invariant object recognition within an appropriate contrast range.
ASSOC RESEARCH VISION OPHTHALMOLOGY INC, November 20, 2014, Journal of Vision, 14 (13), 23 - 23, English, international journal [Refereed]
Research paper (scholarly journal)
The search performance for targets is improved when the targets appear in a specific location more frequently than in other locations. Although this phenomenon, called the "probability cueing effect," has been reported in past studies, it is unclear whether probability cueing is driven by statistical learning and/or intertrial facilitation of the target location. We investigated the underlying mechanisms for probability cueing effects by manipulating probabilities and repetitions of the target appearance at each target location. The first experiment demonstrated that the reaction time benefits of both statistical learning and intertrial facilitation contributed to the probability cueing effect. In contrast, the second and third experiments demonstrated that the probability cueing effect did not occur when target location repetitions on consecutive trials were fully or partially restricted. Also, any intertrial facilitation effects disappeared if there was more than one intervening trial. These results suggest that consecutive target location repetitions throughout the experiment facilitate learning of the target location probability.
Elsevier BV, November 15, 2012, Vision Research, 73, 23 - 9, English, international journal [Refereed]
Research paper (scholarly journal)
Research paper (scholarly journal)
The reversal of the retinal image using prism spectacles disrupts sensory-motor coordination. Although several studies report that harmonious visuomotor behavior recovers after prism adaptation, the mechanism involved in the adaptation is largely unknown. Here we studied large-scale visual plasticity between the left and right hemifields using Gabor patches and left-right reversing prisms. Experiments were carried out over 5 days. Before the prism adaptation, long-range interaction was established by a temporal cueing method: temporally primed visual signals (peripheral crosses at 7.2 deg., duration = 100 ms) preceded three vertically collinear Gabors by 300-600 ms. The Gabor stimuli (sigma = lambda = 0.2 deg., 100 ms) were presented binocularly at 3 deg. leftward of the central fixation spot. The flanker (C = 0.4)-to-target distance was 6 lambda. Practice with temporal cues for 30 min extended long-range facilitation to 9-12 lambda over days (threshold reduction = 0.23±0.08 log units, 5 subjects). Before the adaptation, no transfer was observed in the opposite visual field. After two days of adaptation, the extended long-range facilitation was found not only in the practiced visual field but also on the opposite side (distance = 3 deg., 0.14±0.05 log units, 2 subjects). This transfer persisted over the subsequent 3 days of adaptation and was preserved after the prisms were removed. No transfer was found using up-down reversing prisms (1 subject). Control observers without prisms (2 subjects) showed no transfer. The transfer of the long-range interaction across the hemifields by prism adaptation demonstrates large-scale plasticity in the early visual system induced by reversed retinotopic input. There is no commissural connection in V1 between the practiced area (left visual field) and the tested area (right visual field); thus the results suggest that the learning effect transferred through higher cortices (i.e., the parietal cortex) and projected backward to V1 during the adaptation.
2003, Journal of Vision, 3 (9), 166, English [Refereed]
Research paper (scholarly journal)
[Refereed]
If the probability that a target item in a visual task is presented at a given location or with a given feature is high, the reaction times for biased targets are shorter than those for low probability targets. However, the relationship between manipulation of probabilistic information and this probability effect is unclear. In this study, we investigated the effects of the spatial and nonspatial probabilities associated with the onset of targets on attentional deployment. When targets appeared at high probability locations, reaction times for target discrimination were faster than those that appeared at less likely locations (Experiment 1). However, such a probability advantage did not appear when the targets' appearances were associated with the shapes of the placeholders, regardless of their locations (Experiment 2a). The probability effect reoccurred when participants were informed of the nonspatial probabilistic manipulation (Experiment 2b). These results suggest that the spatial probability is effective as an attentional cue without awareness, whereas the nonspatial probability is not.
The Japanese Psychonomic Society, September 2011, The Japanese Journal of Psychonomic Science, 30 (1), 56 - 64, Japanese
There has been much controversy about the relationship between anxiety and attentional processing of threat-related information. The purpose of this study was to examine how threatening facial expressions affect attentional processing, according to the level of trait anxiety. Through visual search tasks, two different components of attentional bias to threat were investigated: engagement of attention by, and disengagement of attention from, an angry face. Two main results were found. First, reaction times (RTs) were slower in detecting the absence of a discrepant face in the all-angry display condition than in the other expression conditions; however, there was no difference between anxiety groups. Second, the difference in search efficiency for the angry versus happy target was significant within the high-anxiety group but not within the low-anxiety group. The results suggest that the detection process for angry faces is more efficient for highly anxious people. On the other hand, the time to disengage attention from angry faces was not associated with anxiety level. Copyright (C) 2010 John Wiley & Sons, Ltd.
JOHN WILEY & SONS LTD, April 2010, APPLIED COGNITIVE PSYCHOLOGY, 24 (3), 414 - 424, English
To investigate the phonological influences on the lexicosemantic process under a strong orthographic constraint, we used kanji (morphogram) homophone words and measured, using magnetoencephalography, the neural activities during the silent reading of prime-target pairs. The primes were phonologically the same as or different from the targets, or were pseudocharacters. The neural activities in the left posterior temporal and inferior parietal areas became weaker with phonological repetition. Furthermore, stronger activities for the different condition in the left anterior temporal area and for the same condition in the left inferior frontal cortex, respectively, suggest the roles of these areas of the brain in the semantic processing of words and in the selection of appropriate meanings. We conclude that phonological information affects the lexicosemantic process even under a strong orthographic constraint.
LIPPINCOTT WILLIAMS & WILKINS, November 19, 2007, Neuroreport, 18 (17), 1775 - 80, English, international journal
Previous psychological experiments have indicated the existence of a visual-proprioceptive interaction in spatial stimulus-response compatibility (SSRC) tasks, but there is little specific information on the neural basis of such interaction in humans. Using functional magnetic resonance imaging (fMRI), we compared the neural activity associated with two different aspects of spatial coding: the coding of the "internal" spatial position of motor-response effectors (i.e., the position of body parts) as obtained through proprioception, and the coding of "external" positions, i.e., the positions of visual stimuli. A 2 x 2 factorial design was used to investigate the spatial compatibility (incompatible versus compatible) between a visual stimulus and hand position (crossed versus uncrossed). The subjects were instructed to respond to stimuli presented to the right or left visual field with either the ipsilateral (compatible condition) or the contralateral hand (incompatible condition). The incompatible condition produced stronger activation in the bilateral superior parietal lobule, inferior parietal lobule, and bilateral superior frontal gyrus than the compatible condition. The crossed-hand condition produced stronger activation in the bilateral precentral gyrus, superior frontal gyrus, superior parietal lobule, and superior temporal gyrus than the uncrossed-hand condition. These results suggest that activity in the frontal-parietal regions is related to two functions: (1) representation of the visual stimulus-motor response spatial configuration in an SSRC task, and (2) integration between external visual and internal proprioceptive sensory information. The activation in the superior temporal gyrus was not affected by the visual stimulus-motor response spatial configuration in an SSRC task; rather, it was affected by the crossed-hand posture. Thus, it seems to be related to representing internal proprioceptive sensory information necessary to carry out motor actions.
SPRINGER, September 2004, Experimental Brain Research, 158 (1), 9 - 17, English, international journal
In human spatial recognition, right and left are not recognized symmetrically. Although there have been many studies on the hemispheric asymmetry of the human brain, asymmetries in high-level recognition (such as independence from the input or output hemisphere) have not been studied extensively. We found that the human brain recognizes right and left asymmetrically in high-level recognition. Experiments were performed in which participants crossed their hands and were required to judge the side of a tactile stimulus on the index finger in two different contexts: 'which hand was touched' or 'on which side of the space the touched hand was located'. The right inferior frontal region was significantly more activated by the 'contextually defined right' stimulus (right-hand stimulation in the 'which hand' context and right-space stimulation in the 'which space' context) than by the 'contextually defined left' stimulus. However, no region was more activated by the 'contextually defined left' than by the 'contextually defined right' stimulus. This asymmetric activation suggests that 'right' is the more salient side for human spatial recognition.
BLACKWELL PUBLISHING LTD, March 2004, The European Journal of Neuroscience, 19 (5), 1425 - 9, English, international journal
Spatially directed behavior from perception to action is coded in supramodal coordinate systems that integrate visual, tactile, and proprioceptive information. To study the interactions between visual and proprioceptive information in the estimation of the subjective sagittal midpoint, we performed a straight-ahead pointing task and compared conditions with and without visual information. Subjects stood or sat in front of a large sheet of paper (0.8 m × 1.1 m) on which pointing locations were recorded. Holding a laser pointer in the left or right hand, they were required to point freely toward the subjective midpoint. The distances between the pointed locations and the objective midpoint were measured. The pointing distance was varied: 3 m and 2 m (far space) and 0.5 m (near space). Without vision, when the pointing distance was in far space, the subjective midpoints deviated to the left of the objective midpoint; no deviation was observed in near space. The mean leftward deviation in far space was 2.8 degrees of visual angle (S.D. = 0.81). With vision, on the other hand, the subjective midpoint was consistent with the objective midpoint. The results show two dissociations: first, the body-centered spatial encoding system based on proprioception is dissociated from the center of the visual field; second, near and far space are dissociated. Previous studies have suggested that there may be one set of spatial maps specialized for near space and another for far space in the human brain; our results are consistent with this hypothesis.
2003, Journal of Vision, 3 (9), 563, English
Conference paper/abstract (international conference)
To investigate the process of crossmodal spatial recognition, we examined the effect of posture change on the recognition of a tactile stimulus position. The task was to judge whether a visual and a tactile stimulus, presented to the left or right, were on the same or different sides while subjects crossed or uncrossed their hands. Under a condition which removed the effect of response bias to the left and right, the dorsal visual cortex (area 18/19) and the precuneus were more activated in the crossed hands condition. The dorsal visual cortex activation suggests that the activity of brain areas classically considered to be visual cortex is affected by posture change, and reflects the reciprocal process across different modalities in spatial recognition.
LIPPINCOTT WILLIAMS & WILKINS, October 7, 2002, Neuroreport, 13 (14), 1797 - 800, English, international journal
Although direction selectivity is a cardinal property of neurons in the visual motion detection system, movement of numerous elements without a global direction (incoherent motion) has been shown to activate human and monkey visual systems, as does coherent motion, which has a global direction. We used magnetoencephalography to investigate the neural process underlying responses to these types of motion in the human extrastriate cortex. Both motions were created using a random dot kinematogram and four speeds (0, 0.6, 9.6 and 25°/s). The visual stimuli were composed of two successive motions at different speeds: a coherent motion at a certain speed that changed to an incoherent motion at another speed, or vice versa. Magnetic responses to the change in motion consisted of a few components, the first of which was always the largest. The peak latency of the first component was inversely related to the speed of the preceding motion but, for both motions, was not affected by the speed of the subsequent motion. For each subject, the estimated origin of the first component was always in the extrastriate cortex, and it changed with the speed of the preceding motion. For both motions, the location for the slower preceding motion was lateral to that for the faster preceding motion. Although the latency changes of the two motions differed, their overall response properties were markedly similar. These findings show that the speed of incoherent motion is represented in human extrastriate cortex neurons to the same degree as that of coherent motion. We consider that the human visual system has a distinct neural mechanism for perceiving the motion of random dots even though they do not move in a specific direction as a whole. Copyright (C) 2000 IBRO.
April 2000, Neuroscience, 97 (1), 1 - 10, English
We examined quantitative and qualitative differences in the pattern of visuo-cognitive processing impairment in patients with early-onset AD (EOAD) and late-onset AD (LOAD). We used a visual attention task introduced by Navon (1977) to examine the ability to integrate local visual stimuli into a global image. Although patients in both groups could identify solid digits of either large or small size presented at the same exposure duration, EOAD performance was poor in global perception, especially at the short duration (20 msec). We provide evidence that this dysfunction is attributable to AD pathology specific to the early-onset type.
IOS PRESS, 2000, BEHAVIOURAL NEUROLOGY, 12 (3), 119 - 125, English