An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

Because sign language is perceived visually, the eye movements and gaze position of an addressee allow us to make inferences about the uptake of linguistic information in real time. We used eye-tracking technology to determine whether eye gaze behavior during sign language comprehension is affected by information content, as has been found for eye movements during reading and in “visual world” experiments. For example, when viewing a visual scene, eye movements are closely time locked to object information presented in a spoken utterance (e.g., Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). During silent reading, gaze fixation patterns are frequently used as a measure of local processing difficulty for words or phrases. For example, increased fixation times and regressive (backtracking) eye movements typically indicate that a reader is having difficulty with a particular region of text (for review, see Staub & Rayner, 2007). We therefore hypothesized that patterns of gaze fixation and movement might provide a measure of processing difficulty for sign language comprehension. Just as readers fixate longer and backtrack over regions of difficult text, it is possible that sign perceivers shift fixation toward the hands when comprehending complex linguistic structures that are conveyed by the manual articulators.

Examinations of videotaped signed interactions, as well as introspective data from native signers, suggest that addressees maintain a relatively steady gaze toward the person signing (Baker & Padden, 1978; Siple, 1978). However, there is very little evidence regarding precisely where addressees look when processing sign language and whether there are specific changes of fixation with respect to the signer's face and hands at particular points during language comprehension. Previously, Muir and Richardson (2005) used eye tracking to explore the gaze patterns of deaf users of British Sign Language (BSL) while they watched videotaped signing. Deaf signers viewed three short video clips of signed stories that were selected to include a wide range of fine and gross motor movements. Participants in this study fixated on the signer's face between 61% and 99% of the time across the three video clips. Examination of hand movements in the videos suggested that the following factors caused shifts in gaze away from the face and toward the signer's hands or body: (a) signs close to the face (gaze is drawn to the hands), (b) “expansive” signs in the lower body region, and (c) movement of the signer within the video scene. In one of the video clips, participants tended to fixate on the upper body of the signer, rather than on the face, and Muir and Richardson (2005) hypothesized that the wider and more rapid movements produced by the signer may have caused gaze to fall on the upper body to permit a range of movements to be processed, while keeping the lower part of the face in foveal (high-resolution) vision. However, watching a signer on videotape may be quite different from watching a live signer as an addressee. Video is two dimensional, and because the relative size of the signer is smaller than in real life, a more central fixation point may be a better strategy when viewing sign language within a smaller field.
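The face-fixation percentages reported in these studies come from classifying each fixation's duration into regions of interest such as the face and the hands/body, then dividing by total fixation time. A minimal sketch of that computation is below; the AOI (area-of-interest) coordinates and fixation records are hypothetical illustrations, since the studies' actual screen layouts and data are not given here.

```python
# Hypothetical rectangular AOIs in screen pixels: (x_min, y_min, x_max, y_max).
AOIS = {
    "face":  (300, 60, 500, 260),
    "hands": (200, 260, 600, 560),
}

def aoi_dwell_percentages(fixations, aois):
    """Return {aoi_name: percent of total fixation duration spent in that AOI}.

    Each fixation is a tuple (x, y, duration_ms).
    """
    totals = {name: 0.0 for name in aois}
    grand_total = sum(dur for _, _, dur in fixations)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
    return {name: 100.0 * t / grand_total for name, t in totals.items()}

# Hypothetical data: mostly face fixations, one shift toward the hands.
fixations = [(400, 150, 450), (410, 160, 300), (380, 400, 150), (405, 155, 100)]
print(aoi_dwell_percentages(fixations, AOIS))  # {'face': 85.0, 'hands': 15.0}
```

Dwell percentages of this kind underlie statements such as "fixated on the signer's face more than 80% of the time"; real analyses would additionally handle fixations that fall in no AOI, or in overlapping ones.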