MODEL SELECTION FOR EYE MOVEMENTS: ASSESSING THE ROLE OF ATTENTIONAL CUES IN INFANT LEARNING
A recent study [1] showed that different attention cues (social and non-social) produce qualitatively different learning effects. The mechanisms underlying these differences, however, were unclear. Here, we present a novel computational model of audio-visual learning that combines two competing processes: habituation and association. The model's parameters were fitted to best reproduce each infant's individual looking behavior from trial to trial during training and testing. We then isolated each infant's learning function to explain the variance found in preferential looking tests. The model allowed us to rigorously examine the relationship between infants' looking behavior and their learning mechanisms. By condition, the model revealed that 8-month-olds learned faster from the social cue (i.e., a face) than from the non-social cue (i.e., flashing squares), as evidenced by the parameters of their learning functions. In general, the 4-month-olds learned more slowly than the 8-month-olds. The parameters for attention to the cue revealed that infants at both ages who weighted the social cue highly learned quickly. With non-social cues, 8-month-olds' learning was impaired because the cue competed for attention with the target visual event. By using explicit models to link looking and learning, we can draw firm conclusions about infants' cognitive development from eye-movement behavior.
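The two competing processes described above can be illustrated with a minimal toy sketch. This is not the authors' model; it only assumes a plausible form for each process: looking interest decays exponentially across trials (habituation), while cue-weighted associative strength grows toward an asymptote (association, in a Rescorla-Wagner-like form). The function name and all parameter values (`cue_weight`, `habit_rate`, `learn_rate`) are hypothetical.

```python
import math

def simulate_infant(n_trials, cue_weight, habit_rate=0.15, learn_rate=0.2):
    """Toy sketch (not the paper's model): per-trial looking times and
    final audio-visual associative strength for one simulated infant.

    cue_weight -- how strongly the infant attends to the attention cue (0..1)
    habit_rate -- speed of exponential habituation across trials
    learn_rate -- base associative learning rate
    """
    assoc = 0.0
    looks = []
    for t in range(n_trials):
        # Habituation: interest in the stimulus decays exponentially over trials.
        interest = math.exp(-habit_rate * t)
        # Association: cue-weighted increment toward an asymptote of 1.0,
        # so infants who weight the cue highly learn faster.
        assoc += learn_rate * cue_weight * (1.0 - assoc)
        # Looking time blends residual novelty-driven interest with a small
        # baseline; as the association is learned, looking declines further.
        looks.append(interest * (1.0 - 0.5 * assoc) + 0.1)
    return looks, assoc
```

Under these assumptions, a simulated infant with a high cue weight reaches a stronger association in the same number of trials, mirroring the reported result that infants who weighted the social cue highly learned quickly, while looking time declines over trials in both cases.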