USING AUTOENCODERS TO MODEL ASYMMETRIC CATEGORY LEARNING IN EARLY INFANCY: INSIGHTS FROM PRINCIPAL COMPONENTS ANALYSIS
Young infants exhibit intriguing asymmetries in the exclusivity of categories formed on the basis of visually presented stimuli. For instance, infants who have previously seen a series of cats show a surge of interest when subsequently shown dogs, which is interpreted as the dogs being perceived as novel. Infants previously exposed to dogs, however, show no such increase in interest when shown cats. Recently, researchers have used simple autoencoders to account for these effects. Their hypothesis was that the asymmetry arises because the cats' feature values have smaller variances and fall within the range of the dogs' feature values. They predicted, and obtained, a reversal of the asymmetry by swapping the dog and cat variances, thereby inverting the inclusion relationship (i.e., dogs are now included in the category of cats). This reversal reinforces their hypothesis. We examine the explanatory power of this model by investigating in greater detail how autoencoders produce such an asymmetry effect. We analyze the predictions made by linear Principal Components Analysis, examine the autoencoder's hidden-unit activation levels, and, finally, highlight several factors that affect generalization capacity and may play key roles in the observed asymmetry effect.
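The variance-inclusion hypothesis can be illustrated with a minimal linear sketch: a linear autoencoder with fewer hidden units than inputs converges to the subspace spanned by the top principal components of its training data, so PCA reconstruction error is a reasonable proxy for the network's novelty response. The feature dimensionality, variances, and Gaussian stimuli below are illustrative assumptions, not the original stimulus set.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 4, 500  # feature dimension, retained components, samples per category

# Hypothetical stimuli: "cat" features have small variance, "dog" features
# large variance, so cat values fall inside the range of dog values (inclusion).
cats = rng.normal(0.0, 0.5, size=(n, d))
dogs = rng.normal(0.0, 2.0, size=(n, d))

def pca_fit(X, k):
    """Return the mean and top-k principal axes of X (via SVD)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(X, mu, W):
    """Mean squared error after projecting X onto the axes W and back."""
    Z = (X - mu) @ W.T
    X_hat = Z @ W + mu
    return float(np.mean((X - X_hat) ** 2))

mu_c, W_c = pca_fit(cats, k)   # "familiarization" on cats
mu_d, W_d = pca_fit(dogs, k)   # "familiarization" on dogs

err_dogs_after_cats = recon_error(dogs, mu_c, W_c)  # large: dogs look novel
err_cats_after_dogs = recon_error(cats, mu_d, W_d)  # small: cats do not

print(err_dogs_after_cats, err_cats_after_dogs)
```

In this sketch the error of reconstructing dogs through cat-trained components exceeds the converse, mirroring the behavioral asymmetry: the low-variance category is absorbed by the high-variance one, but not vice versa.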