Saliency in Context: The Effect of Context on the Diagnosticity of Facial Features
How might the faces we have learned be represented in memory? Researchers have typically held that memory for faces is built on a robust, averaged representation composed of the stable aspects of the face (e.g., eyes, nose, mouth). However, anecdotal evidence suggests this one-size-fits-all approach to face representation may not be correct. A new theory instead suggests that face representations rely on a dynamic weighting, wherein the features seen as most diagnostic during learning are encoded to a greater extent than other features in the face. One factor that may be especially important for such a weighted representation is the context in which a face is first viewed. Depending on the learning context, certain features may appear more distinctive than others and therefore be deemed diagnostic and receive greater representational weight. In the current study, participants learned four faces, one of which was manipulated to appear distinctive in the experimental context by having a unique hair colour (Experiment 1) or eye colour (Experiment 2) relative to the other faces. Participants then completed a recognition task in which the feature of interest (i.e., hair or eye colour) was either available or unavailable (i.e., bald and eyes-closed conditions) for recognition. Recognition was disrupted when the diagnostic feature was unavailable compared to when it was available, across both distinctive and typical faces. Interestingly, Experiment 2 showed a distinctiveness advantage in performance relative to Experiment 1, most likely because some neighbouring features are more diagnostic than others during recognition. In addition, exploratory analyses indicated that the order of the test could also affect what was encoded.