Facial age processing follows a three-stage neurocognitive model (structural encoding, prototype matching, and affective evaluation), characterized by a dynamic shift from early global coordination to later localized processing, as evidenced by stage-specific ERP effects, oscillatory dynamics, and functional connectivity patterns.
Key Findings
Older faces evoked larger N170 amplitudes than younger faces during the structural encoding stage, identifying the N170 as a stage-specific marker of structural encoding of facial age. Mass-univariate analysis confirmed a significant early time band of 70–168 ms over occipital and temporo-occipital sensors, and the oldest faces (70 years) showed the strongest differentiation from younger faces.
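Component-based N170 effects like this are typically measured on trial-averaged ERPs within a latency window. A minimal sketch of that averaging-and-measurement step on synthetic single-trial data (all signals, windows, and amplitudes here are illustrative, not taken from the study):

```python
import numpy as np

def erp_average(trials):
    """Average single-trial epochs (trials x timepoints) into an ERP."""
    return trials.mean(axis=0)

def mean_amplitude(erp, times, t_start, t_end):
    """Mean ERP amplitude within a latency window (e.g., an N170 window)."""
    mask = (times >= t_start) & (times <= t_end)
    return erp[mask].mean()

rng = np.random.default_rng(0)
times = np.arange(-0.1, 0.5, 0.002)  # seconds, 500 Hz sampling
n_trials = 60

def make_trials(peak_uv):
    """Synthetic negative deflection peaking ~170 ms plus trial noise."""
    component = peak_uv * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
    return component + rng.normal(0, 1.0, (n_trials, times.size))

# Hypothetical condition difference: more negative N170 for "older" faces
erp_young = erp_average(make_trials(peak_uv=-4.0))
erp_old = erp_average(make_trials(peak_uv=-6.0))

amp_young = mean_amplitude(erp_young, times, 0.15, 0.19)
amp_old = mean_amplitude(erp_old, times, 0.15, 0.19)
```

Because the N170 is a negative component, a "larger" N170 corresponds to a more negative window mean, so `amp_old` comes out below `amp_young` in this toy setup.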
Older faces produced reduced P2 amplitudes, indexing the prototype matching stage of facial age processing and suggesting differential neural processing during prototype matching. Mass-univariate analysis identified a corresponding significant time band of 228–286 ms. This stage involved only localized theta activity (4–8 Hz) in the ~200–300 ms window, without large-scale network engagement.
Older faces elicited enhanced late positive potentials (LPP) after 300 ms, marking the affective evaluation stage. Mass-univariate analysis confirmed a significant late time band of 342–800 ms over occipital and temporo-occipital sensors, and LPP modulations were interpreted as reflecting age-related affective processing. Across all three identified time bands, the oldest faces (70 years) showed the strongest differentiation from younger faces.
Early facial age encoding (~100–200 ms), corresponding to the structural encoding stage of the proposed three-stage model, was accompanied by increased theta (4–8 Hz) and alpha (8–13 Hz) power together with widespread theta/alpha phase-based functional connectivity. This global coordination pattern was interpreted as supporting initial extraction of age information from faces.
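A common way to quantify band-limited power changes such as this theta/alpha increase is band-pass filtering followed by the Hilbert envelope (Morlet wavelet convolution, as in typical MNE-style pipelines, is an equally standard alternative). A minimal sketch on synthetic data; this is not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal, fs, low, high):
    """Mean power within [low, high] Hz via band-pass filter + Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    envelope = np.abs(hilbert(filtered))
    return np.mean(envelope ** 2)

fs = 500
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic "EEG": a 6 Hz (theta-band) oscillation plus broadband noise
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 0.5, t.size)

theta_power = band_power(eeg, fs, 4, 8)    # captures the 6 Hz rhythm
alpha_power = band_power(eeg, fs, 8, 13)   # mostly residual noise here
```

With the dominant rhythm placed at 6 Hz, theta power exceeds alpha power, mirroring how band-specific power estimates separate the oscillatory contributions described above.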
During the prototype matching stage (~200–300 ms), the widespread phase-based connectivity seen in the early stage was absent; only localized theta activity (4–8 Hz) persisted. This indicates a shift from globally coordinated to locally restricted neural processing between stages one and two, and suggests that prototype matching does not require large-scale network engagement.
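Phase-based connectivity of the kind contrasted across these stages is often quantified with the phase-locking value (PLV), which measures how consistent the phase difference between two channels is across trials. A minimal illustrative sketch on synthetic data (the study's exact connectivity metric is not specified here):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x_trials, y_trials):
    """Phase-locking value across trials between two channels.

    Inputs have shape (n_trials, n_times). Returns PLV per timepoint
    in [0, 1]; 1 means a perfectly consistent phase lag across trials.
    """
    phase_x = np.angle(hilbert(x_trials, axis=1))
    phase_y = np.angle(hilbert(y_trials, axis=1))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

fs, n_trials = 500, 40
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# Coupled channels: the same 6 Hz theta rhythm with a fixed lag, plus noise
ch1 = np.sin(2 * np.pi * 6 * t) + rng.normal(0, 0.3, (n_trials, t.size))
ch2 = np.sin(2 * np.pi * 6 * t - 0.5) + rng.normal(0, 0.3, (n_trials, t.size))
coupled_plv = plv(ch1, ch2).mean()

# Uncoupled control: a random phase offset on every trial
ch3 = np.array([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
                for _ in range(n_trials)]) + rng.normal(0, 0.3, (n_trials, t.size))
uncoupled_plv = plv(ch1, ch3).mean()
</```

In this toy setup the coupled channel pair yields a high PLV while the random-phase pair stays near the chance level, which is the kind of contrast that distinguishes widespread coordination (stage one) from its absence (stage two).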
Across ERP, time-frequency, and functional connectivity analyses, facial age processing showed a dynamic shift from early global neural coordination to later localized processing: widespread theta/alpha phase-based connectivity during early encoding (~100–200 ms), localized theta activity only in the middle stage (~200–300 ms), and LPP modulations reflecting affective processing in the late stage (>300 ms). This progression provides a mechanistic account of how the brain extracts age information from faces.
Methods
EEG was recorded while participants made age judgments of faces from four age groups spanning the lifespan (10, 30, 50, and 70 years). Analyses combined event-related potentials (component-based and mass-univariate), time-frequency analysis, and phase-based functional connectivity measures. Mass-univariate analysis (MUA) identified three significant time bands (70–168 ms, 228–286 ms, and 342–800 ms), with effects localized over occipital and temporo-occipital sensors.
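A mass-univariate analysis amounts to running a statistical test at every timepoint and then reading off contiguous significant runs as time bands. The sketch below uses uncorrected point-wise t-tests purely for illustration; the study's actual correction procedure (e.g., cluster-based permutation) is not detailed here, and all data are synthetic:

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_time_bands(cond_a, cond_b, times, alpha=0.05):
    """Point-wise independent t-tests over (subjects x timepoints) arrays.

    Returns contiguous significant windows as (start_time, end_time) tuples.
    Uncorrected threshold; real pipelines apply multiple-comparison control.
    """
    _, p = ttest_ind(cond_a, cond_b, axis=0)
    sig = p < alpha
    bands, start = [], None
    for i, s in enumerate(sig):
        if s and start is None:
            start = i
        elif not s and start is not None:
            bands.append((times[start], times[i - 1]))
            start = None
    if start is not None:
        bands.append((times[start], times[-1]))
    return bands

rng = np.random.default_rng(3)
times = np.arange(0, 0.8, 0.002)  # 0-800 ms at 500 Hz
n_sub = 24

# Hypothetical condition effect confined to ~70-170 ms
effect = np.where((times > 0.07) & (times < 0.17), 2.0, 0.0)
older = effect + rng.normal(0, 1.0, (n_sub, times.size))
younger = rng.normal(0, 1.0, (n_sub, times.size))

bands = significant_time_bands(older, younger, times)
```

Because the threshold is uncorrected, isolated false-positive timepoints also surface as short bands; this is exactly why published MUA results report corrected, temporally extended bands like 70–168 ms rather than raw point-wise hits.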
Xing W, Gao K, Luo Y, Han S. (2026). A three-stage neurocognitive model of facial age processing: Evidence from ERP, oscillatory dynamics, and functional connectivity. NeuroImage. https://doi.org/10.1016/j.neuroimage.2026.121808