Determining the optimal number of factors to retain in exploratory factor analysis (EFA) has long been a subject of debate in social sciences research. Finch (2020) addresses this challenge by comparing the performance of fit index difference values and parallel analysis, a well-established method in this field. The study offers valuable insights into how these approaches perform under varying conditions, particularly with categorical and normally distributed indicators.
Background
Exploratory factor analysis is widely used to identify underlying structures in datasets. However, selecting the correct number of factors to retain has proven complex, as no single method consistently outperforms others across all scenarios. Fit indices and parallel analysis are frequently used techniques, but their effectiveness varies depending on data characteristics such as distribution and factor loadings. Finch’s research investigates these differences through a simulation-based study.
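To make the parallel analysis baseline concrete, here is a minimal sketch of Horn's procedure using NumPy: retain as many factors as there are observed eigenvalues exceeding the corresponding percentile of eigenvalues from random data of the same dimensions. The function name, the simulated two-factor dataset, and all parameter values below are illustrative assumptions, not taken from Finch's study.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Horn's parallel analysis: count observed correlation-matrix
    eigenvalues that exceed the chosen percentile of eigenvalues
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)
    return int(np.sum(obs_eigs > threshold))

# Toy example: two blocks of variables with strong loadings on
# two independent factors, so two factors should be retained.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.zeros((6, 2))
loadings[:3, 0] = 0.8
loadings[3:, 1] = 0.8
x = factors @ loadings.T + 0.4 * rng.standard_normal((500, 6))
print(parallel_analysis(x))
```

Note that the toy example uses large loadings (0.8); Finch's results suggest this is exactly the regime where parallel analysis performs well, whereas its reliability degrades as loadings shrink.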
Key Insights
- Performance of Fit Index Difference Values: Finch found that fit index difference values were more effective than parallel analysis for categorical indicators and for normally distributed indicators when factor loadings were low.
- Parallel Analysis Limitations: While parallel analysis remains a trusted method, its performance was less reliable in the scenarios tested, particularly with smaller factor loadings.
- Practical Applications: The results suggest that fit index difference values may serve as a strong alternative, especially in studies with categorical data or where factor loadings are minimal.
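The fit index difference approach can be sketched as a simple decision rule: fit candidate models with increasing numbers of factors, and retain the smallest number beyond which adding another factor no longer improves fit by more than some cutoff. The RMSEA values and the 0.015 cutoff below are hypothetical assumptions for illustration only; they are not the specific values or thresholds examined by Finch.

```python
def retain_by_fit_difference(fit_values, cutoff=0.015):
    """fit_values[k] is a badness-of-fit index (e.g., RMSEA) for a
    model with k+1 factors; smaller is better. Retain the smallest
    number of factors after which the improvement falls below cutoff."""
    for k in range(len(fit_values) - 1):
        improvement = fit_values[k] - fit_values[k + 1]
        if improvement < cutoff:
            return k + 1  # the (k+2)-factor model added too little
    return len(fit_values)

# Hypothetical RMSEA values for models with 1 through 5 factors:
# fit improves sharply from 1 to 2 factors, then plateaus.
rmsea = [0.110, 0.060, 0.058, 0.057, 0.057]
print(retain_by_fit_difference(rmsea))
```

Run on these assumed values, the rule retains two factors, since the drop from the two- to the three-factor model (0.002) falls below the cutoff.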
Significance
This study provides researchers with a nuanced understanding of statistical tools for EFA. By highlighting the conditions under which fit index difference values outperform parallel analysis, Finch’s findings help refine methodological choices in social sciences research. Improved factor retention decisions can lead to more accurate interpretations of data, ultimately enhancing the quality and validity of findings.
Future Directions
Further research could expand on Finch’s work by exploring how fit index difference values perform across more diverse datasets and varying levels of factor complexity. Additionally, developing guidelines for when to prioritize this approach over parallel analysis could improve its practical application in research settings.
Conclusion
Finch’s study offers valuable contributions to the ongoing discussion about factor retention in exploratory factor analysis. By demonstrating the strengths of fit index difference values under specific conditions, the research supports more informed decision-making in statistical analyses. This work underscores the importance of tailoring methodological choices to the unique characteristics of each dataset.
Reference
Finch, W. H. (2020). Using fit statistic differences to determine the optimal number of factors to retain in an exploratory factor analysis. Educational and Psychological Measurement, 80(2), 217–241. https://doi.org/10.1177/0013164419865769
Jouve, X. (2020, August 6). Evaluating Factor Retention in Exploratory Factor Analysis. PsychoLogic. https://www.psychologic.online/2020/08/06/fit-index-differences-factor-analysis/

