Dissecting Cognition: Spatial vs. Abstract Reasoning

Summary

An analysis of performance on the Jouve–Cerebrals Test of Induction (JCTI) and four GAMA subtests (Matching, Analogies, Sequences, Construction) points to a single dominant source of individual differences rather than two separate abilities. With N = 118, the factor-analytic evidence favors a general reasoning factor that subsumes both spatial–temporal and abstract problem-solving demands. Any apparent “two-factor” pattern is better explained by task-specific variance and sampling noise than by distinct latent abilities.

Background

The study set out to examine how nonverbal tasks cohere psychometrically. Although it is intuitive to split reasoning into spatial–temporal manipulation versus abstract relation finding, the question is empirical: do the data support multiple dimensions once the shared variance among tests is modeled?

Key Insights

  • One general factor dominates. Principal-axis factoring with parallel analysis retained a single factor that explained roughly 42% of observed variance. Loadings were substantial across tasks (e.g., Construction > JCTI > Analogies ≈ Sequences > Matching), indicating broad overlap in what these measures capture (a minimal numerical sketch of the parallel-analysis logic appears after this list).
  • “Two factors” do not hold up. Forcing a two-factor solution produced weaker fit, near-zero correlations between the putative factors, and notable cross-loadings (e.g., Analogies loading on both). This pattern signals an unstable partition rather than meaningful separable constructs.
  • Task flavor ≠ distinct ability. Construction shows the highest saturation with the general factor, likely because spatial visualization is highly g-loaded. Matching shows the lowest loading but still aligns with the same latent dimension. Differences across subtests reflect measurement emphasis, not separate abilities.
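
To make the retention logic concrete, here is a minimal NumPy sketch of Horn's parallel analysis, followed by an approximation of general-factor loadings from the leading principal component. The correlation matrix, and therefore every printed number, is a hypothetical placeholder rather than the study's data, and the principal-component step is only a rough proxy for the principal-axis factoring described above.

    import numpy as np

    def parallel_analysis(R, n_obs, n_sims=1000, seed=0):
        # Horn's parallel analysis: keep factors whose observed eigenvalues
        # exceed the mean eigenvalues from random data of the same shape.
        rng = np.random.default_rng(seed)
        p = R.shape[0]
        obs = np.sort(np.linalg.eigvalsh(R))[::-1]
        rand = np.empty((n_sims, p))
        for i in range(n_sims):
            X = rng.standard_normal((n_obs, p))
            rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
        return int(np.sum(obs > rand.mean(axis=0))), obs

    # Hypothetical correlation matrix for the five tasks (NOT the study's data).
    tests = ["JCTI", "Matching", "Analogies", "Sequences", "Construction"]
    R = np.array([
        [1.00, 0.35, 0.48, 0.47, 0.52],
        [0.35, 1.00, 0.33, 0.34, 0.38],
        [0.48, 0.33, 1.00, 0.45, 0.49],
        [0.47, 0.34, 0.45, 1.00, 0.50],
        [0.52, 0.38, 0.49, 0.50, 1.00],
    ])

    n_factors, obs_eigs = parallel_analysis(R, n_obs=118)
    print("Factors retained:", n_factors)
    print("Share of variance, first factor: %.0f%%" % (100 * obs_eigs[0] / len(tests)))

    # First-factor loadings via the leading principal component (a rough proxy
    # for the principal-axis loadings summarized above).
    eigvals, eigvecs = np.linalg.eigh(R)
    g = eigvecs[:, -1] * np.sqrt(eigvals[-1])
    g *= np.sign(g.sum())  # orient loadings positively
    for name, lam in zip(tests, g):
        print("%-13s g-loading ~ %.2f" % (name, lam))

With only one eigenvalue of the observed matrix exceeding its random-data counterpart, the sketch retains a single factor, mirroring the pattern reported in the study.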

Significance

For practitioners, the safest interpretation is that these tasks index a common nonverbal reasoning capacity. Total scores are therefore more defensible than fine-grained “profiles” carved from a small set of indicators. In educational and clinical contexts, this supports using a compact battery like JCTI + selected GAMA subtests to obtain a stable index of general reasoning without overinterpreting subtest scatter.
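
As a concrete illustration of leaning on a total score, the short sketch below forms a unit-weighted composite by averaging subtest z-scores. The raw scores and normative means/SDs are invented for the example; a real battery would use its published norms and scaling.

    import numpy as np

    def composite_score(raw, norm_means, norm_sds):
        # Unit-weighted composite: average the z-scores across subtests,
        # rather than interpreting individual subtest "profiles".
        z = (np.asarray(raw, float) - np.asarray(norm_means, float)) / np.asarray(norm_sds, float)
        return z.mean()

    # Hypothetical examinee scores and normative values (illustrative only),
    # ordered JCTI, Matching, Analogies, Sequences, Construction.
    raw = [27, 14, 16, 15, 18]
    means = [24, 12, 14, 13, 15]
    sds = [6, 4, 4, 4, 5]
    print("General reasoning composite (mean z): %.2f" % composite_score(raw, means, sds))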

Future Directions

  • Replicate with larger, demographically diverse samples and add multiple indicators per hypothesized facet (≥3 per factor) to test whether reliable subdimensions emerge when the battery is expanded.
  • Use confirmatory models (bifactor, correlated factors) and report omega-hierarchical and explained common variance to quantify general-factor saturation (see the computational sketch after this list).
  • Compare predictive utility of total versus putative subscale scores for outcomes such as STEM coursework, technical training performance, and complex problem-solving tasks.
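
For the second bullet, omega-hierarchical and explained common variance (ECV) can be computed directly from a standardized bifactor solution. The sketch below assumes each indicator loads on the general factor and on exactly one group factor; the loadings shown are hypothetical placeholders, not estimates from this study.

    import numpy as np

    def bifactor_indices(g_loadings, group_loadings, group_ids):
        # omega-hierarchical: proportion of total-score variance due to the general factor.
        # ECV: share of common variance attributable to the general factor.
        g = np.asarray(g_loadings, float)
        s = np.asarray(group_loadings, float)
        ids = np.asarray(group_ids)
        resid = 1.0 - g**2 - s**2  # unique variance per indicator
        group_sumsq = sum(s[ids == k].sum() ** 2 for k in np.unique(ids))
        omega_h = g.sum() ** 2 / (g.sum() ** 2 + group_sumsq + resid.sum())
        ecv = (g**2).sum() / ((g**2).sum() + (s**2).sum())
        return omega_h, ecv

    # Hypothetical standardized loadings (illustrative only, not study estimates),
    # ordered JCTI, Matching, Analogies, Sequences, Construction.
    g = [0.64, 0.45, 0.58, 0.57, 0.70]
    s = [0.20, 0.28, 0.22, 0.18, 0.30]
    grp = ["abstract", "spatial", "abstract", "abstract", "spatial"]

    omega_h, ecv = bifactor_indices(g, s, grp)
    print("omega-H = %.2f, ECV = %.2f" % (omega_h, ecv))  # high values favor a single general factor

High omega-hierarchical and ECV values would indicate that a single general factor accounts for most of the reliable and common variance, which is the pattern the present analysis suggests.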

Conclusion

The data do not warrant splitting performance into spatial–temporal and abstract reasoning factors. A single, robust general reasoning factor accounts for the common variance across JCTI and GAMA tasks; any residual differences look task-specific rather than factorially distinct.

Reference

Jouve, X. (2018). Exploring underlying factors in cognitive tests: Spatial-temporal reasoning and abstract reasoning abilities. Cogn-IQ Research Papers. https://pubscience.org/ps-1mFWV-3f180b-jGlP
