The Jouve-Cerebrals Word Similarities (JCWS) is a 150-item open-response verbal-reasoning test built around a distinctive item format that combines partial-letter cues with semantic and analogical reasoning. Each item presents a target word’s letters scrambled or partially given, plus a verbal clue: for example, “(A _ I K L N T) means nearly the same as SPEAKING” — the examinee infers that the answer is TALKING from both the letter constraints and the meaning relation. The format unifies vocabulary depth, orthographic processing, and inductive reasoning in a way that more conventional verbal tests do not, and it produces a Verbal Reasoning Index (VRI) on the standard IQ metric (M = 100, SD = 15) with reliability ξ = .981 in a 407-examinee development sample (Cogn-IQ, 2025). The JCWS is positioned by its manual as an active research instrument rather than a fully validated clinical battery.
Three subtests, one composite
The JCWS contains 150 items distributed equally across three subtests. Each subtest uses the same letter-cue format but emphasizes a different cognitive operation:
- Nearly The Same (NTS, 50 items, ξ = .952). The examinee produces a word that shares meaning with a verbal clue, given partial letters of the target. Example format: “(A _ I K L N T) means nearly the same as SPEAKING.” This is the closest to a traditional vocabulary task, but the letter constraint adds an orthographic-retrieval component that pure synonym tests do not require.
- Is To As (ITA, 50 items, ξ = .958). The examinee completes a four-term verbal analogy with the same letter-cue scaffold. Example format: “(A _ I K L N T) is to SPEAKING as WALKING is to MOVING.” The format taps verbal concept formation (Gc/VL) plus inductive reasoning (Gf/I) — both required to identify the relevant relation and to retrieve the target word.
- Which Relates To (WRT, 50 items, ξ = .945). The examinee identifies a word that fits a logical progression. Example format: “(E N O) relates to TWO, which relates to THREE, which relates to (F _ _ U).” This subtest emphasizes general sequential reasoning (Gf/RG) on top of the lexical and orthographic demands shared with NTS and ITA.
The three subtest scaled scores (M = 10, SD = 3) combine into the Verbal Reasoning Index (VRI) through a Modified Tellegen-Briggs Formula 4 with cubic correction, which addresses tail-compression bias and stabilizes scores at the distribution extremes. The VRI is reported with a 95% confidence interval of approximately ±4 points (SEM ≈ 2.07). Reliable change between two administrations requires roughly ≥ 6 VRI points (Cogn-IQ, 2025).
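The manual’s precision figures can be reproduced from classical test theory. A minimal sketch follows; the SEM and confidence-band formulas are standard, while the reliable-change threshold uses the familiar Jacobson-Truax form, which is an assumption about how the manual derives its roughly 6-point criterion:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - rxx)."""
    return sd * math.sqrt(1.0 - reliability)

def ci95_halfwidth(sem_value: float) -> float:
    """Half-width of a 95% confidence band around an observed score."""
    return 1.96 * sem_value

def reliable_change_threshold(sem_value: float) -> float:
    """Smallest score difference between two administrations that
    exceeds measurement error at the 95% level (Jacobson-Truax RCI)."""
    return 1.96 * math.sqrt(2.0) * sem_value

s = sem(15, 0.981)
print(round(s, 2))                              # 2.07
print(round(ci95_halfwidth(s), 1))              # 4.1 -> the manual's "±4 points"
print(round(reliable_change_threshold(s), 1))   # 5.7 -> the manual's "≈ 6 points"
```

With SD = 15 and ξ = .981, the three reported values (SEM ≈ 2.07, CI ≈ ±4, reliable change ≈ 6) all fall out of these formulas, which is a useful internal-consistency check on the manual’s numbers.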
Reliability via JRRE (Jouve’s Xi)
Reliability for the JCWS is computed using Jouve’s Randomized Reliability Estimation (JRRE; Jouve, 2025), also referred to as Jouve’s Xi (ξ). Where Cronbach’s alpha computes a single internal-consistency value from the observed item covariances and assumes essentially tau-equivalent items, JRRE estimates reliability through repeated randomized split-half resampling: items are randomly partitioned into halves, the half-scores are correlated, the correlation is Spearman-Brown adjusted, and the procedure is averaged across many permutations. The output is a reliability distribution with confidence intervals rather than a single point estimate, robust to any particular split or item ordering. Jouve (2025) reports a comprehensive simulation study comparing JRRE against Cronbach’s α, KR-20, McDonald’s ωt, eigenvalue-based lower bounds, and Guttman’s λ coefficients across Rasch, 2PL, 3PL, 4PL, GRM, GPCM, and NRM models, documenting the conditions under which classical coefficients underestimate or distort reliability and JRRE provides more accurate estimates.
For the JCWS development sample (N = 407, 150 items), JRRE yields:
- Total scale (VRI): ξ = .981 (95% CI [.972, .986])
- NTS: ξ = .952 (1,774 iterations)
- ITA: ξ = .958 (2,607 iterations)
- WRT: ξ = .945 (5,552 iterations)
These values are high in absolute terms and consistent with what would be expected for a long, content-homogeneous verbal-reasoning instrument. The negative skew in the JRRE distribution (typical at high reliability) reflects the ceiling compression that occurs as reliability approaches 1.0; the manual emphasizes median and percentile bands of the distribution rather than mean-only point estimates.
Subtest intercorrelations and what they imply
Within-test subtest correlations in the development sample (N = 407) range from .746 to .868; the .728 average intercorrelation reported in the factor study reflects the full six-variable matrix that also includes the JCCES subtests. These values are high — substantially higher than typical between-subtest correlations in multi-construct cognitive batteries — and they reflect the subtests sharing the same item-format scaffold and the same underlying verbal-reasoning factor. The intercorrelation pattern supports computing a single VRI composite while retaining subtest-level scores for profile description.
Factor structure: a strongly unidimensional general factor
A factor analysis (N = 28, a small sample for this purpose, so loadings warrant some caution) examined the JCWS subtests jointly with the JCCES subtests (Verbal Analogies, Math Problems, General Knowledge) to characterize the JCWS’s structure within a broader Gc nomological network. Factorability indices supported the analysis: Kaiser-Meyer-Olkin sampling adequacy = .822 (“very good”), Bartlett’s test of sphericity χ² = 170.75 (p < .001), matrix determinant = 0.0009 (no problematic multicollinearity), average intercorrelation = .728 (Cogn-IQ, 2025).
The general-factor evidence is striking:
- Explained Common Variance (ECV) = 92.3% — of the variance shared across all six indicators (three JCWS subtests, three JCCES subtests), 92.3% loads on the general factor.
- Omega Hierarchical (ωh) = 1.000 — indicating a unidimensional structure where the general factor accounts for essentially all reliable variance.
- First eigenvalue = 4.65 (77.5% of total variance), with an eigenvalue ratio of 7.05:1 between the first and second components, well above conventional thresholds for unidimensionality.
- g-loadings for JCWS subtests: NTS .939, WRT .923, ITA .899; for JCCES indicators: VA .920, GK .856, MP .813. All six load strongly on the general factor.
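The reported figures are internally consistent, as a quick arithmetic check shows; the second eigenvalue below is backed out from the reported 7.05:1 ratio rather than published directly:

```python
first_eigenvalue = 4.65
n_indicators = 6  # three JCWS + three JCCES subtests

# Share of total variance on the first component:
share = first_eigenvalue / n_indicators
print(round(share, 3))  # 0.775 -> the reported 77.5%

# Second eigenvalue implied by the reported 7.05:1 ratio:
second = first_eigenvalue / 7.05
print(round(second, 2))  # 0.66, well below the Kaiser criterion of 1.0
```

A second eigenvalue well under 1.0 is consistent with the unidimensional reading the ECV and ωh statistics support.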
The CHC mapping in the manual labels NTS as primarily Gc/VL (lexical knowledge), with VL/LD plus secondary I (induction) for ITA and secondary RG (general sequential reasoning) for WRT. These narrow-ability labels describe content emphasis rather than separate factors in the analysis — the empirical structure is dominated by g.
Convergent validity
The JCWS technical manual reports convergent-validity correlations against four criterion measures (Cogn-IQ, 2025). Both observed and disattenuated coefficients are reported, where disattenuation uses the Spearman correction with reliability inputs of ξ = .981 for the JCWS VRI and the published reliability estimates for each criterion.
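The Spearman correction itself is a one-liner. In the sketch below, the criterion reliability of .85 is backed out for illustration (the manual’s published criterion reliabilities are not reproduced here):

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Spearman correction for attenuation: r / sqrt(rxx * ryy)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Example: JCCES Verbal Analogies row, with xi = .981 for the JCWS VRI
# and an assumed criterion reliability of .85.
print(round(disattenuate(0.83, 0.981, 0.85), 2))  # 0.91
```

With these inputs the observed .83 steps up to the reported disattenuated .91, so the reliabilities implied by the manual’s table are in this neighborhood.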
VRI convergent validity:
- WAIS-IV Verbal Comprehension Index: r = .86 (N = 15); disattenuated r’ = .89
- IAW Vocabulary Proficiency Index: r = .80 (N = 57); disattenuated r’ = .83
- JCCES Cognitive Acumen Index (CAI): r = .66 (N = 28); disattenuated r’ = .68
- JCCES Verbal Acumen Index (VAI): r = .71 (N = 28); disattenuated r’ = .74
- JCCES Verbal Analogies (VA) subtest: r = .83 (N = 28); disattenuated r’ = .91
- JCCES Math Problems (MP) subtest: r = .51 (N = 28); disattenuated r’ = .54
- JCCES General Knowledge (GK) subtest: r = .66 (N = 28); disattenuated r’ = .69
The pattern is informative for what it tells us about the JCWS’s construct. The strongest correlation is with the JCCES Verbal Analogies subtest (.83 observed, .91 disattenuated) — a subtest that uses an analogous A:B::C:D format without the letter-cue scaffold. This indicates that the JCWS’s letter-cue items, despite their distinctive surface format, are tapping the same verbal-analogical-reasoning construct as more conventional analogy tests. The .80 correlation with the IAW VPI (a pure vocabulary measure) places the JCWS in the same Gc neighborhood as established vocabulary measures. The lower correlation with JCCES MP (.51) — the math-problems subtest — is the appropriate differential-validity pattern: the JCWS does not measure quantitative ability.
The .86 correlation with WAIS-IV VCI is high but based on a small sample (N = 15). The manual flags this coefficient as provisional, noting that “replication with larger Ns is in progress; treat those specific coefficients as provisional.” This is appropriate transparency about a research-instrument validity coefficient that has not yet been independently replicated.
The letter-cue item format and what it adds
The JCWS’s signature design feature — partial letters of the target word presented alongside the semantic clue — is unusual among standardized verbal tests. Most vocabulary and verbal-analogy measures present either pure semantic content (definitions, analogy stems with no orthographic information) or pure orthographic content (letter puzzles with no semantic anchor). The JCWS combines both, which has measurement implications worth understanding.
The orthographic constraint serves two functions. First, it sharply narrows the response space. Without the letter cue, a synonym task admits hundreds of plausible responses to a typical clue; with it, the response space typically narrows to a small handful of words that satisfy both the meaning and the letter pattern. This makes automated scoring tractable while preserving the production-rather-than-recognition format. Second, the orthographic constraint adds a retrieval-from-partial-cue component that, in prior research, loads on Glr (long-term storage and retrieval) over and above the pure Gc loading of definitional vocabulary tasks.
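The narrowing effect is easy to demonstrate. A sketch, assuming the cue gives the answer’s letters in alphabetical order with “_” standing for a withheld letter — an inference from the published TALKING example, not a documented specification:

```python
def fits_letter_cue(word: str, cue: str) -> bool:
    """Check whether a word satisfies a JCWS-style letter cue.

    The cue is assumed to be the answer's letters in alphabetical
    order, with '_' for a withheld letter (TALKING -> 'A _ I K L N T').
    """
    slots = cue.replace(" ", "")
    letters = sorted(word.upper())
    if len(letters) != len(slots):
        return False
    return all(s == "_" or s == l for s, l in zip(slots, letters))

# Narrowing a synonym search for "means nearly the same as SPEAKING":
candidates = ["TALKING", "CHATTING", "ORATING", "SPEAKING", "VOICING"]
matches = [w for w in candidates if fits_letter_cue(w, "A _ I K L N T")]
print(matches)  # ['TALKING']
```

Of five semantically plausible candidates, only one survives the letter constraint, which is exactly the property that makes open-response scoring tractable.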
The trade-off is in construct purity. A pure Gc/VL measure isolates lexical knowledge; the JCWS combines lexical knowledge with orthographic-retrieval ability and (in WRT) sequential reasoning. The factor analysis indicates these are well-integrated within a single general factor in this sample, but users should understand that the JCWS is not equivalent to a pure vocabulary measure such as the IAW VPI — the formats overlap substantially (r = .80 disattenuated to .83) but are not interchangeable.
Practical implications
For researchers and clinicians considering the JCWS:
- Use the JCWS when extended-ceiling verbal reasoning is the construct of interest. The 150-item bank with three subtests provides measurement at higher ability levels than typical clinical batteries reach, useful for research with high-ability samples and for situations where standard tests reach their ceiling.
- Recognize the research-instrument framing. The technical manual explicitly positions the JCWS as developing psychometric evidence, particularly for criterion correlations with the WAIS where current N = 15. High-stakes clinical applications should rely on instruments with more accumulated independent-replication evidence.
- Account for the linguistic and orthographic dependencies. Cormier et al. (2022) showed that examinee characteristics — particularly linguistic background and exposure to the test language — affect cognitive test performance more strongly than test characteristics in many contexts. The JCWS’s letter-cue format adds an orthographic-language-specific component that compounds this consideration. Cross-linguistic and cross-cultural use requires either norm reference within the target population or careful interpretive caveats.
- Pair with measures of other broad abilities. The factor analysis indicates the JCWS is essentially unidimensional within its content space. A complete cognitive profile requires complementary measures of fluid reasoning, processing speed, working memory, and other broad CHC abilities.
- Plan for the 150-item length. The instrument is thorough but long; the manual recommends scheduling breaks and monitoring effort, and notes that the 95% confidence interval of approximately ±4 VRI points reflects the high reliability of the long form.
Open research directions
Independent replication of the convergent-validity correlations — particularly the WAIS VCI association where the current N = 15 is small — is the most important next step in the JCWS evidence base. Differential item functioning analyses across linguistic, educational, and demographic subgroups would clarify the bounds of fair use. Test-retest reliability across longer intervals would complement the strong internal-consistency evidence already documented. The manual is transparent about these directions and frames the JCWS appropriately as an instrument under continued psychometric development rather than a finalized clinical battery.
The takeaway
The JCWS is a 150-item open-response verbal-reasoning test with a distinctive letter-cue item format, three subtests (Nearly The Same, Is To As, Which Relates To), and a composite Verbal Reasoning Index on the IQ metric. JRRE reliability of ξ = .981 across N = 407 and a unidimensional factor structure (ECV = 92.3%, ωh = 1.000, g-loadings .899–.939) support a strong general-verbal-reasoning interpretation. Convergent-validity correlations span r = .80–.86 with established verbal measures (IAW VPI, WAIS VCI), with the strongest disattenuated correlation (.91) against the JCCES Verbal Analogies subtest. The instrument is appropriately positioned by its technical manual as an active research instrument with continuing psychometric development.
References
- Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
- Cogn-IQ. (2025). JCWS Technical Manual. Cogn-IQ. https://www.cogn-iq.org/methods/jcws-manual/
- Cormier, D. C., Bulut, O., McGrew, K. S., & Kennedy, K. (2022). Linguistic influences on cognitive test performance: Examinee characteristics are more important than test characteristics. Journal of Intelligence, 10(1), 8. https://doi.org/10.3390/jintelligence10010008
- Jouve, X. (2023). Psychometric properties of the Jouve Cerebrals Word Similarities test: An evaluation of vocabulary and verbal reasoning abilities. Cogn-IQ Research Papers. https://pubscience.org/ps-1mSQT-dafbe3-F9Jw
- Jouve, X. (2025). When alpha fails: Jouve’s Randomized Reliability Estimation (ξ) versus classical reliability coefficients in Rasch, 2PL, 3PL, 4PL, GRM, GPCM, and NRM models. Cogn-IQ Research Papers. https://pubscience.org/ps-1mYdi-014f7f-ormI
- Schneider, W. J., & McGrew, K. S. (2018). The Cattell–Horn–Carroll theory of cognitive abilities. In D. P. Flanagan & E. M. McDonough (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (4th ed., pp. 73–163). Guilford Press.