Assessing Missing Data Handling Methods in Sparse Educational Datasets
Statistical Methods and Data Analysis

Assessing Missing Data Handling Methods in Sparse Educational Datasets

The study by Xiao and Bulut (2020) evaluates how different methods for handling missing data perform when estimating ability parameters from sparse datasets. Using two Monte Carlo simulations, the research highlights the strengths and limitations of four approaches, providing valuable insights for researchers and practitioners in educational and psychological measurement. …
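The excerpt does not name the four approaches the authors compared, but a quick sketch shows why method choice matters in sparse data: with enough planned missingness, listwise deletion can discard every examinee, while simple imputation at least preserves the sample. The matrix below is hypothetical illustration, not data from the study.

```python
import numpy as np

# Hypothetical sparse item-response matrix: 5 examinees x 6 items,
# NaN marks items an examinee was never administered.
responses = np.array([
    [1.0, 0.0, np.nan, 1.0, np.nan, 1.0],
    [0.0, np.nan, 1.0, 0.0, 1.0, np.nan],
    [1.0, 1.0, 1.0, np.nan, 0.0, 1.0],
    [np.nan, 0.0, 0.0, 1.0, np.nan, 0.0],
    [1.0, np.nan, 1.0, 1.0, 1.0, np.nan],
])

# Listwise deletion: keep only examinees with fully observed rows.
# In this sparse design, no row is complete, so the sample vanishes.
complete = responses[~np.isnan(responses).any(axis=1)]

# Person-mean imputation: fill each gap with that examinee's observed mean,
# retaining all 5 examinees for downstream ability estimation.
row_means = np.nanmean(responses, axis=1, keepdims=True)
imputed = np.where(np.isnan(responses), row_means, responses)
```

The contrast between `complete` (empty) and `imputed` (fully filled) is the kind of trade-off the simulations quantify: deletion is unbiased but wasteful, while imputation keeps the sample at the cost of distorting score variance.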

Improving Detection of Noncredible Results with Nonmemory-Based Performance Validity Tests
Psychological Measurement and Testing

Evaluating Nonmemory-Based PVTs for More Accurate Neuropsychological Assessments

The article “Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests” by Webber, Critchfield, and Soble (2020) analyzes the effectiveness of nonmemory-based Performance Validity Tests (PVTs) in detecting noncredible performance during neuropsychological assessments. The study evaluates tools like the Dot Counting Test (DCT) and variations of the WAIS-IV Digit …

The Role of Item Distributions in Reliability Estimation
Statistical Methods and Data Analysis

The Role of Item Distributions in Reliability Estimation

Olvera Astivia, Kroc, and Zumbo’s (2020) study examines the assumptions underlying Cronbach’s coefficient alpha and how the distribution of items affects reliability estimation. By introducing a new framework rooted in Fréchet-Hoeffding bounds, the authors offer a fresh perspective on the limitations of this widely used reliability measure. Their work provides …
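For readers less familiar with the measure under discussion, coefficient alpha is computed from the item variances and the variance of the total score. A minimal sketch with hypothetical Likert-scale responses (the data below are invented for illustration, not drawn from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
alpha = cronbach_alpha(scores)
```

Note that nothing in the formula restricts the item distributions, which is precisely the gap the authors probe: how skewed or bounded item distributions constrain the covariances, and hence the alpha values, that are even attainable.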