Workshop A - Measurement Approaches to Exploring Survey Ratings and Rater Effects

Day One: Using Measurement Theory Methods to Model and Evaluate Survey Ratings


Learning Objectives:

After completing this workshop, participants will be able to:

  • Describe the major characteristics of, and theoretical motivations for, polytomous measurement models that can be applied to survey ratings, including Rasch models, Mokken scaling models, and polytomous non-Rasch IRT models.
  • Make informed decisions about which modeling approaches may be appropriate for various survey research purposes and contexts.
  • Use pre-written code to estimate and extract key item- and person-related results from measurement models for survey ratings.
  • Interpret results from measurement model analyses to make informed decisions about item functioning, including rating scale functioning for individual items.

Participants will be provided with a set of pre-course materials that include selected book chapters and an introductory tutorial on installing and using RStudio. These materials can serve as a review or as introductory instruction for individuals who wish to participate in the course but do not have prior training in the prerequisite skills.
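
The hands-on sessions use R. As a convenience, the sketch below installs one plausible set of packages covering the model families named in the outline; this list is an assumption based on the topics, not the official workshop environment.

    # Illustrative setup (assumed, not the official workshop list):
    # eRm and TAM for Rasch-family models, mirt for the GPCM and GRM,
    # mokken for nonparametric Mokken scaling
    install.packages(c("eRm", "TAM", "mirt", "mokken"))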


Tentative Outline of Major Topics:

9:00-9:30 am

Welcome and introductions

Overview of schedule

Software check

9:30-10:45 am

Theoretical overview of Rasch models for survey ratings: the Rating Scale Model (RSM), Partial Credit Model (PCM), and Many-Facet Rasch Model (MFRM)

Interactive practice analysis and interpretation using R

Emphasis on parameter interpretation and rating scale analysis
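
To give a flavor of this session, the following is a minimal sketch of fitting a Rating Scale Model with the eRm package, using eRm's built-in rsmdat example data; the workshop's own code and datasets may differ.

    library(eRm)                   # conditional ML estimation of Rasch-family models
    data(rsmdat)                   # built-in polytomous example dataset
    mod <- RSM(rsmdat)             # Andrich Rating Scale Model
    summary(mod)                   # item parameter estimates
    thresholds(mod)                # rating scale category thresholds
    ppar <- person.parameter(mod)  # person location estimates
    itemfit(ppar)                  # item infit/outfit mean square statistics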

11:00 am-12:00 pm

Theoretical overview of non-Rasch models for survey ratings: the Generalized Partial Credit Model (GPCM), Graded Response Model (GRM), and Mokken scaling

1:15-2:00 pm

Interactive practice analysis with non-Rasch models using R

Emphasis on parameter interpretation and rating scale analysis
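
A comparable sketch for the non-Rasch session, assuming the mirt package for the GPCM and GRM and the mokken package for Mokken scaling, with mirt's built-in Science example data; the workshop materials may use different data and code.

    library(mirt)     # parametric polytomous IRT models
    library(mokken)   # nonparametric Mokken scaling

    data(Science)     # built-in 4-item ordinal example dataset (categories 1-4)

    gpcm_mod <- mirt(Science, 1, itemtype = "gpcm")    # Generalized Partial Credit Model
    grm_mod  <- mirt(Science, 1, itemtype = "graded")  # Graded Response Model
    coef(grm_mod, IRTpars = TRUE, simplify = TRUE)     # slopes and category thresholds

    sci <- as.matrix(Science) - 1   # rescore to start at 0 before Mokken analyses
    coefH(sci)                      # Loevinger's H scalability coefficients
    check.monotonicity(sci)         # monotonicity checks for each item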

2:15-3:30 pm

Practical issues: missing data/sparseness, differential item functioning, person misfit

Interactive practice opportunities will be interspersed with theoretical discussions
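
As an illustration of two of these issues, the sketch below screens for person misfit and runs simple DIF checks in eRm; the 0/1 grouping variable is invented purely for illustration, and a substantive grouping (e.g., language background) would be used in practice.

    library(eRm)
    data(rsmdat)
    mod  <- RSM(rsmdat)
    ppar <- person.parameter(mod)
    personfit(ppar)                             # person infit/outfit for misfit screening

    grp <- rep(0:1, length.out = nrow(rsmdat))  # illustrative, arbitrary grouping
    LRtest(mod, splitcr = grp)                  # Andersen LR test of invariance (global DIF)
    Waldtest(mod, splitcr = grp)                # item-level Wald tests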

3:30-4:00 pm

Overview of resources for continued learning and support

Q&A or one-on-one assistance

Day Two: Measurement Approaches to Identifying and Exploring Rater Effects in Language Performance Assessments


Learning Objectives:

After completing this workshop, participants will be able to:

  • Describe major types of rater effects from a measurement modeling perspective.
  • Make informed decisions about which modeling approaches may be appropriate for evaluating ratings in various language assessment contexts.
  • Use pre-written code to estimate and extract measurement model results related to various rater effects.
  • Interpret results from rater analyses based on measurement models to make informed decisions about rating quality.


Tentative Outline of Major Topics:

9:00-9:30 am

Welcome and introductions

Overview of schedule

Software recheck

9:30-10:45 am

Theoretical overview of rater effects

Theoretical overview of measurement models for evaluating raters (emphasis on MFRM)

Start interactive practice analysis and interpretation using R, covering:

  • Rater severity/leniency
  • Centrality/extremism
  • Rater misfit
  • Rater bias (differential rater functioning)
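
A minimal, self-contained sketch of this workflow with the TAM package, using simulated ratings (the rater severities, design, and sample size are invented for illustration): fit a many-facet Rasch model, then inspect rater severity and fit.

    library(TAM)
    set.seed(42)

    # Simulate 0-3 ratings: each person is scored on all items by one assigned rater
    n_persons <- 300; n_items <- 5
    theta     <- rnorm(n_persons)                  # person locations
    item_diff <- seq(-1, 1, length.out = n_items)  # item difficulties
    rater_sev <- c(-0.5, 0, 0.3, 0.8)              # invented rater severities
    rater     <- sample(length(rater_sev), n_persons, replace = TRUE)

    resp <- sapply(seq_len(n_items), function(i)
      sapply(seq_len(n_persons), function(p) {
        eta <- theta[p] - item_diff[i] - rater_sev[rater[p]]
        sample(0:3, 1, prob = exp(0:3 * eta))      # adjacent-categories probabilities
      }))
    colnames(resp) <- paste0("item", seq_len(n_items))

    mod <- tam.mml.mfr(resp = resp,
                       facets = data.frame(rater = factor(rater)),
                       formulaA = ~ item + rater + step)   # rating-scale-style MFRM

    subset(mod$xsi.facets, facet == "rater")   # rater severity/leniency estimates
    msq.itemfit(mod)                           # infit/outfit for rater misfit screening

Extending the facet formula with interaction terms (e.g., formulaA = ~ item*rater + step) is one way to explore rater-by-item bias within the same framework.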

11:00 am-12:00 pm

Continue interactive practice analysis as needed

1:15-2:00 pm

Practical issues, part 1: Modeling rater effects with sparse designs

Interactive practice opportunities will be interspersed with theoretical discussions
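
A sketch of the same model under an incomplete design, again with simulated data: each simulated person is rated on only three of six items, and TAM treats the resulting NA entries as not administered. Whether parameters remain estimable depends on the connectedness of the rating design.

    library(TAM)
    set.seed(7)

    n_persons <- 400
    theta     <- rnorm(n_persons)
    rater_sev <- c(-0.5, 0, 0.2, 0.6)          # invented severities, 4 raters
    rater     <- sample(length(rater_sev), n_persons, replace = TRUE)

    resp <- sapply(1:6, function(i)            # 6 items, ratings 0-3
      sapply(1:n_persons, function(p)
        sample(0:3, 1, prob = exp(0:3 * (theta[p] - rater_sev[rater[p]])))))
    colnames(resp) <- paste0("item", 1:6)

    # Sparseness: each person is rated on only 3 of the 6 items
    for (p in 1:n_persons) resp[p, sample(6, 3)] <- NA

    mod <- tam.mml.mfr(resp = resp,
                       facets = data.frame(rater = factor(rater)),
                       formulaA = ~ item + rater + step)
    subset(mod$xsi.facets, facet == "rater")   # still estimable if the design is linked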

2:15-3:30 pm

Practical issues, part 2: Longitudinal designs with raters (equating and rater drift)

Interactive practice opportunities will be interspersed with theoretical discussions
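
One sketch of examining rater drift, again with simulated data: fit the model separately by occasion and compare rater severity estimates. The sim_ratings helper is hypothetical, and real longitudinal designs need common elements (shared persons, items, or anchored raters) to place the two occasions on one scale.

    library(TAM)
    set.seed(1)

    # Hypothetical helper: simulate 0-3 ratings under given rater severities
    sim_ratings <- function(sev, n_persons = 250, n_items = 4) {
      theta <- rnorm(n_persons)
      rater <- sample(length(sev), n_persons, replace = TRUE)
      resp <- sapply(seq_len(n_items), function(i)
        sapply(seq_len(n_persons), function(p)
          sample(0:3, 1, prob = exp(0:3 * (theta[p] - sev[rater[p]])))))
      colnames(resp) <- paste0("item", seq_len(n_items))
      list(resp = resp, facets = data.frame(rater = factor(rater)))
    }

    t1 <- sim_ratings(c(-0.4, 0, 0.4))   # occasion 1
    t2 <- sim_ratings(c(-0.4, 0, 0.9))   # occasion 2: rater 3 drifts harsher

    m1 <- tam.mml.mfr(resp = t1$resp, facets = t1$facets, formulaA = ~ item + rater + step)
    m2 <- tam.mml.mfr(resp = t2$resp, facets = t2$facets, formulaA = ~ item + rater + step)

    # Compare severity estimates by occasion (scales are treated as comparable
    # here for simplicity; in practice, anchoring/equating is required first)
    sev1 <- subset(m1$xsi.facets, facet == "rater")[, c("parameter", "xsi")]
    sev2 <- subset(m2$xsi.facets, facet == "rater")[, c("parameter", "xsi")]
    merge(sev1, sev2, by = "parameter", suffixes = c(".t1", ".t2"))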

3:30-4:00 pm

Overview of resources for continued learning and support

Q&A or one-on-one assistance

Speaker: Stefanie A. Wind, University of Alabama


Stefanie A. Wind is an Associate Professor of Educational Measurement at the University of Alabama. She received her PhD in Educational Measurement from Emory University. Her primary research interests include methodological issues in educational measurement, with emphases on rater-mediated assessments, rating scales, latent trait models, and nonparametric item response theory. Her publications appear in methodological journals in educational measurement as well as in applied journals. She has authored and co-authored several books, including Exploring Rating Scale Functioning for Survey Research, Rasch Measurement Theory Analysis in R, and Invariant Measurement with Raters and Rating Scales. Her research awards include the Alicia Cascallar Early Career Scholar Award from the National Council on Measurement in Education and the Georg William Rasch Early Career Scholar Award from the American Educational Research Association.


This workshop is kindly sponsored by the Center for Applied Linguistics.
