Objectives
After successful completion of this course, the students will be able to:
Contents
1 Standard Setting for Testing
1.1 Definitions and concept of standard setting
1.2 Standard setting: An enduring need
1.3 General approaches to standard setting
1.4 Standard setting
1.4.1 Policy issues
1.4.2 Item scoring criteria
1.4.3 Total test performance standards
1.5 Benefits of standard setting
2 Common Elements in Setting Performance Standards
2.1 Purpose
2.2 Choosing a standard setting method
2.3 Performance level labels and descriptions
2.4 Key conceptualizations
2.5 Selecting and training participants
2.6 Professional guidelines for standard setting
2.7 Evaluating standard setting
2.8 Providing feedback to participants
3 Development of Table of Specifications
3.1 Bloom's Taxonomy
3.2 SOLO Taxonomy
4 Test Development
4.1 Types of Tests
4.2 Extended Response Questions
4.3 Constructed Response Questions
4.4 Types of Essay Tests
4.5 Developing test items
4.6 Improving test items through repeated reviews and experts' opinions
5 Item Analysis
5.1 Definition, advantages and limitations of item analysis
5.2 Test characteristics (difficulty level, discrimination index, distractor power, etc.)
5.3 Reviewing and marking the tests (rubrics)
5.4 Item analysis using Iteman, Quest or other software
5.5 Ensuring validity and reliability of test items
5.6 Test administration and assembling
5.7 Difference between NRT and CRT item analysis
5.8 Practicum on use of item analysis (demonstration)
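The test characteristics named in Unit 5 (difficulty level, discrimination index) are classical item-analysis statistics. As an illustrative sketch only, not part of the course text, the two indices can be computed from dichotomous (0/1) item scores; all data and function names below are hypothetical:

```python
# Classical item analysis sketch (hypothetical data):
# difficulty index p and upper-lower discrimination index D for one item.

def item_difficulty(scores):
    """Proportion of examinees answering the item correctly (0..1)."""
    return sum(scores) / len(scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """D = p(upper group) - p(lower group), groups cut by total test score."""
    n = max(1, round(len(total_scores) * fraction))
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = ranked[:n], ranked[-n:]
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical responses: 1 = correct, 0 = incorrect for one item,
# alongside each examinee's total test score.
item = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
totals = [38, 35, 12, 30, 15, 10, 33, 28, 14, 9]
print(item_difficulty(item))                 # 0.5
print(discrimination_index(item, totals))    # 1.0
```

The 27% upper/lower split is a common convention in classical test theory; dedicated packages such as Iteman (listed in 5.4) report these same statistics, along with distractor-level analyses.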
6 Scoring of Extended Response Questions (Essay Type)
6.1 Scoring standards for Essay Type tests
6.2 Use of Command Words in constructing Marking Scheme
6.3 Inter-Rater Reliability
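One common index of the inter-rater reliability topic above is Cohen's kappa, which corrects observed agreement between two raters for chance agreement. The sketch below is illustrative only (hypothetical data and function names, not drawn from the course materials):

```python
# Cohen's kappa sketch (hypothetical data): chance-corrected agreement
# between two raters assigning categorical grades to the same essays.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    # Expected agreement if each rater assigned grades independently
    # at their own marginal rates.
    p_exp = sum(count_a[c] * count_b[c] for c in categories) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters scoring ten essays on an A/B/C scale (hypothetical).
a = ["A", "A", "B", "B", "C", "A", "B", "C", "C", "A"]
b = ["A", "B", "B", "B", "C", "A", "B", "C", "B", "A"]
print(round(cohens_kappa(a, b), 3))    # 0.701
```

Raw percent agreement here is 0.8, but kappa is lower because some agreement would occur by chance; this distinction is why kappa-type statistics are preferred when reporting essay-scoring consistency.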
7 Scoring Objective Type Tests
7.1 Item analysis
7.2 Difficulty Level
7.3 Discriminatory Power
8 Process of Test Standardization
9 Testing Higher Order Learning
9.1 Development of Rubrics
9.2 Use of Rubrics
10 Seminar on Issues in Test Construction and Standardization
Suggested Readings:
Camilli, G., Cizek, G. J. and Lugg, C. A. Psychometric theory and the validation of performance standards: History and future perspectives.
Cizek, G. J. and Bunch, M. B. (2007) A guide to establishing and evaluating performance standards on tests.
Cizek, G. J. and Sternberg, R. J. (2001) Setting performance standards: Concepts, methods, and perspectives. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers.
Ricker, K. L. (2006) Setting cut scores: Critical review of Angoff and Modified Angoff Method. Available online at: http://www.education.ualberta.ca/educ/psych (retrieved on 15 January 2007).
Sireci, S. G. and Clauser, B. E. (1999) Practical issues in setting standards on Computerized Adaptive Tests.
Journals in the Area of Assessment and Evaluation
Applied Measurement in Education
Educational Measurement
Educational Measurement: Issues and Practice
Journal of Educational Measurement
Practical Assessment, Research and Evaluation (PARE)
Quality in Higher Education
Review of Educational Research
Studies in Higher Education
Useful Websites
ACT research reports: http://act.org/research/reports/index.html
Buros Institute: http://www.unl.edu/buros/
Educational Testing Service: http://www.ets.org/research/index.html
NCME: http://www.ncme.org/
PISA: http://www.pisa.oecd.org/
TIMSS/PIRLS: http://timss.bc.edu/
PBS TeacherLine: http://www.cnets.org/teachers/t_stands.html
Journal of PARE: http://pareonline.net/
Standard Setting: http://www.education.ualberta.ca/educ/psych
ASSESSMENT CRITERIA
Sessional marks = 20
Mid Term Exam = 30
Final Term Exam = 50