EduMP-703 Instrument Development and Data Analysis

Objectives

After studying this course, students will be able to:

  1. Comprehend the basic concepts of instrument development and data analysis
  2. Develop different types of research instruments
  3. Understand the statistical concepts most frequently applied in educational research
  4. Apply various statistical techniques in analyzing research data in education
  5. Apply appropriate statistics in qualitative and quantitative research
  6. Use SPSS for descriptive and inferential statistics.

 

Course Outline

 

Part-I Instrument Development

Unit –1:    Educational and Psychological Measurement – Basic Concepts

 

  • Measurement in the social sciences – psychometry and edumetry
    • Role of mathematics and statistics
    • Validation and standardization of measures
    • Advantages of standardized measures (objectivity, quantification, communication, economy, scientific generalization)

 

  • Measurement scales (nature of variables)
  • Types based on level of measurement
  • Decisions about measurement scales
  • Classification as measurement
  • Recent Trends in measurement
  • Traditional Approaches to Scaling.

 

  • Evaluation of Models
  • Scaling stimuli versus scaling people
  • Psychophysics and psychophysical scaling
  • Types of stimuli and responses
  • Judgments versus sentiments
  • Absolute versus comparative responses
  • Preferences versus similarity responses
  • Specified versus unspecified attributes

 

  • Methods for converting responses to stimulus scales
  • Ordinal, interval and ratio methods
  • Deterministic models for Scaling
  • Probabilistic models for scaling

 

Unit –2:        Types of Scales/Measures

  • Thurstone scales
  • Likert scales
  • Semantic differential scales
  • Multidimensional scales
  • Other types of scales, e.g. bipolar scales
  • Single-item vs. multi-item scales
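 

The single-item vs. multi-item distinction above can be made concrete with a small scoring sketch. This is a minimal illustration, not a prescribed procedure: the item values and the reverse-keyed position are invented, and a 5-point Likert scale is assumed, with negatively worded items reverse-keyed as (points + 1) − response.

```python
# Score a multi-item 5-point Likert scale, reverse-keying
# negatively worded items (hypothetical data for illustration).

def score_likert(responses, reverse_keyed=(), points=5):
    """Sum item responses, reversing the keyed item positions."""
    total = 0
    for i, value in enumerate(responses):
        if i in reverse_keyed:
            value = (points + 1) - value  # e.g. 5 -> 1, 2 -> 4 on a 5-point scale
        total += value
    return total

# One respondent's answers to a 3-item scale; item 1 is negatively worded.
answers = [5, 2, 4]
print(score_likert(answers, reverse_keyed={1}))  # 5 + (6 - 2) + 4 = 13
```

A single-item measure is just one such response taken alone; summing across several reverse-checked items is what makes a multi-item scale more reliable.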

 

Unit –3:    Paradigms and Approaches to Scale Development:

  • Churchill's paradigm
  • Anderson's (1977) and Gerbing's (1988) paradigms
  • Loewenthal's (1996) approach
  • Eclectic approach

 

Unit –4:   An Overview of Psychometric Properties of a Scale

  • Psychometric properties of a scale
  • Internal consistency
  • Reliability
  • Validity
  • Dimensionality
  • Stability of dimensionality (factor structure)
  • Scale length (number of items)
  • Validation
  • Standardization

 

Unit –5:    Assessment of Scale Reliability and Validity

  • Concept of the reliability of a scale
  • Sources of error
  • Estimates of various types of reliability and their coefficients (e.g. Cronbach's alpha)
  • Internal consistency/coefficient alpha
  • Test-retest reliability, split-half reliability
  • Uses of the reliability coefficient
  • Analysis of variance (ANOVA) approach to reliability
  • Generalizability theory
  • Scale validity – basic concepts and general considerations
  • Types of validity
  • Explication of constructs
  • Issues concerning validity (relations among the various types, nomenclature/different names, and the place of factor analysis)
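 

The internal-consistency and split-half topics above can be illustrated with a short computation. This is a sketch only (the response matrix is invented, and real course work would run these in SPSS): coefficient alpha from a respondents-by-items matrix, and the Spearman-Brown step-up for a split-half correlation.

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Coefficient alpha from a respondents-by-items score matrix."""
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # columns = items
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

def spearman_brown(r_half):
    """Step up a split-half correlation to full-test reliability."""
    return 2 * r_half / (1 + r_half)

# Four respondents answering a 3-item scale (toy data).
scores = [[3, 4, 3],
          [4, 5, 4],
          [2, 3, 3],
          [5, 5, 4]]
print(round(cronbach_alpha(scores), 3))   # ~0.923 for this toy matrix
print(round(spearman_brown(0.8), 3))      # 0.889
```

Note how alpha rises when items covary strongly relative to their individual variances; that is exactly what "internal consistency" measures.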

 

Unit –6: Construction of Conventional Measures and Tests: Classical Test Theory

  • Construction of test design for content validation
  • Construction of test design for construct validation
  • Construction of test design for predictive validation
  • Problems in certain testing situations
  • Reversing the direction of keying
  • Unipolar vs. bipolar attributes
  • Discrimination at a point
  • Equidiscriminating tests
  • Weighting of items
  • Taking advantage of chance
  • Special problems in classical test theory
  • Guessing and speed tests
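 

The guessing problem in classical test theory is commonly handled with the classical correction-for-guessing formula, score = R − W/(k − 1), where R is the number right, W the number wrong, and k the number of options per item. A minimal sketch with invented score values:

```python
def corrected_score(right, wrong, options):
    """Classical correction for guessing: R - W / (k - 1)."""
    return right - wrong / (options - 1)

# 30 right, 10 wrong on 4-option multiple-choice items.
print(round(corrected_score(30, 10, 4), 2))  # 26.67
```

The penalty term assumes wrong answers reflect random guessing among the k options, which is the usual classical-test-theory rationale for the formula.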

 

 

Part-II Data Analysis

Unit –1: Data Analysis

  • Data and its types

Unit –2: Analysis of Quantitative Data through SPSS

  • Descriptive statistics
  • Measures of central tendency and  variability
  • Measures of relationship
  • Inferential statistics (correlation and regression)
  • Hypothesis testing; the null hypothesis; one- and two-tailed tests; uses of null hypotheses
  • Parametric vs. nonparametric techniques
  • Carrying out parametric statistical tests: t-test, z-test, ANOVA and ANCOVA
  • Carrying out non-parametric statistical tests: chi-square test
  • The role of statistical analysis
  • Selecting an appropriate statistical analysis
  • Coding and inputting data
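
Although the unit works in SPSS, the computations behind its output can be sketched in a few lines of Python to show what the software reports. The data below are invented for illustration: central tendency and variability, a Pearson correlation, and a one-sample t statistic.

```python
from math import sqrt
from statistics import mean, stdev

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
n = len(x)

# Descriptive statistics: central tendency and variability.
print(mean(x), stdev(x))

# Pearson correlation (a measure of relationship).
mx, my = mean(x), mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
r = cov / (stdev(x) * stdev(y))
print(r)  # 1.0: y is a perfect linear function of x

# One-sample t statistic for H0: the population mean of x is 2.
t = (mean(x) - 2) / (stdev(x) / sqrt(n))
print(round(t, 3))  # compare against the t-distribution with n - 1 df
```

Selecting an appropriate analysis then comes down to matching the question (difference, relationship, frequency) and the measurement level of the data to one of these families of tests.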

Unit –3: Data Analysis in Qualitative Research

  • Analysis of data in the field:
    • Field memos
    • Discovering themes and hypotheses
    • More about analysis in the field
  • Analysis after data collection:
  • Coding and coding categories
  • Developing coding categories
  • Influence on coding and analysis
  • Data displays, etc.
  • Mechanics of working with data
  • Using a computer for analysis
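
"Using a computer for analysis" in qualitative work can start as simply as tallying coded segments to see which categories dominate. A minimal sketch, with hypothetical coding categories invented for illustration:

```python
from collections import Counter

# Codes assigned to interview segments during qualitative coding
# (hypothetical categories for illustration).
coded_segments = ["motivation", "teacher_support", "motivation",
                  "assessment", "motivation", "teacher_support"]

counts = Counter(coded_segments)
for code, n in counts.most_common():
    print(f"{code}: {n}")
```

A frequency display like this is one simple data display that supports discovering themes; dedicated qualitative software extends the same idea to retrieval and cross-case comparison.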

Suggested Readings:

 

Bogdan, R. & Taylor, S. J. (1975). Introduction to qualitative research methods: A phenomenological approach to the social sciences. New York: John Wiley & Sons.

Bogdan, R. C. & Biklen, S. K. (1982). Qualitative research for education: An introduction to theory and methods. Boston: Allyn and Bacon.

Bordens, K. S. & Abbott, B. B. (2002). Research design and methods: A process approach (5th ed.). Boston: McGraw-Hill.

Cohen, L. & Manion, L. (1991). Research methods in education. London: Routledge.

Flick, U. (2002). An introduction to qualitative research. London: SAGE Publications.

Fraenkel, J. R. & Wallen, N. E. (1993). How to design and evaluate research in education. New York: McGraw-Hill.

Kerlinger, F. N. (1973). Foundations of behavioral research. New York: Holt, Rinehart and Winston.

LeCompte, M. D., Millroy, W. L. & Preissle, J. (Eds.). (1992). The handbook of qualitative research in education. San Diego: Academic Press.

Merriam, S. B. et al. (2002). Qualitative research in practice. San Francisco: Jossey-Bass.

Sinha, B.L. (Ed.). (2001). Statistics in psychology and education. New Delhi: Anmol Publications.

Wiersma, W. (1995). Research methods in education: An introduction. Boston: Allyn and Bacon.

 
