Test validity: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 20:25, 5 December 2012

Test validity concerns the test and assessment procedures used in psychological and educational testing, and the extent to which these measure what they purport to measure. “Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests.”[1] Although classical models divided the concept into various "validities" (such as content validity, criterion validity, and construct validity),[2] the currently dominant view is that validity is a single unitary construct.[3]

Validity is generally considered the most important issue in psychological and educational testing[4] because it concerns the meaning placed on test results.[3] Though many textbooks present validity as a static construct,[5] various models of validity have evolved since the first published recommendations for constructing psychological and educational tests.[6] These models can be categorized into two primary groups: classical models, which include several types of validity, and modern models, which present validity as a single construct. The modern models reorganize classical "validities" into either "aspects" of validity[3] or types of validity-supporting evidence.[1]

Historical background

Although psychologists and educators were aware of several facets of validity before World War II, their methods for establishing validity were commonly restricted to correlations of test scores with some known criterion.[7] Under the direction of Lee Cronbach, the 1954 Technical Recommendations for Psychological Tests and Diagnostic Techniques[6] attempted to clarify and broaden the scope of validity by dividing it into four parts: (a) concurrent validity, (b) predictive validity, (c) content validity, and (d) construct validity. Cronbach and Meehl’s subsequent publication[8] grouped predictive and concurrent validity into a "criterion-orientation", which eventually became criterion validity.

Over the next four decades, many theorists, including Cronbach himself,[9] voiced their dissatisfaction with this three-in-one model of validity.[10][11][12] Their arguments culminated in Samuel Messick’s 1995 article that described validity as a single construct composed of six "aspects".[3] In his view, various inferences made from test scores may require different types of evidence, but not different validities.

The 1999 Standards for Educational and Psychological Testing[1] largely codified Messick’s model. They describe five types of validity-supporting evidence that incorporate each of Messick’s aspects, and make no mention of the classical models’ content, criterion, and construct validities.

Validation process

According to the 1999 Standards,[1] validation is the process of gathering evidence to provide “a sound scientific basis” for interpreting the scores as proposed by the test developer and/or the test user. Validation therefore begins with a framework that defines the scope and aspects (in the case of multi-dimensional scales) of the proposed interpretation. The framework also includes a rational justification linking the interpretation to the test in question.

Validity researchers then list a series of propositions that must hold if the interpretation is to be valid, or, conversely, compile a list of issues that may threaten the validity of the interpretations. In either case, the researchers proceed by gathering evidence – original empirical research, meta-analysis or review of the existing literature, or logical analysis of the issues – to support or to question the interpretation’s propositions (or the threats to its validity). Emphasis is placed on the quality, rather than the quantity, of the evidence.

A single interpretation of any test may require several propositions to be true (or may be questioned by any one of a set of threats to its validity). Strong evidence in support of a single proposition does not lessen the requirement to support the other propositions.

Evidence to support (or question) the validity of an interpretation falls into one of five categories:

  1. Evidence based on test content
  2. Evidence based on response processes
  3. Evidence based on internal structure
  4. Evidence based on relations to other variables
  5. Evidence based on consequences of testing

Techniques to gather each type of evidence should only be employed when they yield information that would support or question the propositions required for the interpretation in question.
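
As one concrete illustration, evidence based on internal structure (category 3) is often gathered with an internal-consistency estimate such as Cronbach's alpha. The sketch below uses population variances and invented item scores; the function name and data are assumptions for illustration, not drawn from the Standards themselves:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    item_scores: one list per item, each holding one score per respondent.
    Uses population variances throughout, so the two terms are on the same scale.
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]       # each respondent's total score
    item_var_sum = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))
```

Consistent with the caution above, a high alpha by itself supports only propositions about the scale's internal structure; it says nothing about, for example, relations to other variables or consequences of testing.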

Each piece of evidence is finally integrated into a validity argument. The argument may call for a revision to the test, its administration protocol, or the theoretical constructs underlying the interpretations. If the test and/or the interpretations meant to be made of the test’s results are revised in any way, a new validation process must gather evidence to support the new version.

References

  1. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  2. Guion, R. M. (1980). On trinitarian doctrines of validity. Professional Psychology, 11, 385-398.
  3. Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
  4. Popham, W. J. (2008). All about assessment / A misunderstood grail. Educational Leadership, 66(1), 82-83.
  5. See the otherwise excellent text: Nitko, J. J., & Brookhart, S. M. (2004). Educational assessment of students. Upper Saddle River, NJ: Merrill-Prentice Hall.
  6. American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1954). Technical recommendations for psychological tests and diagnostic techniques. Washington, DC: The Association.
  7. Angoff, W. H. (1988). Validity: An evolving concept. In H. Wainer & H. Braun (Eds.), Test validity (pp. 19-32). Hillsdale, NJ: Lawrence Erlbaum.
  8. Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
  9. Cronbach, L. J. (1969). Validation of educational measures. Proceedings of the 1969 Invitational Conference on Testing Problems. Princeton, NJ: Educational Testing Service, 35-52.
  10. Loevinger, J. (1957). Objective tests as instruments of psychological theory. Psychological Reports, 3, 634-694.
  11. Tenopyr, M. L. (1977). Content-construct confusion. Personnel Psychology, 30, 47-54.
  12. Guion, R. M. (1977). Content validity – The source of my discontent. Applied Psychological Measurement, 1, 1-10.