# Classical test theory


## Latest revision as of 21:19, 1 November 2014


**Classical test theory** is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. Generally speaking, the aim of classical test theory is to understand and improve the reliability of psychological tests.

*Classical test theory* may be regarded as roughly synonymous with *true score theory*. The term "classical" refers not only to the chronology of these models but also contrasts with the more recent psychometric theories, generally referred to collectively as item response theory, which sometimes bear the appellation "modern" as in "modern latent trait theory".

Classical test theory as we know it today was codified by Novick (1966) and described in classic texts such as Lord & Novick (1968) and Allen & Yen (1979/2002). The description of classical test theory below follows these seminal publications.


## History

Classical test theory was born only after three achievements or ideas had been conceptualized: first, a recognition of the presence of errors in measurements; second, a conception of that error as a random variable; and third, a conception of correlation and how to index it. In 1904, Charles Spearman worked out how to correct a correlation coefficient for attenuation due to measurement error and how to obtain the index of reliability needed in making the correction.^{[1]} Spearman's finding is regarded by some as the beginning of classical test theory (Traub, 1997). Others who influenced the framework of classical test theory include George Udny Yule, Truman Lee Kelley, the authors of the Kuder-Richardson formulas, Louis Guttman, and, most recently, Melvin Novick, along with others over the quarter century following Spearman's initial findings.
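Spearman's correction for attenuation can be sketched in a few lines. The function and the numbers below are illustrative, not from any source cited here: the observed correlation between two measures is divided by the geometric mean of their reliabilities to estimate the correlation between the underlying true scores.

```python
# Spearman's (1904) correction for attenuation: measurement error deflates
# an observed correlation between two measures; dividing by the square root
# of the product of the two reliabilities recovers an estimate of the
# correlation between the underlying true scores.

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Estimate the true-score correlation from an observed correlation."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Illustrative numbers: observed r = .42 with reliabilities .80 and .70
print(round(disattenuate(0.42, 0.80, 0.70), 3))  # → 0.561
```

Note that the corrected value is always at least as large as the observed correlation, since reliabilities cannot exceed 1.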

## Definitions

Classical test theory assumes that each person has a *true score*, *T*, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an *observed score*, *X*. It is assumed that *observed score* = *true score* plus some *error*:

$$X = T + E$$

where $X$ is the observed score, $T$ the true score, and $E$ the error score.

Classical test theory is concerned with the relations between the three variables $X$, $T$, and $E$ in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of *reliability*. The reliability of the observed test scores $X$, which is denoted as $\rho^2_{XT}$, is defined as the ratio of true score variance $\sigma^2_T$ to the observed score variance $\sigma^2_X$:

$$\rho^2_{XT} = \frac{\sigma^2_T}{\sigma^2_X}$$

Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, this is equivalent to

$$\rho^2_{XT} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}$$

This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the correlation between true and observed scores.
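The variance-ratio definition and its equivalence to the squared true-observed correlation can be checked by simulation. The distributions and variances below are illustrative choices, not from the text: true scores with variance 100 and errors with variance 25 give a theoretical reliability of 100/125 = 0.8.

```python
import random

# Simulate the classical model X = T + E and check that the variance ratio
# var(T)/var(X) matches the squared correlation between true and observed
# scores (the reliability). Numbers are illustrative.
random.seed(1)
n = 100_000
T = [random.gauss(50, 10) for _ in range(n)]   # true scores, var = 100
X = [t + random.gauss(0, 5) for t in T]        # observed = true + error, var = 125

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (var(a) * var(b)) ** 0.5

reliability = var(T) / var(X)   # theoretically 100 / 125 = 0.8
print(round(reliability, 2), round(corr(T, X) ** 2, 2))
```

Both printed values should agree, confirming that reliability is the proportion of observed-score variance explained by true scores.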

## Evaluating tests and scores: Reliability

*Main article: Reliability (psychometrics)*

Reliability cannot be estimated directly since that would require one to know the true scores, which according to classical test theory is impossible. However, estimates of reliability can be obtained by various means. One way of estimating reliability is by constructing a so-called *parallel test*. The fundamental property of a parallel test is that it yields the same true score and the same observed score variance as the original test for every individual. If we have parallel tests x and x', then this means that

$$\mathcal{E}(X_i) = \mathcal{E}(X'_i)$$

and

$$\sigma^2_{E_i} = \sigma^2_{E'_i}$$

Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof).
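This result can also be checked by simulation. In the sketch below (all numbers are illustrative), two parallel forms share each person's true score but have independent, equal-variance errors; their correlation then recovers the reliability var(T)/var(X) = 100/125 = 0.8.

```python
import random

# Two parallel forms x and x' share each person's true score T but have
# independent errors of equal variance; corr(x, x') estimates reliability.
random.seed(2)
n = 100_000
T  = [random.gauss(0, 10) for _ in range(n)]   # true scores, var = 100
x  = [t + random.gauss(0, 5) for t in T]       # form x, error var = 25
xp = [t + random.gauss(0, 5) for t in T]       # parallel form x'

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a)
    vb = sum((v - mb) ** 2 for v in b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / (va * vb) ** 0.5

print(round(corr(x, xp), 2))   # theoretically 100/125 = 0.80
```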

Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's $\alpha$. Consider a test consisting of $k$ items $u_j$, $j = 1, \ldots, k$. The total test score is defined as the sum of the individual item scores, so that for individual $i$

$$X_i = \sum_{j=1}^{k} U_{ij}$$

Then Cronbach's alpha equals

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} \sigma^2_{U_j}}{\sigma^2_X}\right)$$

Cronbach's $\alpha$ can be shown to provide a lower bound for reliability under rather mild assumptions.^{[citation needed]} Thus, the reliability of test scores in a population is never lower than the value of Cronbach's $\alpha$ in that population. This method is empirically feasible and, as a result, very popular among researchers. Calculation of Cronbach's $\alpha$ is included in many standard statistical packages such as SPSS and SAS.^{[2]}

As noted above, the entire exercise of classical test theory is done to arrive at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question. The general idea is that the higher the reliability, the better. Classical test theory does not say how high reliability is supposed to be. Too high a value for $\alpha$, say over .9, indicates redundancy of items. Around .8 is recommended for personality research, while .9+ is desirable for individual high-stakes testing.^{[3]} These 'criteria' are not based on formal arguments, but rather are the result of convention and professional practice. The extent to which they can be mapped to formal principles of statistical inference is unclear.

## Evaluating items: P and item-total correlations

Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (proportion) and the item-total correlation (the point-biserial correlation coefficient). The P-value represents the proportion of examinees responding in the keyed direction, and is typically referred to as *item difficulty*. The item-total correlation provides an index of the discrimination or differentiating power of the item, and is typically referred to as *item discrimination*. In addition, these statistics are calculated for each response option of the oft-used multiple-choice item, and are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such analysis is often supported by specially designed psychometric software.
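For dichotomously scored (0/1) items, both statistics are easy to compute. The tiny response matrix below is invented for illustration; the item-total correlation here is the common uncorrected variant, which includes the item itself in the total score.

```python
# Classical item statistics: P-value (proportion responding in the keyed
# direction, i.e. item difficulty) and the item-total point-biserial
# correlation (item discrimination).

def item_stats(responses):
    n = len(responses)
    k = len(responses[0])
    totals = [sum(row) for row in responses]

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((x - mb) ** 2 for x in b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        return cov / (va * vb) ** 0.5

    stats = []
    for j in range(k):
        item = [row[j] for row in responses]
        p = sum(item) / n           # difficulty: proportion keyed correct
        r = corr(item, totals)      # discrimination: item-total correlation
        stats.append((round(p, 2), round(r, 2)))
    return stats

# Made-up responses: 5 examinees x 4 items, 1 = keyed direction
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]
for j, (p, r) in enumerate(item_stats(responses), 1):
    print(f"item {j}: P = {p}, r_it = {r}")
```

A very high or very low P means the item separates few examinees; a low or negative item-total correlation flags an item (or miskeyed answer) worth reviewing.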

## Alternatives

Classical test theory is an influential theory of test scores in the social sciences. In psychometrics, it has been superseded by the more sophisticated models of Item Response Theory (IRT) and Generalizability theory (G-theory). IRT is not included in standard statistical packages like SPSS and SAS, but there are packages for the open-source statistical programming language R (e.g., CTT). While commercial packages routinely provide estimates of Cronbach's $\alpha$, specialized psychometric software may be preferred for IRT or G-theory. Moreover, general statistical packages often do not provide a complete classical analysis (Cronbach's $\alpha$ is only one of many important statistics), so in many cases specialized software for classical analysis is also necessary.

## Shortcomings of Classical Test Theory

One of the best known shortcomings of classical test theory is that examinee characteristics and test characteristics cannot be separated: each can only be interpreted in the context of the other. Another shortcoming lies in classical test theory's definition of reliability as "the correlation between test scores on parallel forms of a test".^{[4]} The problem is that opinions differ on what parallel tests are. Various reliability coefficients provide either lower-bound estimates of reliability or reliability estimates with unknown biases. A third shortcoming involves the standard error of measurement, which classical test theory assumes to be the same for all examinees. However, as Hambleton explains, scores on any test are unequally precise measures for examinees of different ability, making the assumption of equal errors of measurement for all examinees implausible (Hambleton, Swaminathan, Rogers, 1991, p. 4). A fourth and final shortcoming is that classical test theory is test-oriented, rather than item-oriented: it cannot help us predict how well an individual, or even a group of examinees, might do on a single test item.^{[4]}

## Notes

1. Traub, R. (1997). Classical Test Theory in Historical Perspective. *Educational Measurement: Issues and Practice*, 16 (4), 8-14. doi:10.1111/j.1745-3992.1997.tb00603.x
2. Pui-Wa Lei and Qiong Wu (2007). CTTITEM: SAS macro and SPSS syntax for classical item analysis. *Behavior Research Methods*, 39 (3), 527-530. doi:10.3758/BF03193021. PMID 17958163.
3. Streiner, D. L. (2003). Starting at the Beginning: An Introduction to Coefficient Alpha and Internal Consistency. *Journal of Personality Assessment*, 80 (1), 99-103. doi:10.1207/S15327752JPA8001_18. PMID 12584072.
4. Hambleton, R., Swaminathan, H., Rogers, H. (1991). *Fundamentals of Item Response Theory*. Newbury Park, California: Sage Publications, Inc.

## References

- Allen, M.J., & Yen, W. M. (2002). *Introduction to Measurement Theory.* Long Grove, IL: Waveland Press.
- Novick, M.R. (1966). The axioms and principal results of classical test theory. *Journal of Mathematical Psychology*, 3 (1), 1-18.
- Lord, F. M. & Novick, M. R. (1968). *Statistical theories of mental test scores.* Reading, MA: Addison-Wesley Publishing Company.

## Further reading

- Gregory, Robert J. (2011). *Psychological Testing: History, Principles, and Applications* (6th ed.). Boston: Allyn & Bacon. ISBN 978-0-205-78214-7.