In [[psychometrics]], '''content validity''' (also known as '''logical validity''') refers to the extent to which a measure represents all facets of a given [[social construct]]. For example, a depression scale may lack content validity if it assesses only the affective dimension of depression and fails to take account of the behavioral dimension. An element of subjectivity exists in determining content validity, which requires a degree of agreement about what a particular personality trait such as [[extraversion]] represents; disagreement about the nature of a trait will prevent a measure from attaining high content validity.<ref>{{cite book | last = Pennington | first = Donald | authorlink = Donald Pennington | title = Essential Personality | publisher = [[Edward Arnold (publisher)|Arnold]] | year = 2003 | isbn = 0-340-76118-0 | page = 37}}</ref>
==On content validity==
Content validity is different from [[face validity]], which refers not to what the test actually measures, but to what it superficially appears to measure. Face validity assesses whether the test "looks valid" to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers. Content validity requires the use of recognized subject matter experts to evaluate whether test items assess defined content, and it demands more rigorous [[statistical test]]s than the assessment of face validity does. Content validity is most often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area (e.g., history) or job skill (e.g., accounting). In clinical settings, content validity refers to the correspondence between test items and the symptom content of a syndrome.

One widely used method of measuring content validity was developed by C. H. Lawshe. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is. Lawshe (1975) proposed that each of the subject matter expert raters (SMEs) on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item 'essential,' 'useful, but not essential,' or 'not necessary' to the performance of the construct?" According to Lawshe, if more than half the panelists indicate that an item is essential, that item has at least some content validity. Greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential. Using these assumptions, Lawshe developed a formula termed the content validity ratio:

<math>CVR = \frac{n_e - N/2}{N/2}</math>

where <math>CVR</math> is the content validity ratio, <math>n_e</math> is the number of SME panelists indicating "essential", and <math>N</math> is the total number of SME panelists. The formula yields values ranging from −1 to +1; positive values indicate that at least half the SMEs rated the item as essential. The mean CVR across items may be used as an indicator of overall test content validity.
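The ratio above is straightforward to compute. A minimal sketch follows; the panel size and the per-item counts of "essential" ratings are hypothetical values invented for the illustration:

```python
# Sketch: Lawshe's content validity ratio (CVR) per item, plus the mean
# CVR across items as an overall index. Panel data below are hypothetical.

def content_validity_ratio(n_essential, n_panelists):
    """CVR = (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel of 10 SMEs; counts of "essential" ratings per item.
essential_counts = [10, 8, 5, 3]
N = 10

cvrs = [content_validity_ratio(n_e, N) for n_e in essential_counts]
print(cvrs)                              # [1.0, 0.6, 0.0, -0.4]
print(round(sum(cvrs) / len(cvrs), 3))   # mean CVR: 0.3
```

Note that an item rated essential by exactly half the panel gets a CVR of 0, matching Lawshe's criterion that content validity begins above the halfway point.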

Lawshe (1975) provided a table of critical values for the CVR by which a test evaluator could determine, for a panel of SMEs of a given size, how large a calculated CVR must be to exceed chance expectation. The table had been calculated for Lawshe by his friend, Lowell Schipper. Close examination of the published table reveals an anomaly: the critical value increases monotonically from the case of 40 SMEs (minimum value .29) to the case of 9 SMEs (minimum value .78), unexpectedly drops at 8 SMEs (minimum value .75), and then reaches its ceiling at 7 SMEs (minimum value .99). Whether this departure from the table's otherwise monotonic progression was due to a calculation error on Schipper's part or to a typing or typesetting error is unclear. Wilson, Pan, and Schumsky (2012), seeking to correct the error, found no explanation in Lawshe's writings and no publication by Schipper describing how the table of critical values was computed. They determined that Schipper's values were close to the normal approximation to the binomial distribution, and by comparing Schipper's values to newly calculated binomial values they also found that Lawshe and Schipper had erroneously labeled their published table as representing a one-tailed test when the values actually mirrored the binomial values for a two-tailed test. Wilson and colleagues published a recalculation of critical values for the content validity ratio, providing critical values in unit steps at multiple alpha levels.
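The binomial logic behind such a table can be sketched directly. The following illustration assumes, per the two-tailed framing described above, that under the null hypothesis each panelist independently rates an item "essential" with probability 0.5; the alpha level and panel sizes are illustrative choices, and the printed values are not the published table:

```python
# Sketch: exact-binomial critical CVR values, in the spirit of
# Wilson, Pan, and Schumsky (2012). Under the null hypothesis each of
# N panelists rates an item "essential" with probability 0.5. We find
# the smallest count k of "essential" votes whose upper-tail probability
# P(X >= k) is at most alpha/2 (two-tailed), then convert k to a CVR.
from math import comb

def critical_cvr(N, alpha=0.05):
    # P(X >= k) for X ~ Binomial(N, 0.5).
    def upper_tail(k):
        return sum(comb(N, i) for i in range(k, N + 1)) / 2 ** N
    # Smallest k meeting the two-tailed criterion (assumes N is large
    # enough that even k = N satisfies it; true for N >= 7 at alpha = .05).
    k = next(k for k in range(N + 1) if upper_tail(k) <= alpha / 2)
    return (k - N / 2) / (N / 2)

for N in (7, 10, 15, 40):
    print(N, round(critical_cvr(N), 2))
```

For a panel of 7 this yields a critical CVR of 1.0, consistent with Wilson and colleagues' observation that the exact two-tailed value at the smallest panel sizes exceeds Schipper's published .99.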

==See also==
* [[Construct validity]]
* [[Criterion validity]]
* [[Test validity]]
* [[Validity (statistics)]]

==References==
{{reflist}}
*Lawshe, C. H. (1975). A quantitative approach to content validity. ''Personnel Psychology, 28'', 563–575.
*Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe's content validity ratio. ''Measurement and Evaluation in Counseling and Development, 45''(3), 197–210.

==External links==
*[http://en.wikibooks.org/wiki/Handbook_of_Management_Scales Handbook of Management Scales], a Wikibook containing previously used multi-item scales to measure constructs in empirical management research literature. For many scales, content validity is discussed.

[[Category:Validity (statistics)]]