Byrne, B.M., University of Ottawa, Canada
One of the most rigorous methodological approaches to testing the validity of assessment instruments, and their factorial equivalence across groups, is confirmatory factor analysis within the framework of structural equation modeling (SEM). Despite the ever-increasing use of this methodology over the past decade, several difficulties continue to plague researchers interested in testing the construct validity of particular measuring instruments. Primary among these difficulties is the use of data that are (a) non-normally distributed, (b) categorically scaled, and (c) incomplete as a consequence of missing values. Although these data characteristics are common to psychological data in general, and to assessment data in particular, they can seriously bias conclusions drawn from empirical study unless corrective steps are taken to address the problem. A fourth major difficulty in SEM-based testing practice relates to findings of noninvariance derived from tests for the equivalence of a measuring instrument across cultures. Specifically, the difficulty arises from the lack of an appropriate mechanism for pinpointing the extent to which such inequality represents cultural bias rather than true differences in scores. In presenting an update on current testing practice using SEM techniques, each of these methodological difficulties will be described, exemplified, and discussed.