Wednesday, May 18, 2011

Research bytes: Reliability paradox in SEM models and causal vs. effect indicator models




For the quantoid readers of IQ's Corner. Italicized emphasis added by the blog dictator.

Hancock, G. R., & Mueller, R. O. (2011). The Reliability Paradox in Assessing Structural Relations Within Covariance Structure Models. Educational and Psychological Measurement, 71(2), 306-324.

A two-step process is commonly used to evaluate data–model fit of latent variable path models, the first step addressing the measurement portion of the model and the second addressing the structural portion of the model. Unfortunately, even if the fit of the measurement portion of the model is perfect, the ability to assess the fit within the structural portion is affected by the quality of the factor–variable relations within the measurement model. The result is that models with poorer quality measurement appear to have better data–model fit, whereas models with better quality measurement appear to have worse data–model fit. The current article illustrates this phenomenon across different classes of fit indices, discusses related structural assessment problems resulting from issues of measurement quality, and endorses a supplemental modeling step evaluating the structural portion of the model in isolation from the measurement model.
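
The paradox is easy to see in numbers. Below is a minimal NumPy sketch (the blog dictator's own illustration, not the authors' code or values): a two-factor model with a true F1 -> F2 path of .5 is compared against a misspecified model that fixes that path to zero, at three levels of measurement quality. For simplicity the measurement parameters are held at their true values rather than re-estimated, so the absolute misfit numbers are overstated; the pattern across loadings is the point.

```python
# Minimal numerical sketch of the "reliability paradox" in the abstract above.
# Assumptions (illustrative, not from the article): two standardized factors,
# three indicators each with a common loading `lam`, true structural path
# beta = 0.5, misspecified model fixes that path to 0.
import numpy as np

def implied_cov(lam, beta):
    """Model-implied covariance: Sigma = Lambda Phi Lambda' + Theta."""
    Lam = np.zeros((6, 2))
    Lam[:3, 0] = lam                     # x1-x3 load on F1
    Lam[3:, 1] = lam                     # y1-y3 load on F2
    Phi = np.array([[1.0, beta],
                    [beta, 1.0]])        # latent covariance (standardized)
    Theta = np.eye(6) * (1 - lam**2)     # unique variances (standardized items)
    return Lam @ Phi @ Lam.T + Theta

def ml_discrepancy(S, Sigma):
    """Population ML fit function: larger = worse data-model fit."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p

beta_true = 0.5                          # true structural path F1 -> F2
for lam in (0.9, 0.7, 0.5):              # high -> low measurement quality
    S = implied_cov(lam, beta_true)      # "data": population cov, true model
    Sigma0 = implied_cov(lam, 0.0)       # misspecified: path fixed at zero
    # Note: measurement parameters are held fixed here, not re-estimated,
    # so F is an upper bound; the trend across loadings is what matters.
    print(f"loading = {lam:.1f}  ->  ML misfit F = {ml_discrepancy(S, Sigma0):.4f}")
```

Running this shows the misfit F shrinking as the loadings drop: the identical structural misspecification masquerades as better data-model fit when the measurement is poorer, which is exactly the paradox the authors document.
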



Hardin, A. M., Chang, J. C. J., Fuller, M. A., & Torkzadeh, G. (2011). Formative Measurement and Academic Research: In Search of Measurement Theory. Educational and Psychological Measurement, 71(2), 281-305.



The use of causal indicators to formatively measure latent constructs appears to be on the rise, despite what appears to be a troubling lack of consistency in their application. Scholars in any discipline are responsible not only for advancing theoretical knowledge in their domain of study but also for addressing methodological issues that threaten that advance. In that spirit, the current study traces causal indicators from their origins in causal modeling to their use in structural equation modeling today. Conclusions from this review suggest that unlike effect (reflective) indicators, whose application is based on classical test theory, today’s application of causal (formative) indicators is based on research demonstrating their practical application rather than on psychometric theory supporting their use. The authors suggest that this lack of theory has contributed to the confusion surrounding their implementation. Recent research has questioned the generalizability of formatively measured latent constructs. In the current study, the authors discuss how the use of fixed-weight composites may be one way to employ causal indicators so that they may be generalized to additional contexts. More specifically, they suggest the use of meta-analysis principles for identifying optimum causal indicator weights that can be used to generate fixed-weight composites. Finally, the authors explain how these fixed-weight composites can be implemented in both components-based and covariance-based statistical packages. Implications for the use of causal indicators in academic research are used to focus these discussions.
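
To make the fixed-weight composite idea concrete, here is another minimal NumPy sketch (again the blog dictator's illustration, not the authors' procedure). The weights below are hypothetical placeholders standing in for the meta-analytically pooled values the authors propose, and the indicator data are simulated.

```python
# Minimal sketch: forming a fixed-weight composite from causal (formative)
# indicators. Weights and data are hypothetical; in the authors' proposal the
# weights would be pooled across prior studies via meta-analysis and then
# held fixed in new samples rather than re-estimated.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # 200 cases, 4 causal indicators

# Fixed a priori weights (placeholder values standing in for
# meta-analytically derived ones).
w = np.array([0.40, 0.30, 0.20, 0.10])

# Standardize the indicators, then form the weighted composite. The
# composite can then enter either a components-based (e.g., PLS) or a
# covariance-based (e.g., ML SEM) model as a single observed variable.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
composite = Z @ w

print(composite[:5])                     # scores for the first five cases
```

Because the weights are fixed in advance rather than re-optimized per sample, the composite means the same thing across studies, which is the generalizability gain the authors are after.
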


- iPost using BlogPress from Kevin McGrew's iPad
