EDITOR'S NOTES—ANALYZING INSTRUMENTS: THE NEED FOR FACTOR ANALYSIS
“An empirical judgment consists in our awareness that an empirical intuition we are having matches a certain concept.” Immanuel Kant
Several “Performance Improvement Quarterly” (PIQ) submissions have been rejected because of how survey instruments were developed, analyzed, or validated. In this Editorial, I hope to provide a quick guide for what is and is not accepted with survey instruments. This guide will focus on the central issues we have experienced with submissions to PIQ. However, these guidelines will not be a complete guide for conducting research using surveys or validating instruments.
No Theoretical Foundation Provided
One of the main issues we have experienced with submissions to PIQ is that the constructs combined into a comprehensive survey are not supported by a theoretical foundation. I often ask, “Why are these constructs used and combined?” and “What theory puts these specific constructs together to justify a combined survey instrument?” In several studies that we have reviewed, not only in PIQ but also in other peer-reviewed journals, constructs are measured without a theoretical foundation explaining why those constructs are relevant and should be measured. This is problematic and gives the perception that researchers are trying to fit a survey they developed to their own narrative or bias.
Without a theoretical foundation supporting a set of constructs being combined into a more extensive survey instrument, the study will be rejected. The importance of providing theoretical support for the constructs in a survey is highlighted by Bandalos and Finney (2010) when discussing the rarer cases in which little theoretical support is available (e.g., exploratory research, new fields of inquiry): “Some theory, however rudimentary, must have guided the selection of the variables and this theory should be explicated to the extent possible” (p. 95).
Scale Development
Several submissions identified that the researchers developed a survey but provided no information on how the survey was developed. If a survey is developed as part of a research study, the development of the individual items (questions) and the factors representing the theoretical foundation must be provided.
At a fundamental level, scale development should follow these steps:
- Determine clearly what it is you want to measure
- Generate an item pool
- Determine the format for measurement
- Have initial item pool reviewed by experts
- Consider inclusion of validation items
- Administer items to a development sample
- Evaluate the items
- Optimize scale length (DeVellis, 2017, pp. 105–150)
When a new survey has been developed, the researchers should include the final survey items in the study’s appendix.
Cronbach’s Alpha Measures Are Inadequate as Standalone Measures of Reliability
Many submissions analyze surveys by calculating correlations and Cronbach’s alpha values between individual items (survey questions are referred to as instrument items). Basing the reliability of an instrument solely on correlations between items or on Cronbach’s alpha is incomplete. While these measures provide researchers with initial insight into an instrument, they represent only a precursor to the whole picture. An instrument cannot be deemed reliable based only on the correlation matrix, variance matrix, or Cronbach’s alpha scores. Criticisms concerning Cronbach’s alpha as a measure of internal consistency are provided by DeVellis (2017), and a computational sketch of the final point follows the list below:
- Alpha relies on assumptions that are hardly ever met.
- Violation of these assumptions causes alpha to inflate or attenuate its internal consistency estimations of a measure.
- “Alpha if item deleted” [a common approach for assessing the impact of individual items on overall alpha] in a sample does not reflect the impact that item deletion has on population reliability.
- A point estimation of alpha does not reflect the variability present in the estimation process, providing false confidence in the consistency of the administrations of a scale. (p. 54; see also Dunn et al., 2014)
No Factor Analysis Provided
Unless a mixed-methods research study is being conducted, in which surveys and observations utilize triangulation techniques to support the different analyses (quantitative, qualitative), all survey measures need to be subjected to either an exploratory factor analysis (EFA) or a confirmatory factor analysis (CFA).
The main difference between an EFA and a CFA lies in how well established the instrument’s structure is. When the instrument has not been tested in previous studies, or when there is little evidence supporting the instrument, an EFA is in order.
As a general guideline, EFA should be used for situations in which the variables to be analyzed are either newly developed or have not previously been analyzed together, or when the theoretical basis for the factor analysis model (i.e., number of factors, level of correlation among factors) is weak. (Bandalos & Finney, 2010, p. 96)
An EFA is also called for in exploratory studies. In contrast, a CFA is in order when an instrument has ample support from several research studies using different samples: “CFA should only be used if the structure of the variables has been previously studied using EFA with an independent source of data” (Bandalos & Finney, 2010, p. 96). In the former case the instrument’s structure has not yet been established, requiring an EFA; in the latter case the structure is established, calling for a CFA.
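For authors new to these analyses, the sketch below shows what a basic EFA might look like in Python. It assumes the third-party factor_analyzer package and a pandas DataFrame of item-level responses; the file name, the choice of three factors, and the oblique rotation are illustrative assumptions, not prescriptions:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file: item-level data, one column per survey item.
responses = pd.read_csv("survey_responses.csv")

# Check whether the correlation matrix is factorable at all.
chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett p = {p_value:.4f}, overall KMO = {kmo_overall:.2f}")

# Fit an exploratory model; the number of factors and the rotation are analyst
# decisions that should be justified by theory and by eigenvalue evidence.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
efa.fit(responses)

print(pd.DataFrame(efa.loadings_, index=responses.columns))  # pattern loadings
print(efa.get_eigenvalues()[0])                              # eigenvalues for scree inspection
```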
Different Sample Required
One issue of concern is when the same data used for an EFA are also used for the CFA. This practice indicates that the researchers do not understand what each analysis is designed to do. Two separate and independent samples should be used for the EFA and the CFA. One exception is when a large sample is collected; in this case, half of the data can be used for the EFA and the other half for the CFA. This requirement of independent samples is reinforced by Bandalos and Finney (2010): “Researchers should not conduct a CFA to ‘confirm’ the EFA solution using the same sample; this practice results in capitalization on chance due to fitting the idiosyncrasies of the sample data” (p. 106).
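A minimal sketch of this split-half approach is given below. The confirmatory half is fit here with the third-party semopy package; the two-factor model syntax, the item names, and the file name are placeholders, and any other structural equation modeling software would serve equally well:

```python
import pandas as pd
import semopy

# Hypothetical file: item-level data, one column per survey item.
responses = pd.read_csv("survey_responses.csv")

# Randomly split the respondents once, then keep the halves strictly separate.
shuffled = responses.sample(frac=1, random_state=42).reset_index(drop=True)
half = len(shuffled) // 2
efa_half = shuffled.iloc[:half]   # explore structure on this half only
cfa_half = shuffled.iloc[half:]   # confirm the resulting structure on the held-out half

# Placeholder measurement model implied by the EFA on efa_half (item names are hypothetical).
model_syntax = """
Engagement =~ item1 + item2 + item3 + item4
Motivation =~ item5 + item6 + item7 + item8
"""

cfa = semopy.Model(model_syntax)
cfa.fit(cfa_half)
print(cfa.inspect())               # loadings and other parameter estimates
print(semopy.calc_stats(cfa).T)    # fit indices such as CFI, TLI, RMSEA
```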
Results Can Only Be Generalized to the Same or a Similar Sample
Because EFA is exploratory, inferential claims cannot be made; the results describe the sample at hand. It is possible to extend those results to other, similar samples but not to dissimilar ones: “Results can only be generalized to samples similar to that on which the analyses have been conducted” (Bandalos & Finney, 2010, p. 97). This highlights the importance of fully reporting who comprises your sample and how it represents the target population. Without a sample that is representative of some larger population, generalizations cannot be made beyond the study sample.
For CFA studies, the same guidelines as for EFA still apply. One difference, however, is that CFA is an inferential method in which the study’s power must be evaluated. Power must be “computed for both individual parameter estimates and for the model as a whole” (Bandalos & Finney, 2010, p. 107).
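Authors looking for a concrete starting point for model-level power could consider the widely used RMSEA-based approach of MacCallum, Browne, and Sugawara (1996). The sketch below uses SciPy’s noncentral chi-square distribution; the sample size, degrees of freedom, and null and alternative RMSEA values are chosen purely for illustration:

```python
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea_null=0.05, rmsea_alt=0.08, alpha=0.05):
    """Power of the test of close fit for a whole CFA model (MacCallum et al., 1996)."""
    ncp_null = (n - 1) * df * rmsea_null ** 2
    ncp_alt = (n - 1) * df * rmsea_alt ** 2
    critical = ncx2.ppf(1 - alpha, df, ncp_null)   # rejection point under the null RMSEA
    return 1 - ncx2.cdf(critical, df, ncp_alt)     # probability of rejecting under the alternative

# Illustrative values: 300 respondents and a model with 19 degrees of freedom.
print(round(rmsea_power(n=300, df=19), 3))
```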
Conclusion
When conducting a study involving a survey instrument, report how the instrument was generated. If the instrument is developed as part of the study, the study should concentrate on following and reporting EFA techniques, followed by appropriate CFA techniques using a separate, independent sample. While a CFA can be a complete study of its own to validate an instrument, an EFA typically is not comprehensive enough to stand alone; the exploratory work of an EFA must be validated using CFA techniques. Although EFA and CFA differ in analysis and purpose, they often work hand in hand when validating survey instruments. Be transparent in reporting all steps to ensure a positive review when submitting your study, and follow previously published guidelines in doing so. For additional support, the references provided in this Editorial offer a starting point for newer and emerging scholars.
Support PIQ
One of the editors’ goals is the continued growth and advancement of the journal’s reach to various disciplines, industries, and markets. However, to accomplish this goal, the journal needs continued support from existing reviewers and the addition of new reviewers to the peer review team. If you are interested in participating in peer review for PIQ submissions, please create an account and sign up as a reviewer at https://mc.manuscriptcentral.com/piq.
Submissions of new research from the performance improvement communities that meet the minimum requirements, as highlighted in previous editorials in this journal (Turner, 2018a, 2018b, 2018c, 2019a, 2019b), are encouraged. If you are interested in having your manuscript considered for publication in PIQ, submit your research study after reviewing the minimum requirements highlighted in the previously mentioned editorials as well as the author guidelines at https://ispi.org/page/PIQuarterly.
The editor and associate editors are here to help you with your publication. Do you have an idea for a research article and wonder if it is suitable for PIQ? Contact the editorial team for feedback. The editorial staff at PIQ works with submitting authors to move their articles toward publication. The editorial staff is active in the review process and continues to work with authors through rounds of revisions, if needed, to prepare their manuscripts for publication. If you have a performance improvement-related research article you would like to submit, please do so at https://mc.manuscriptcentral.com/piq. Be sure that the manuscript is related to performance improvement and meets the minimal guidelines presented in this and other editorials at PIQ.
Reviewers
Peer review is necessary for a journal’s success and reputation. We thank our current reviewers for their time and dedication to PIQ. We need continual support from our reviewers to grow the number of active reviewers for the journal. As mentioned in previous editorials, additional reviewers are needed to provide critical and informative reviews of manuscripts in the publication pipeline and of future submissions. If you are interested in becoming a reviewer, please contact any member of the editorial team: John Turner (john.turner@unt.edu), Rose Baker (rose.baker@unt.edu), or Hamett Brown (hamett.brown@usm.edu).


