18  Dimension Reduction

In the previous section, we explored how multiple items can be used to measure a single underlying construct. In this section, we cover three techniques for reducing multiple items to a smaller number of variables: Principal Components Analysis (PCA), Exploratory Factor Analysis (EFA), and, briefly, Confirmatory Factor Analysis (CFA). Our focus is on PCA and EFA, as they are particularly useful for understanding underlying structures in social science research. These techniques help us explore relationships among items and identify latent constructs that explain the observed patterns in data, and they serve as effective tools for dimensionality reduction, enabling us to summarize complex datasets with a smaller set of variables. While we do introduce CFA and reflect on its relationship to EFA, a more in-depth discussion of CFA is beyond the scope of this course.

Throughout this lecture, assume that we have k items and n participants. Let’s now dive into the details of different data reduction methods.

18.1 Principal Components Analysis (PCA)

PCA is a data rotation technique designed to transform original items into uncorrelated components. These components represent linear combinations of the original items. The primary goal of PCA is dimension reduction, where a small number of components are used to explain most of the variance in the items. This allows us to represent the variance in the items more efficiently. For instance, if ten items measure extraversion, and one component explains most of the variance, we can retain that one component and discard the remaining nine.

18.2 Exploratory Factor Analysis (EFA)

Unlike PCA, EFA is a latent variable method that assumes that latent variables (factors) cause people’s responses to the items. For example, extraversion may cause individuals to respond positively to questions about partying and socializing. EFA models the item covariance matrix as a function of a fixed number of factors. It is called “exploratory” because all items are allowed to load on (contribute to) all factors, without a predefined structure. In practice, well-constructed questionnaires will exhibit high loadings of items on one factor and low loadings on others.

18.3 Confirmatory Factor Analysis (CFA)

Confirmatory Factor Analysis (CFA) tests a theory about the specific associations between latent variables and observed indicators. Unlike Principal Components Analysis (PCA) and Exploratory Factor Analysis (EFA), which are exploratory, CFA is a confirmatory approach that tests how well a hypothesized measurement model fits the data. In CFA, researchers specify a theoretical model that defines the relationships between observed variables and latent constructs (factors). These latent constructs are not directly measured but are assumed to explain the correlations among the observed variables. The primary goal of CFA is to evaluate whether the data support the hypothesized model. By doing so, researchers can determine if their theoretical model fits the observed data well, providing evidence for the validity of the underlying construct and the measurement instrument. CFA is part of a family of statistical modeling techniques known as “Structural Equation Modeling” (SEM).

18.4 Comparing the Methods

Purpose:

  • PCA: Dimensionality reduction.
  • EFA: Exploration of relationships among items and identification of latent constructs.
  • CFA: Testing a predefined theory about which items relate to specific latent constructs.

Assumption:

  • PCA: Does not assume latent variables; dropping components assumes they are irrelevant or represent error variance.
  • EFA: Assumes all items are caused by a smaller number of latent variables (factors).
  • CFA: Assumes specific items are caused by specific latent variables.

Interpretation:

  • PCA: Components are mathematical constructs with no further meaning.
  • EFA: Factors represent theoretical latent constructs.
  • CFA: Factors represent known theoretical latent constructs.

18.5 Principal Components Analysis

PCA is a data rotation technique that aligns the largest amount of variance with the first component, the second-largest variance with the second component, and so on. These components are uncorrelated by definition, and they serve as linear combinations of the original items. The primary use of PCA is dimension reduction by retaining only components that explain a significant amount of variance, thus providing a lower-dimensional representation of the data.

We can understand PCA in two ways. First, as a rotation of the data: PCA rotates the data so that the first component best reproduces the correlation matrix, and each subsequent component improves the reproduction. Second, as a way to summarize k items using fewer than k components with little loss of information - a lossy compression of the data.
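
To make the rotation view concrete, here is a minimal sketch in Python (the course software is SPSS, so this is purely illustrative; the simulated data and variable names are made up) that performs PCA by taking the eigendecomposition of the item correlation matrix:

import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 6                                  # n participants, k items
common = rng.normal(size=(n, 1))               # a shared source of variance
scores = common @ np.ones((1, k)) + rng.normal(size=(n, k))  # correlated item scores

R = np.corrcoef(scores, rowvar=False)          # k x k item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)           # eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]              # re-order components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)          # loadings: eigenvectors scaled by sqrt(eigenvalue)
explained = eigvals / k                        # proportion of total variance per component
print(np.round(explained, 2))                  # the first component dominates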

Selecting the Number of Components:

Various strategies exist to determine the number of components to retain, including Kaiser’s criterion (Eigenvalue > 1), Cattell’s scree plot (inflection point), and Horn’s Parallel Analysis (comparison with random data’s Eigenvalues). Additionally, theoretical knowledge about the underlying data can guide the choice of components.
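
Horn's Parallel Analysis can also be sketched in a few lines: simulate random data of the same size, and retain components only as long as the observed eigenvalues exceed the average eigenvalues of the random data. The function and simulated data below are illustrative, not part of the course materials.

import numpy as np

def parallel_analysis(data, n_reps=100, seed=0):
    """Observed eigenvalues vs. mean eigenvalues of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_reps, k))
    for r in range(n_reps):
        noise = rng.normal(size=(n, k))
        random_eigs[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    return observed, random_eigs.mean(axis=0)

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 1)) @ np.ones((1, 6)) + rng.normal(size=(200, 6))
observed, random_mean = parallel_analysis(data)

retain = 0
for obs, ran in zip(observed, random_mean):    # stop at the first eigenvalue that fails the comparison
    if obs > ran:
        retain += 1
    else:
        break
print("components to retain:", retain)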

Interpreting PCA Loadings:

Interpreting PCA loadings can be challenging, especially when items load substantially on several components. Rotation, such as the orthogonal Varimax rotation, can be employed to simplify the pattern of loadings and improve interpretability. However, it is essential to remember that after rotation (especially oblique rotation) the loadings should no longer be interpreted directly as correlations between items and components, as the unrotated PCA loadings can be.

18.6 Exploratory Factor Analysis

EFA is a model-based approach that assumes the existence of latent variables that cause item responses. It is suitable when you expect clusters of items to be correlated (multicollinear) and seeks to explain correlations between items. EFA assumes that unexplained variance in the items can be attributed to measurement error. This aligns with test theory, where it is assumed that observed items measure latent constructs with error. EFA is particularly suitable when there is a theoretical basis for assuming the existence of latent variables, such as when developing a new questionnaire that has not been validated yet. However, if a theoretical model already exists, Confirmatory Factor Analysis (CFA) may be more appropriate.
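
In matrix notation, the common factor model states that the model-implied item correlation matrix is built from the loadings, the factor correlations, and the unique (error) variances:

\[ \Sigma = \Lambda \Phi \Lambda^\top + \Theta \]

where \(\Lambda\) is the \(k \times m\) matrix of factor loadings, \(\Phi\) is the \(m \times m\) factor correlation matrix, and \(\Theta\) is a diagonal matrix of unique variances. Estimation amounts to finding values for these matrices that reproduce the observed correlations as closely as possible.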

To conduct EFA, we estimate the unknown factor loadings. Two common estimation methods are Principal Axis Factoring (PAF) and Maximum Likelihood (ML). PAF, a commonly used method in SPSS, is based on an iterative procedure involving matrix algebra and provides a solution even when the model is complex or the data are non-normal. ML, on the other hand, is the same estimator used for CFA and works well when the data are multivariate normal. ML may not perform well when the model is overly complex, which is not necessarily a bad thing: a failed or poorly fitting solution can signal that the model is inappropriate. ML estimation also allows for a test of model fit, which is useful for evaluating the appropriateness of the chosen model.

Factor loadings represent the correlations between each item and the extracted factors. They indicate the strength and direction of the relationship between the observed item and the underlying factor. Factor loadings range from -1 to +1, with larger absolute values indicating a stronger relationship. The loadings are displayed in a factor matrix, where each row corresponds to an item and each column corresponds to a factor. The factor loadings help us identify which items load most strongly on which factors.

We can compute Eigenvalues in EFA just as in PCA, by taking the column sums of the squared loadings; they indicate the amount of variance explained by each factor. These Eigenvalues are always smaller than the initial Eigenvalues obtained in Principal Component Analysis (PCA), because some variance is now attributed to error variance. Consequently, the sum of the Eigenvalues is also less than the number of indicators, and some Eigenvalues may even be negative.

Similarly, communalities in EFA are always < 1 because EFA assumes the existence of error variance.
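
As a small illustration (the loading matrix below is made up, not taken from any dataset in this course), both quantities follow directly from the loadings when the factors are uncorrelated: column sums of squared loadings give the Eigenvalues, and row sums give the communalities.

import numpy as np

# hypothetical loadings: 4 items (rows) on 2 factors (columns), uncorrelated factors assumed
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.6],
              [0.2, 0.7]])

eigenvalues = (L ** 2).sum(axis=0)    # variance explained by each factor
communalities = (L ** 2).sum(axis=1)  # variance of each item explained by the factors
uniqueness = 1 - communalities        # unexplained (error) variance per item
print(eigenvalues, communalities, uniqueness)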

18.6.1 Selecting the Number of Factors

Determining the appropriate number of factors to extract is a critical step in EFA. Researchers often use Eigenvalues as a cue, similar to Kaiser’s criterion and the scree plot used in PCA - but note that in EFA this can be misleading, because the Eigenvalues now depend on the number of extracted factors. Also, by default SPSS applies Kaiser’s criterion and the scree plot to the PCA Eigenvalues, even if you request EFA!

An alternative criterion for determining the number of factors is to use theoretical knowledge to guide the decision. For example, if emotions are believed to break down into positive and negative emotions, we may choose to extract two factors. Additionally, the chi-square test can be used to evaluate the appropriateness of different factor solutions and assist in selecting the best-fitting model. To directly compare models, one can compute the Bayesian Information Criterion (BIC) - a relative model fit index designed for comparing models, which balances model fit and complexity. It is computed from the chi-square as follows:

\[ BIC = \chi^2 - df \cdot \log(n) \]
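
As a quick sketch of how this works in practice (the chi-square values, degrees of freedom, and sample size below are invented for illustration; lower BIC values indicate a better trade-off between fit and complexity):

import numpy as np

n = 300                                  # hypothetical number of participants
chi2 = np.array([450.0, 180.0, 95.0])    # hypothetical chi-square values for 1, 2, 3 factors
df = np.array([54, 43, 33])              # corresponding degrees of freedom

bic = chi2 - df * np.log(n)              # BIC = chi^2 - df * log(n)
print(np.round(bic, 1))                  # the model with the lowest BIC is preferred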

18.7 EFA Assumption Checks

Before conducting exploratory factor analysis (EFA), it is good practice to perform several assumption checks to ensure the validity and appropriateness of the analysis. One critical aspect to consider is multicollinearity. While factor analysis aims to identify clusters of items that are correlated, excessive multicollinearity can lead to issues. This occurs when items are (nearly) perfectly linearly dependent, meaning that one item’s score can be (almost) exactly reproduced from the other items. In such cases, it becomes difficult to discern the unique contribution of collinear items to the underlying factor model. To detect multicollinearity, researchers can examine the determinant of the correlation matrix, a value between 0 and 1. It has been argued that the determinant should be greater than 0.00001, which indicates that multicollinearity is not too high.

Another assumption check for EFA is the proportion of common variance among items. The Kaiser-Meyer-Olkin (KMO) statistic provides an estimate of this proportion. A higher KMO value indicates that more of the variance among items can be explained by common factors, making the data more suitable for factor analysis. Researchers can interpret the KMO value as follows:

Value Interpretation
0.00 to 0.49 unacceptable
0.50 to 0.59 miserable
0.60 to 0.69 mediocre
0.70 to 0.79 middling
0.80 to 0.89 meritorious
0.90 to 1.00 marvelous
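
Both checks can be computed directly from the item correlation matrix. The sketch below is a plain reimplementation of the standard formulas (it is not SPSS output; the small correlation matrix at the end is made up for illustration):

import numpy as np

def efa_assumption_checks(R):
    """Return the determinant of the correlation matrix and the overall KMO statistic."""
    det = np.linalg.det(R)                    # should be greater than roughly 0.00001
    Rinv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / scale                   # partial (anti-image) correlations
    off = ~np.eye(R.shape[0], dtype=bool)     # mask selecting off-diagonal elements
    r2 = (R[off] ** 2).sum()
    q2 = (partial[off] ** 2).sum()
    kmo = r2 / (r2 + q2)                      # share of common variance among the items
    return det, kmo

R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(efa_assumption_checks(R))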

18.8 Rotating Factor Loadings

In factor analysis, we aim to interpret the underlying structure of observed variables. The pattern of factor loadings is crucial in this process, helping us identify items that load highly on specific factors and potentially naming those factors based on high-loading indicators. In a perfect world, factor loadings would be clear and straightforward, with each item loading highly on only one factor. However, real-life factor loadings are not always so clear-cut, making interpretation more challenging.

To improve interpretability, we use rotation, which applies a linear transformation to the original factor loadings. Two main types of rotation are orthogonal and oblique rotation. Orthogonal rotation produces uncorrelated factors. The most common technique is VARIMAX rotation, which maximizes the variance of the squared loadings within each factor. Oblique rotation allows factors to correlate; the most common technique is oblimin rotation. In the social sciences, it is often sensible to allow factors to correlate (e.g., different personality dimensions are probably associated).
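
For the curious, varimax rotation itself is only a few lines of linear algebra. The sketch below reimplements the standard algorithm (SPSS performs this internally when you request Varimax; the implementation here is illustrative and assumes a loading matrix with items in rows and factors in columns):

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix to maximize the variance of the squared loadings."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < total * (1 + tol):       # stop when the criterion no longer improves
            break
        total = s.sum()
    return loadings @ rotation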

One-Factor EFA and One-Factor CFA:

Although this course is not about confirmatory factor analysis, it is nevertheless useful to know that a one-factor EFA model is identical to a one-factor CFA model. In other words, if our theory implies a one-factor model, we can use exploratory factor analysis (EFA) with maximum likelihood (ML) estimation to test that model. While EFA aims to identify underlying factors without any preconceived hypotheses about their association with items, CFA tests a hypothesized model - in this case, that one factor explains all item scores. CFA with ML estimation produces a chi-square test that can be used to assess model fit. Note, however, that this test can be sensitive to sample size and may reject good models. Researchers can also use the Root Mean Square Error of Approximation (RMSEA) as an alternative model fit index, where values below 0.08 indicate acceptable fit. RMSEA is calculated from the chi-square as:

\[ RMSEA = \frac{\sqrt{\chi^2 - df}}{\sqrt{(n - 1) \cdot df}} \]
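
Translated into code (the max(., 0) guard is commonly applied when the chi-square happens to be smaller than the degrees of freedom; the numbers in the example are hypothetical):

import numpy as np

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation; values below .08 are usually taken as acceptable."""
    return np.sqrt(max(chi2 - df, 0) / ((n - 1) * df))

print(round(rmsea(chi2=95.0, df=33, n=300), 3))   # about 0.079: just within the .08 cutoff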

Treating a one-factor EFA as CFA also allows us to estimate latent variable reliability. Recall that Cronbach’s alpha assumes that all items are equally important. This means that it assumes that all factor loadings are the same. Factor analysis tests this assumption. Especially when factor loadings differ, it may be useful to compute latent variable reliability instead, using McDonald’s Omega (or composite reliability). It allows for different factor loadings, making it more appropriate for cases where items have varying contributions to the latent variable. The formula for McDonald’s Omega is:

\[ \omega = \frac{SSL}{SSL+SSR} = \frac{\text{Sum of Squared Loadings}}{SSL + \text{Sum of Squared Residuals}} \]

Calculate SSL as: \(SSL = \left(\sum_{j=1}^k L_j\right)^2\) (first sum the loadings, then square the sum)

Calculate SSR as: \(SSR = \sum_{j=1}^k (1 - L_j^2)\) (square each loading, subtract it from 1, then sum these values)
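
Given the loadings of a one-factor solution, Omega follows directly. The loading vector below is made up for illustration:

import numpy as np

loadings = np.array([0.7, 0.6, 0.8, 0.5])   # hypothetical standardized loadings on one factor

ssl = loadings.sum() ** 2                   # sum the loadings, then square the sum
ssr = (1 - loadings ** 2).sum()             # subtract each squared loading from 1, then sum
omega = ssl / (ssl + ssr)
print(round(omega, 3))                      # about 0.75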

18.9 Estimating Factor Scores

In many cases, researchers want to conduct further analyses using individuals’ scores on components or latent variables. In previous sections, we learned about two common methods for obtaining scale scores from multiple items: sum scores and mean scores. In sum scores, we add up the responses to each item to create a total score for each individual; in mean scores, we take the average of the responses to all items. In both cases, all items contribute equally to the final scale score. However, this approach assumes that all items are equally important, which might not always be the case. PCA and EFA both allow us to determine whether items are indeed equally important, and we can compute scale scores that take differences in item loadings into account.

For PCA, computing such scores is straightforward: we weight the observed (standardized) item scores by the component loadings. Since PCA is not a latent variable technique, there is only one possible solution to this calculation. To compute a PCA score for a specific individual, we multiply their standardized item scores by the corresponding component loadings, sum the results, and divide by the sum of the squared loadings. For instance, if an individual has standardized item scores of 1, 3, and 2 on items with loadings of 0.85, 0.80, and 0.14, respectively, their score would be \((0.85 * 1 + 0.80 * 3 + 0.14 * 2) / (0.85^2 + 0.80^2 + 0.14^2) \approx 2.55\). This score represents the individual’s relative level on the component.
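
The same arithmetic in code, using the illustrative loadings and standardized scores from the example above:

import numpy as np

loadings = np.array([0.85, 0.80, 0.14])
z_scores = np.array([1.0, 3.0, 2.0])

score = (loadings * z_scores).sum() / (loadings ** 2).sum()
print(round(score, 2))   # about 2.55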

Estimating latent variable scores in exploratory factor analysis (EFA) is more complex than in PCA. Unlike PCA, which provides unique component scores for each individual, EFA does not uniquely determine factor scores: an infinite number of latent variable datasets is consistent with the same EFA model. To estimate factor scores, researchers use methods such as the regression method and the Bartlett method. The regression method uses ordinary least squares estimates and aims to maximize the multiple correlation between factor scores and common factors. However, these estimates are biased, and the estimated factor scores correlate with one another and with the other latent variables. The Bartlett method produces factor scores that correlate only with their own latent variable, but they still correlate with the estimated scores for other factors. Both methods thus have shortcomings. Some (see references below) have argued that it might be preferable to simply use mean scores instead of factor scores; in cases where factor loadings are approximately equal, this is probably fine.
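
As an illustration of the regression (Thurstone) method, the factor score weights can be obtained by premultiplying the loading matrix by the inverse of the item correlation matrix. The sketch below assumes standardized item scores, a loading matrix from an orthogonal solution, and uncorrelated factors; for oblique solutions the factor correlation matrix also enters the computation.

import numpy as np

def regression_factor_scores(Z, L, R):
    """Estimate factor scores with the regression (Thurstone) method for orthogonal factors.

    Z : (n, k) standardized item scores
    L : (k, m) factor loading matrix
    R : (k, k) item correlation matrix
    """
    weights = np.linalg.solve(R, L)   # factor score coefficients, R^-1 L
    return Z @ weights                # estimated factor scores, one column per factor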

Further reading:

Everitt, B. S., & Howell, D. C. (2005). Encyclopedia of Statistics in Behavioral Science. DOI: 10.1002/0470013192.bsa726
DiStefano, C., Zhu, M., & Mindrila, D. (2009). Understanding and Using Factor Scores: Considerations for the Applied Researcher. DOI: 10.7275/da8t-4g52

19 Lecture

20 Formative Test

A formative test helps you assess your progress in the course, and helps you address any blind spots in your understanding of the material. If you get a question wrong, you will receive a hint on how to improve your understanding of the material.

Complete the formative test ideally after you’ve seen the lecture, but before the lecture meeting in which we can discuss any topics that need more attention.

Question 1

What is the primary goal of Principal Components Analysis (PCA)?

Question 2

In Exploratory Factor Analysis (EFA), what is the role of latent variables?

Question 3

What distinguishes Confirmatory Factor Analysis (CFA) from PCA and EFA?

Question 4

How are components ordered in PCA?

Question 5

What is one way to understand PCA?

Question 6

What is the purpose of using orthogonal rotation, like Varimax, in PCA?

Question 7

What distinguishes Principal Axis Factoring (PAF) from Maximum Likelihood (ML) estimation?

Question 8

How are Eigenvalues computed differently in EFA compared to PCA?

Question 9

What indicates a problem with multicollinearity in Exploratory Factor Analysis (EFA)?

Question 10

Which of these is a method for estimating latent variable scores in EFA?

Question 1

The primary goal of PCA is dimension reduction, where a small number of components are used to explain most of the variance in the items.

Question 2

In EFA, latent variables (factors) are assumed to cause people’s responses to the items, unlike PCA where components represent linear combinations of original items.

Question 3

CFA is a confirmatory approach that tests how well a hypothesized measurement model fits the data, unlike PCA and EFA, which are exploratory.

Question 4

PCA aligns the data such that the largest amount of variance is with the first component, and each subsequent component accounts for the next largest variance.

Question 5

PCA can be understood as a way to summarize k items using fewer than k components, which can be seen as a form of lossy compression of data.

Question 6

Orthogonal rotation, such as Varimax, is employed in PCA to simplify the pattern of loadings, making them easier to interpret.

Question 7

PAF is an iterative method suitable for complex models or non-normal data, whereas ML, used for both EFA and CFA, is better for data that are multivariate normal.

Question 8

In EFA, Eigenvalues are always smaller than in PCA because some variance is attributed to error variance; as a result, the sum of the Eigenvalues is less than the number of indicators, and some Eigenvalues may even be negative.

Question 9

In EFA, a determinant value lower than 0.00001 indicates excessive multicollinearity, making it difficult to discern the unique contribution of collinear items to the factor model.

Question 10

In EFA, factor scores can be estimated using methods like the regression method or the Bartlett method, each with its own advantages and shortcomings.

21 Tutorial

21.1 PCA

Open the data file: emotions.sav.

The data file consists of data from the International College Survey 2001 (Diener and colleagues, 2001). In this survey, data on emotions was collected for 41 countries. The data you’ll analyze in this assignment is about norms for experiencing/expressing 12 emotions in Belgium.

Let’s look at the data. The first two columns contain the number of the participant and the nation, so you don’t need to include them in the analysis.

True or false: There are missing data.

Suppose we are only interested in reducing the number of dimensions of the data, which method would you use?

Navigate to Analyze → Dimension Reduction → Factor in SPSS

In the tab Extraction: choose the correct method.

Also enable the option Scree plot and specify which variables need to be included in the analysis.

Check the Options tab. Can you determine what method is used to deal with missing data?

Paste the syntax and run the analysis.

Take a look at the output.

How many components have an Eigenvalue greater than 1 (Kaiser’s criterion)?

How many components does the scree plot suggest?

Redo the analysis with the number of components you need to retain according to the scree plot.

You can specify the number of components in the Extraction menu of the Factor Analysis window.

Click Fixed number of factors and enter the number of components (2).

Run the analysis and look at the loadings in the Component matrix.

True or false: This solution is easy to interpret.

The two principal components seem to correspond with positive emotions (appropriate and valued), and negative emotions (inappropriate and not valued), but there is not enough simple structure (too many variables have a high loading on both components).

To aid interpretation, you could rotate the solution. Which type of rotation is most appropriate here?

It is unlikely that positive and negative emotions are uncorrelated! An oblique rotation seems by far the most sensible choice.

Regardless of your previous answer, redo the analysis and choose Direct Oblimin in the Rotation menu.

Take a look at the component loadings in the Pattern matrix.

Which component would you label Positive Emotions? Number..

Compare the component loadings in the Pattern Matrix with the loadings in the Component Matrix.

We now observe that the loadings resemble a simple structure more closely than before the rotation: the low loadings are lower and the high loadings are higher.

Note: Due to the oblique rotation, the loadings are no longer equal to item-component correlations.

What is the correlation between the two rotated components?

Redo the Principal Component Analysis again one last time to save the component scores in the data set. Open Scores in the Factor Analysis window, check the Save as variables checkbox. Have a look at these component scores (now added to your data set): these are the scores for each person on the two components.

Alternatively, add this syntax:

  /SAVE REG(ALL)

What is the component score for the first person on the first component?

Take a look at the table Total Variance Explained.

How much of the variance do the two components together account for? %

What proportion of the variance in the item stress is accounted for by the two components?

Which item has the highest uniqueness (unexplained variance)?

21.2 Exploratory Factor Analysis

We will move on to work with Exploratory Factor Analysis.

For this second assignment you will perform an Exploratory Factor Analysis (EFA) in SPSS on a set of 18 items. These items measure Tolerance and are part of the European Value Survey (EVS).

Discuss with your group when we decide to use Exploratory Factor Analysis and when we decide to use Principal Component Analysis.

PCA is a data reduction technique. We use it when we want to summarize information in the items.

EFA is used to identify latent variables underlying the measured items. EFA is typically used when a questionnaire has not been validated yet. When we use EFA, we usually do not know exactly which item belongs to which dimension (although we might have an idea based on our theory).

Discuss with your group: When do we use Confirmatory Factor Analysis?

CFA is used when we DO know which items belong to which dimension. With CFA we can then check whether the model that we have in mind corresponds with what we see in the data.

Open the file evs.sav in SPSS.

Select Factor via Analyze -> Dimension Reduction.

Which extraction method should we use if we want a test of model fit?

Drag all items of the tolerance scale (i.e., V225 - V242) into the ‘items’ window. Go to Descriptives and select the options “Coefficients”, “Determinant”, and “KMO and Bartlett’s test of sphericity”. Then, go to Extraction and select “unrotated factor solution” and “scree plot”. Paste and run the syntax.

What is the Determinant?

True or false: The determinant indicates that multicollinearity might be a problem for these data.

The factorability, as determined by the KMO index, is

How many factors would you want to select based on the scree plot?

How many factors would you want to select based on Kaiser’s criterion?

What are the limitations of using these criteria?

Both are based on eigenvalues computed for PCA, but you are performing EFA now.

Although you can also compute eigenvalues for EFA, SPSS doesn’t use those for the scree plot and Kaiser’s criterion - and moreover, eigenvalues for EFA depend on the number of extracted factors, which defeats the purpose of using them to determine how many factors to extract.

Furthermore, EFA is a theory-driven technique; it makes sense to use theory to determine how many factors to retain.

Assume that we’re extracting two factors for now. Re-do your analysis with the appropriate number of factors.

In the tab Extraction: choose the number of factors you want to extract.

In the tab Rotation: Tick the box Direct Oblimin.

In the tab Options: The interpretation of the pattern matrix is easier if you suppress all coefficients in that table that are small (e.g., values < 0.30). To do so, click on options and ask SPSS to suppress the small coefficients.

In the tab Descriptives: Ask for the reproduced matrix.

Paste and run the syntax.

When we interpret the output of the factor analysis, we inspect 4 tables: the pattern matrix, the communalities, the factor correlation matrix, and the reproduced correlation matrix.

We will start with the pattern matrix.

Inspect the factor loadings in the pattern matrix.

Which item has the highest absolute factor loading on Factor 2? Type the variable label from the table:

Decide for yourself: are the two factors clearly interpretable? Then check your answer.

The solution almost follows a simple structure where each item loads on one factor. Only for the item Having casual sex do we see high factor loadings on both factors.

Inspect the communalities table.

How much of the variance in the item “suicide” do the factors explain?

Check the correlation between the two factors.

How substantial is the correlation between the factors?

Inspect the residual correlations.

Which residual correlation is most concerning?

Take a look at the pattern matrix again.

Can you think of a meaningful label for each of the factors? (Take into consideration whether the loadings are positive or negative). Then check your answer.

There appears to be a distinction between legal and religious issues.

21.3 Exploratory Factor Analysis II

Open the dataset called “student_questionnaire.sav”.

It contains data on moral judgment in a variety of domains of social life (variables whose names start with MACJ). Note that you need the variable MACJ13_imputed, not MACJ13.

21.3.1 Model Selection using the BIC

When conducting EFA with ML estimation, we obtain a chi-square test of model fit that allows us to compute the BIC, a comparative fit index that can help us choose the number of factors that best balances model fit and complexity.

Run an EFA analysis for 1-3 and 7-9 factors. Using syntax can help you do this easily - just copy-paste the basic syntax below once for each model and change the number of factors:

FACTOR
  /VARIABLES MACJ1 MACJ2 MACJ3 MACJ4 MACJ5 MACJ6 MACJ7 MACJ8 MACJ9 MACJ10 MACJ11 MACJ12 MACJ13_imputed
    MACJ14 MACJ15 MACJ16 MACJ17 MACJ18 MACJ19 MACJ20 MACJ21
  /MISSING LISTWISE 
  /CRITERIA FACTORS(1) ITERATE(100)
  /EXTRACTION ML
  /ROTATION NOROTATE.

Open a spreadsheet in Excel or Google Sheets, and copy-paste the chi-square values and degrees of freedom into the first two columns. Obtain the number of (valid) observations using whatever procedure you want (for example, Descriptives).

Assuming that you have used the first two columns, paste the following formula into the fourth column. Replace “n” with the number of valid observations. Drag the formula down to copy it to all cells in its column:

= A1 - B1 * LOG(n)

What is the BIC for 3 factors?

Based on the BIC, out of the set of models compared, which number of factors would you choose?

True or false: This finding corresponds to the conclusion you would draw from the Scree plot.

True or false: This finding corresponds to the conclusion you would draw from Kaiser’s criterion.

True or false: KMO suggests that there is insufficient common variance for factor analysis.

True or false: The determinant suggests a potential problem with multicollinearity. This might be because there are so many similar items.

If I told you that the theory specified 7 factors, how many factors would you prefer? Explain why, then check your answer.

The BIC for 7 factors is almost identical to the one for 8 factors. If theory dictates 7 factors, you might prefer to stick with 7, as the evidence for 8 factors is not overwhelmingly stronger.

21.3.2 Latent Variable Reliability

Regardless of your previous answer, perform EFA with one factor. Recall that this is equivalent to performing CFA with one factor.

Cronbach’s alpha assumes that all items have equal factor loadings. Examine the factor loadings matrix.

True or false: it looks like the factor loadings are indeed all equivalent.

Compute Cronbach’s alpha for these items, and report the value:

Copy-paste the factor loadings into a spreadsheet.

Use the spreadsheet function =SUM() to sum the loadings, then square the sum to get the SSL, \(SSL = \left(\sum_{j=1}^k L_j\right)^2\)

Create a new column with one minus the squared factor loadings, using the formula =1-A1^2 (assuming that cell A1 contains your first factor loading). Then sum these values to get the SSR, \(SSR = \sum_{j=1}^k (1 - L_j^2)\).

Finally, calculate McDonald’s Omega:

\[ \omega = \frac{SSL}{SSL+SSR} \]

Report McDonald’s Omega:

Note that McDonald’s Omega is larger than Cronbach’s alpha. This is a general rule: Cronbach’s alpha underestimates reliability compared to McDonald’s Omega, and the underestimation becomes worse the more the assumption of equal factor loadings is violated.

21.3.3 Model Fit

Finally, calculate the RMSEA model fit index for this one-factor model. The cutoff for acceptable fit is RMSEA < .08.

\[ RMSEA = \frac{\sqrt{\chi^2 - df}}{\sqrt{(n - 1) \cdot df}} \]

True or false: The one-factor model has acceptable fit.