[Statistics] Factor Analysis – R

Purpose of Factor Analysis

There are two basic reasons to consider using Factor Analysis:

  • Simplify a set of data by reducing a large number of measures for a set of respondents to a smaller, more manageable number of factors that still retain most of the information in the original data set. In some cases it also helps us capture variables that cannot be measured directly, such as intelligence.
  • Identify the underlying structure of the data, in which a large number of variables may really be measuring a small number of basic characteristics of our sample.

Basic Principles of Factor Analysis

Factor Analysis groups together variables that are highly correlated. From each group we can then select a variable that is representative of the common concept the factor purports to measure.

Result interpretation

  • Factor loadings: the relationships between the observed variables and the newly produced factors. If they are calculated from a matrix of correlation coefficients, their values range from -1.0 to +1.0.
  • Communalities (h2): the percentage of a variable's total variance summarized by the common factors. A communality is calculated by summing the squared factor loadings of that variable across all factors.
  • Eigenvalue: the sum of the squared factor loadings for each factor. The rule of thumb is to retain only the common factors whose eigenvalue is greater than 1.
  • Total variance summarized: the share of the original variance of all variables that is represented by the retained factors; it equals the sum of all communalities divided by the number of variables (see the short sketch after this list).
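
To make these definitions concrete, here is a minimal sketch with a made-up loadings matrix for three variables and two factors (the numbers are purely illustrative and are not taken from the loyalty data):

> L <- matrix(c(0.8, 0.7, 0.2, 0.1, 0.3, 0.9), ncol = 2) #hypothetical loadings: rows = variables, columns = factors

> rowSums(L^2) #communalities (h2): squared loadings summed across factors, one per variable

> colSums(L^2) #eigenvalues: squared loadings summed across variables, one per factor

> sum(rowSums(L^2)) / nrow(L) #total variance summarized: sum of communalities divided by number of variables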

We will group all variables into a few common factors and name these factors based on the general characteristics of the variables that constitute them.

Reference: Marketing Research, 7th Edition, by David J. Luck and Ronald S. Rubin.

Now we will begin the factor analysis in R with survey data about customer loyalty (loyalty.xlsx).

These are the packages we will use in this practice; load them first:

> library(readxl)

> library(corrplot)

> library(psych)

> library(GPArotation)

> library(data.table) #needed later for setnames()

First, we import the data from the Excel file into R. If there are observations with NA values, we also omit them.

> loyalty <- read_excel("loyalty.xlsx") #read data

> loyalty <- na.omit(loyalty) #drop observations with missing values

A correlation plot is useful to get a sense of the data structure.

> cor_matrix <- cor(loyalty, y = NULL, use = "pairwise.complete.obs", method = "pearson") #create a correlation matrix

> corrplot(cor_matrix, order = "hclust", tl.col = "black", tl.cex = .75) #plot the correlation matrix


Next, we will determine the number of factors to use in the factor analysis. But first we must exclude the STT, Gender, Age and Income variables from the data.

> loyalty$STT<-NULL

> loyalty$Income<-NULL

> loyalty$Age<-NULL

> loyalty$Gender<-NULL


> parallel <- fa.parallel(loyalty, fm = 'minres', fa = 'fa')
Parallel analysis suggests that the number of factors =  6  and the number of components =  NA

As the result shows, the suggested number of factors is 6, and the scree plot illustrates why. The triangle symbols show the eigenvalues, and the horizontal line cuts the curve at eigenvalue = 1 (the rule-of-thumb threshold). Note, however, that R bases its recommendation on the number of triangle points lying above the red line for the simulated data, not on the horizontal line. Either way, it is acceptable to choose a number of common factors anywhere between 3 and 6.
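
If you want to look at the eigenvalues behind the scree plot yourself, the object returned by fa.parallel stores them. A quick check against the rule of thumb might look like the sketch below (field names as used by the psych package; inspect str(parallel) if your version differs):

> parallel$fa.values #eigenvalues of the factor solution (the triangles on the plot)

> sum(parallel$fa.values > 1) #how many factors pass the eigenvalue > 1 rule of thumb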


We run the factor analysis with 6 factors, as suggested. There are two main options for the rotation type: Varimax or Oblimin. How do we choose? If we believe the factors are correlated with each other, Oblimin is the choice. In the opposite case, when we are confident the factors are completely uncorrelated, Varimax is the better option. Promax is another oblique option, similar to Oblimin with minor differences.

If you want to do more with these factors afterwards, such as running a regression on their scores, the option scores = "regression" should be added.

> fit2 <- fa(loyalty, nfactors = 6, rotate = "varimax", scores = "regression")

> fit3 <- fa(loyalty, nfactors = 6, rotate = "oblimin", scores = "regression")

For better readability, we cut off factor loadings that are less than 0.4.

> print(fit2$loadings, cutoff = 0.4)

> print(fit3$loadings, cutoff = 0.4)
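
As noted above, the choice between Varimax and Oblimin depends on whether the common factors are correlated. One quick sanity check is to look at the factor correlation matrix returned by the oblique (oblimin) solution, stored in the Phi element of the psych fit object; if its off-diagonal values are all close to zero, the uncorrelated (Varimax) solution is a reasonable simplification:

> round(fit3$Phi, 2) #factor correlation matrix from the oblimin fit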

Another command illustrates the variable-to-factor groupings as a diagram.

> fa.diagram(fit2)

> fa.diagram(fit3)

The left figure corresponds to the case where the common factors are uncorrelated, and the right one presents the opposite case, where all common factors correlate with each other.


Treat the common factors as variables and combine them into a data frame for further analysis, such as regression.

> my.data<-cbind(loyalty,fit2$scores)

> setnames(my.data, old = c("MR3","MR2","MR4","MR1","MR5","MR6"), new = c("XD","LT","NL","NT","CK","TT")) #rename the factor score columns

> regression<-lm(LT~XD+NL+NT+CK+TT, data = my.data)
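
To inspect the fitted model, the usual summary call works:

> summary(regression) #coefficients, R-squared and significance of the factor scores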

 
