Using Factor Analysis to reduce the number of attributes

In my last post on using machine learning for everyday use cases, I'd mentioned factor analysis as a way to reduce a large number of items (e.g., news articles' attributes) into a smaller set of variables. Some people asked me for examples of this, so this post is an attempt to explain how factor analysis can be used for what is known as dimensionality reduction.

Issues with a large number of attributes

Let's say you have a list of customers and you want to analyse some aspect of them. It's quite easy to analyse your list if they have a relatively small number of attributes, say 10. What if the number of attributes increases to 20? Or 100? Sure, still manageable. What about 1,000 or 10,000 or more? Or what about attributes that are not obvious (e.g., intention to watch a movie)?

Recall that in a typical machine learning algorithm, these attributes form the input matrix from which you predict an outcome. So as the number of attributes increases, your algorithm becomes computationally expensive and difficult to program (and debug). There is also the issue of overfitting, meaning your machine learning model will fit your training set extremely well but still may not predict well on new data.

One way to address this is to group some of the related attributes together and run your algorithm with that "grouped" attribute as input. In some cases this is easy, because the grouping is obvious.

For example, let's say you have attributes that describe a customer's height and weight. Are they directly proportional to each other? Probably not. But are they correlated? Probably yes. Many such correlations, however, are not that obvious, and there can be underlying patterns that remain hidden.

Factor Analysis to reduce the number of variables

Factor analysis is a technique to reduce the number of attributes when the relationships between those attributes are not obvious. Essentially, factor analysis analyses the interrelationships (or correlations) among a large number of items and reduces those items into a smaller set of factors. This smaller set of factors can then be used in further analysis, e.g., in a logistic regression or a neural network to predict your outcome.
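To make this concrete, here is a minimal sketch of what running a factor analysis could look like, assuming Python with scikit-learn and a small synthetic customer dataset (the attribute names and values are made up purely for illustration, not taken from any real data):

```python
# A minimal factor analysis sketch using scikit-learn.
# The customer data below is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Simulate 200 customers with 6 correlated attributes: the first three
# track one hidden trait (body size), the last three track another (spend).
size = rng.normal(0, 1, 200)
spend = rng.normal(0, 1, 200)
X = pd.DataFrame({
    "height":        size + rng.normal(0, 0.3, 200),
    "weight":        size + rng.normal(0, 0.3, 200),
    "shoe_size":     size + rng.normal(0, 0.3, 200),
    "monthly_spend": spend + rng.normal(0, 0.3, 200),
    "num_orders":    spend + rng.normal(0, 0.3, 200),
    "basket_value":  spend + rng.normal(0, 0.3, 200),
})

# Standardise the attributes, then extract 2 factors from the 6 of them.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(StandardScaler().fit_transform(X))

# The loadings show which attributes "belong" to which factor; attributes
# with large loadings on the same factor are candidates for grouping.
loadings = pd.DataFrame(fa.components_.T, index=X.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```

In this toy setup you would expect height, weight and shoe_size to load on one factor and the three spend-related attributes on the other, which is exactly the kind of grouping discussed next.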

Here is another concrete example. This study analysed how social media is used within organisations and came up with a list of 31 activities. These are examples of organisational processes which can benefit from the use of social media; of course, there could be many more activities depending on the scenario. The linked post has a chart that shows these activities. Now, if I had to do any analysis, it meant creating a model and analysing the impact on all 31 of these variables. A factor analysis (Principal Component Analysis, to be precise) was carried out on these 31 variables, and it grouped them into 8 variables. For example, the factor analysis suggested that the following variables from those 31 be grouped together:

Fig: Multiple attributes grouped together by factor analysis

You will probably agree that all these activities appear to be correlated, as all of them relate to sales and marketing. So instead of analysing all these variables separately, you can think of "Sales and Marketing" as one factor that encompasses these 7 different activities (variables). The other groupings followed a similar pattern, and I ended up with 8 high-level variables in place of the original 31.

Okay, so once you have a smaller, more manageable set of attributes, you can use the grouped variables in your machine learning algorithms for further analysis. This not only improves performance but also results in simpler models and better predictions. In this study, I eventually used these 8 variables for further analysis using Confirmatory Factor Analysis and SEM. But more about that later.
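In the meantime, here is a rough sketch of the general "reduce, then predict" idea described above. This is not the actual study (which used CFA and SEM downstream): it assumes scikit-learn, swaps in a plain PCA step for the factor analysis, uses random stand-in data for the 31 activity ratings and a made-up binary outcome, purely to show the mechanics of the pipeline:

```python
# Illustrative only: PCA compresses 31 correlated items into 8 components,
# then a logistic regression predicts an outcome from those components.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 31))                 # stand-in for 31 activity ratings
y = (X[:, :7].mean(axis=1) > 0).astype(int)    # made-up binary outcome

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),                       # 31 variables -> 8 components
    LogisticRegression(),
)
model.fit(X, y)
print(model.score(X, y))                       # in-sample accuracy, for illustration
```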

Machine Learning as an alternative to rule-based processes

There's a lot of discussion about machine learning these days, and pretty much everyone (vendors, users) is talking about it.

I remember attending courses on Artificial Intelligence, Machine Learning and even Artificial Neural Networks back in 1998. So what’s new?

How have AI and ML evolved?

I think a big reason why everyone is talking about machine learning now is that it's much simpler to use it for everyday business use cases. Earlier, machine learning was mostly used for really complicated scenarios: think enterprise search (with advanced capabilities for proximity, sounds-like matching and so on) or content analytics for sentiment analysis. All of these were useful but required expensive software and resources.

Not anymore. It has become far easier to use machine learning for simpler problems. In fact, for a lot of scenarios which required complex rules, you can now use machine learning to make decisions. Let's take an example. You are building a website that allows users to sell their old mobile phones. The website should be able to suggest a price based on a series of questions that the user answers. So you could have a set of rules that "rule-fy" each question.

For example:

Question 1: Phone model

If phone == A, price = P

If phone == B, price = Q

Question 2: Age of phone

If phone == A and bought within the last year, price = P

If phone == A and bought more than one year but less than two years ago, price = 0.9 P

Question 3: Colour

If phone == A, bought within the last year and colour == black, price = P

If phone == A, bought within the last year and colour == silver, price = 0.95 P

If phone == A, bought more than one year but less than two years ago, and colour == black, price = 0.9 P

And so on. You can add more rules depending on questions about age, colour, defects, screen quality and so forth, and your rules become increasingly complex. And then what happens if a user enters a value that the rules don't handle?

Of course, in real life you wouldn't write rules quite like this. You would probably have a rules engine that combines multiple rules and so forth, but you get the idea.
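Purely for illustration, here is what such hard-coded rules might look like as code. This is a toy sketch in Python; the models, base prices and multipliers are all made up, and the point is only to show how every new question adds more branches to write and maintain by hand:

```python
# Toy rules-based pricing: each new question multiplies the number of
# hand-written branches, and any unhandled answer simply fails.
BASE_PRICES = {"A": 300, "B": 250}                      # hypothetical base prices
AGE_MULTIPLIER = {"<1yr": 1.0, "1-2yr": 0.9, ">2yr": 0.75}
COLOUR_MULTIPLIER = {"black": 1.0, "silver": 0.95, "gold": 0.9}

def suggest_price(model: str, age: str, colour: str) -> float:
    """Return a suggested price, or raise if an answer has no rule."""
    try:
        return (BASE_PRICES[model]
                * AGE_MULTIPLIER[age]
                * COLOUR_MULTIPLIER[colour])
    except KeyError as missing:
        # This is the weak spot: a value not covered by any rule breaks the flow.
        raise ValueError(f"No rule defined for {missing}")

print(suggest_price("A", "<1yr", "silver"))             # 285.0
```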

Replacing the rules with machine learning

Here's how machine learning can replace a complex rules-based application.

Let's say you have historical data about phone sales. Yes, I admit this is a big assumption, but if you are creating rules and deciding prices, then you probably have some historical data anyway. So assume you have data such as this (this is just a sample; the more data you have, the better):

Fig: Second hand phone sales data
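The original table is an image, so here is a small, made-up stand-in for what such a training set could look like (the column names and values are assumptions for illustration, not the actual data):

```python
import pandas as pd

# Hypothetical second-hand phone sales records; purely illustrative.
sales = pd.DataFrame({
    "model":     ["A", "A", "B", "A", "B"],
    "age_years": [0.5, 1.5, 1.0, 2.5, 0.8],
    "colour":    ["black", "silver", "black", "black", "gold"],
    "condition": ["good", "good", "fair", "poor", "good"],
    "price":     [285, 260, 230, 180, 240],   # what each phone actually sold for
})
print(sales)
```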

Now your original problem can be stated as a machine learning problem as follows:

How do you predict the price of a phone that is not already in the sample (or training set) above, based on the features and data available in that training set?

Essentially, instead of you or your application making decisions based on pre-defined rules, you are now relying on your application to make decisions based on historical data. There are many techniques that can help you achieve this.

One relatively simple technique is linear regression. Linear regression is a statistical technique to predict an outcome (or dependent variable) based on one or more independent variables. In the example above, you can describe the price P as a function of the variables model, age, colour and so on. In linear regression, this can be expressed as:

P = b0 + b1*model + b2*age + b3*colour + b4*condition + ...

The machine learning algorithm then calculates the values of b0, b1, b2 and so on from the historical data, and you use this equation to predict the price of an item that was not in the training set. So if a new user comes along and offers a phone for sale on your site, you can recommend a price to her based on past sales.
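As a sketch of the mechanics, assuming Python with pandas and scikit-learn and the same made-up columns as the sample table above, fitting such a model and predicting a price for a new listing could look like this (categorical answers such as model and colour are one-hot encoded before the regression):

```python
# Fit a linear regression on historical sales, then predict a price
# for a phone that never appeared in the training data.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical training data, as in the sample table above.
train = pd.DataFrame({
    "model":     ["A", "A", "B", "A", "B"],
    "age_years": [0.5, 1.5, 1.0, 2.5, 0.8],
    "colour":    ["black", "silver", "black", "black", "gold"],
    "price":     [285, 260, 230, 180, 240],
})

# One-hot encode the categorical columns so they fit the linear model.
X = pd.get_dummies(train[["model", "age_years", "colour"]])
y = train["price"]

reg = LinearRegression().fit(X, y)      # learns b0, b1, b2, ... from the data

# A new listing: model A, 1 year old, silver.
new_phone = pd.DataFrame({"model": ["A"], "age_years": [1.0], "colour": ["silver"]})
new_X = pd.get_dummies(new_phone).reindex(columns=X.columns, fill_value=0)
print(reg.predict(new_X))               # suggested price for the new listing
```

With only a handful of rows this is obviously just a sketch; the same code scales to thousands of past sales, which is where it starts to beat hand-written rules.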

Okay, that was a rather simplistic machine learning example, and you can use many other, more sophisticated techniques. For example, you can do a factor analysis or Principal Component Analysis (PCA) to reduce a large number of items (e.g., news articles' attributes) into a smaller set of variables, or use logistic regression instead of linear regression, and so on. The key point is that it is now much easier to use machine learning for everyday use cases without spending a lot on expensive software or resources. Pretty much all programming languages and development platforms have machine learning libraries or APIs that you can use to implement these algorithms.

The main drawback of this approach (as in this example) is that the results might not always be as good as those from a rules-based technique. The quality of the results is highly dependent on the training set, and as the training set improves (in quality as well as quantity), the results improve too.

Are you using machine learning for your applications? If yes, what techniques are you using?