ECM and Machine Learning – What are Box, IBM, OpenText and other Vendors doing?

There are many use cases in Enterprise Content Management (ECM) for which Machine Learning can be deployed. In fact, I'd argue that you can apply machine learning at every stage of the content life cycle. You can apply:

  • Supervised learning, e.g., to automatically classify images, archive documents, flag files that are no longer required (and not likely to be required in future), classify records, improve your business processes (e.g., approve a credit limit based on a model trained on past decisions instead of fixed rules) and many more
  • Unsupervised learning, e.g., to group similar audio and video files for tagging, bundle related documents using clustering (see the sketch after this list) and so on
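As one concrete illustration of the clustering idea, here is a minimal sketch using scikit-learn; the sample documents are invented, and real content would come from your repository:

    # Group related documents with k-means over TF-IDF vectors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    documents = [
        "Invoice for office supplies, payment due in 30 days",
        "Purchase order for office chairs and desks",
        "Employee onboarding checklist and HR policy summary",
        "Annual leave policy and holiday calendar for employees",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(vectors)
    print(kmeans.labels_)  # e.g., [0 0 1 1]: finance documents vs HR documents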

What are ECM vendors currently offering?

Not much, I'd say. These are still early days.

To be fair, Artificial Intelligence and Machine Learning have been used in enterprise applications for a long time, but mostly for really complicated scenarios such as enterprise search (e.g., for proximity, sounds-like matching and so on) or sentiment analysis of social media content. It has never been easy to use machine learning for relatively simpler use cases. Additionally, no vendor provided SDKs or APIs with which you could apply machine learning on your own for your specific use cases.

But things are gradually changing and vendors are upping their game.

In particular, the "infrastructure" ECM vendors (IBM, Oracle, OpenText and Microsoft) all have AI and ML offerings that integrate with their ECM systems to varying degrees.

OpenText Magellan is OpenText's AI and ML engine, based on open source technologies such as Apache Spark (for data processing), Spark ML (for machine learning), Jupyter and Hadoop. Magellan is integrated with other OpenText products (including the Content and Experience Suites) and offers some pre-integrated solutions. Specifically for ECM, you can apply machine learning algorithms to find related documents, classify them, do content analysis and analyse patterns. You can of course create your own machine learning programs using Python, R or Scala.


Figure: Predictive analytics using OpenText Magellan. Source: OpenText
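Magellan's own APIs aren't reproduced here, but since it is built on Spark ML, a custom program you might write would look roughly like standard Spark ML code. A minimal, generic sketch of document classification (not Magellan-specific; the training rows are invented):

    # Train a simple text classifier with a Spark ML pipeline.
    # Labels: 0.0 = finance document, 1.0 = HR document (invented examples).
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import Tokenizer, HashingTF
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("doc-classifier").getOrCreate()
    training = spark.createDataFrame(
        [
            ("invoice payment due next month", 0.0),
            ("purchase order approved by finance", 0.0),
            ("annual leave request for employee", 1.0),
            ("onboarding checklist for new hires", 1.0),
        ],
        ["text", "label"],
    )

    pipeline = Pipeline(stages=[
        Tokenizer(inputCol="text", outputCol="words"),
        HashingTF(inputCol="words", outputCol="features"),
        LogisticRegression(maxIter=10),
    ])
    model = pipeline.fit(training)  # model.transform(...) classifies new documents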

IBM's Watson and Microsoft's Azure Machine Learning integrate with several other enterprise applications and also have connectors for their respective repositories (FileNet P8 and Office 365).

Amongst the specialised ECM vendors, Box is going to make its offerings generally available this year.

Box introduced Box Skills in October 2017. It's still in beta but looks promising. You can apply machine learning to images, audio and video files stored in Box to extract additional metadata, create transcripts (for audio and video files), use facial recognition to identify people and so on. In addition, you will also be able to integrate external providers (e.g., IBM's Watson) to build your own machine learning use cases with content stored in Box.


Figure: Automatic classification (tags) using image recognition in Box. Source: Box.com

Finally, there are service providers such as Zaizi (an Alfresco partner) who provide machine learning solutions for specific products.

Don’t wait for your vendors to start offering AI and ML

Given the rate at which content repositories are exploding, you will need automatic ways of classifying content and automating other aspects of the content life cycle. It will soon be impossible to do all of that manually, and Machine Learning provides a good alternative for those types of functionality. If your ECM vendor provides AI/ML capabilities, that's excellent, because you not only need access to machine learning libraries but also need to integrate them with the underlying repository, security model and processes; an AI/ML engine that is pre-integrated will be hugely useful. But if your vendor doesn't provide these capabilities yet, you still have alternatives. I've said this before and it applies to ECM as well:

There is no need to wait for your vendors to start offering additional AI/ML capabilities. Almost all programming languages provide APIs and libraries for all kinds of machine learning algorithms: clustering, classification, prediction, regression, sentiment analysis and so on. The key point is that AI and ML have now evolved to a point where the entry barriers are really low. You can start experimenting with simpler use cases and then graduate to more sophisticated ones once you are comfortable with the basics.

If you would like more information or advice, we'd be happy to help. Please feel free to fill in the form below or email us.

 

Machine Learning for Personalizing Digital Experiences

Personalization has always been a key aspect of almost all kinds of digital experiences. Some examples of commonly found personalization use cases are: allowing users to customize their dashboards or user interfaces, showing content based on explicit user-defined criteria, showing content based on implicit criteria, or showing content based on user behaviour. All of these have required complex personalization systems, with processing and rules engines for creating and managing personalization rules. As a result, it has always been a non-trivial exercise to implement personalization in a resource-effective way.

Artificial Intelligence (AI) and Machine Learning (ML) techniques have evolved, and it is now easier than ever to use them to implement personalization. At a very simplistic level, personalization is about "predicting" what a user would like to see and then offering that to the user. You can make this prediction based on a complex hierarchy of rules, or you can use historical data. The latter is exactly what machine learning based techniques can do for you.

Delivering the Right Content to the Right People

Consider this common scenario: you want to show content that is relevant to the user. For example, let's say you run an events site and want to show events that are relevant to each user. To do this, you could create multiple rules, such as rules that match a user's location with an event's location, or rules that show events based on user interests, and so on. This works great with maybe five rules. But consider a scenario where your users have hundreds of profile and behavioural attributes and your events have a similarly large number of attributes. As you come up with more criteria, this rules-based approach becomes really messy and difficult to manage.

But with machine learning based techniques, you now have alternatives, and you no longer have to procure sophisticated personalization systems. Instead, you can start writing very simple programs that predict what kind of events a user would like to view based on the events that other users with similar profiles viewed. You could use the same logic to display targeted news, movie recommendations or books. Some of these machine learning techniques are really simple, and you can get started very easily.
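To make that concrete, here is a minimal sketch of "recommend events that similar users viewed", using plain cosine similarity over a small user-by-event matrix; all the data is invented:

    # Recommend an unseen event based on what similar users viewed.
    import numpy as np

    # Rows = users, columns = events; 1 means the user viewed that event.
    views = np.array([
        [1, 1, 0, 0],  # user 0
        [1, 1, 1, 0],  # user 1 (similar profile to user 0)
        [0, 0, 1, 1],  # user 2
    ])

    def recommend(user, views):
        # Cosine similarity between this user and every user
        norms = np.linalg.norm(views, axis=1) * np.linalg.norm(views[user])
        sims = views @ views[user] / np.where(norms == 0, 1, norms)
        sims[user] = 0                # ignore self-similarity
        scores = sims @ views         # similarity-weighted views per event
        scores[views[user] > 0] = 0   # hide events the user has already seen
        return int(np.argmax(scores))

    print(recommend(0, views))  # 2: the event viewed by the similar user 1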

Here's another example for the same events website. As an event organizer, you create a new event but are not sure what pricing would work best. Again, if you think of this as a prediction problem, as in "predict the price of a new event given the pricing of past events", you can use a simple prediction algorithm to recommend pricing based on data for past events. Instead of events, you can use the same logic to price any new offering. In addition, once the event sells, you can feed this new data point back in as input for your next prediction.
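One simple option here (among many) is k-nearest neighbours: recommend the average price of the most similar past events. A minimal sketch with scikit-learn, where the features (expected attendance, duration in hours, city tier) and prices are all invented:

    # Predict a new event's price from its most similar past events.
    from sklearn.neighbors import KNeighborsRegressor

    past_events = [[100, 2, 1], [500, 8, 1], [50, 2, 2], [300, 4, 2]]
    past_prices = [20.0, 150.0, 10.0, 60.0]

    model = KNeighborsRegressor(n_neighbors=2).fit(past_events, past_prices)
    print(model.predict([[120, 3, 1]]))  # averages the two most similar events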

Start Small and Experiment

In addition to personalization, Digital Experience Management has several other aspects for which you can start using machine learning. And there is no need to wait for your vendors to start offering additional AI/ML capabilities. Almost all programming languages provide APIs and libraries for all kinds of machine learning algorithms: clustering, classification, prediction, regression, sentiment analysis and so on. The key point is that AI and ML have now evolved to a point where the entry barriers are really low. You can start experimenting with simpler use cases and then graduate to more sophisticated ones once you are comfortable with the basics.

If you would like more information or advice, we'd be happy to help. Please feel free to fill in the form below or email us.

 

Using Factor Analysis to reduce the number of attributes

In my last post on using machine learning for everyday use cases, I mentioned factor analysis as a way to reduce a large number of items (e.g., news articles' attributes) to a smaller set of variables. Some people asked me for examples, so this post is an attempt to explain how factor analysis can be used for what is known as dimensionality reduction.

Issues with a large number of attributes

Let's say you have a list of customers, and you want to analyze some aspect of their behaviour. It's quite easy to analyse your list if they have a relatively small number of attributes, say 10. What if the number of attributes increases to 20? 100? Still manageable. But what about 1,000, or 10,000, or more? And what about attributes that are not directly observable (e.g., intention to watch a movie)?

Recall that in a typical machine learning algorithm, these attributes form the input matrix from which you predict an outcome. So as the number of attributes increases, your algorithm gets computationally expensive and difficult to program (and debug). There is also the issue of overfitting, meaning your machine learning model fits your training set extremely well but may not predict well on new data.

One way to address this is to group some of the related attributes together and run your algorithm with the "grouped" attribute as input. In some cases it's easy to group attributes because the relationship is obvious.

For example, let's say you have attributes that describe a customer's height and weight. Are they directly proportional to each other? Probably not. But are they correlated? Probably yes. Many correlations, however, are not that obvious, and there can be underlying patterns that remain hidden.

Factor Analysis to reduce the number of variables

Factor analysis is a technique to reduce the number of attributes when the relationships between those attributes are not obvious. Essentially, factor analysis analyzes the interrelationships (or correlations) among a large number of items and reduces them to a smaller set of factors. This smaller set of factors can then be used in further analysis, for example in logistic regression or a neural network to predict your outcome.
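As a minimal sketch of how this looks in code, here is factor analysis with scikit-learn on a small synthetic dataset, in which four observed attributes are driven by two hidden factors by construction:

    # Reduce four correlated attributes to two underlying factors.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(42)
    size = rng.normal(size=(200, 1))      # hidden "size" factor
    activity = rng.normal(size=(200, 1))  # hidden "activity" factor
    noise = rng.normal(scale=0.1, size=(200, 4))

    # E.g., height and weight load on "size"; steps and workouts on "activity"
    data = np.hstack([size, size, activity, activity]) + noise

    fa = FactorAnalysis(n_components=2).fit(data)
    print(fa.components_.round(2))  # loadings: which attributes group together
    scores = fa.transform(data)     # 200 x 2: the reduced set of variables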

Here is another, concrete example. This study analysed how social media is used within organizations and came up with a list of 31 activities: examples of organisational processes that can benefit from the use of social media. Of course, there could be many more activities depending on the scenario. The linked post has a chart that shows these activities. Now, any analysis meant creating a model and analysing the impact of all 31 variables. A factor analysis (a Principal Component Analysis, to be precise) was carried out on these 31 variables and it grouped them into 8 factors. For example, the factor analysis suggested that the following variables from those 31 be grouped together:


Fig: Multiple attributes grouped together by factor analysis

You will probably agree that all these activities appear to be correlated, as all of them relate to sales and marketing. So instead of analysing these variables separately, you can think of "Sales and Marketing" as one factor that encompasses all 7 of these activities (variables). The other groupings followed a similar pattern, and I ended up with 8 high-level variables in place of 31.

Okay, so once you have a smaller, more manageable set of attributes, you can use the grouped variables in your machine learning algorithms for further analysis. This will not only improve performance but also result in better models and improved predictions. In this study, I eventually used these 8 variables for further analysis using Confirmatory Factor Analysis and SEM. But more about that later.
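Setting CFA and SEM aside, the basic "reduce, then predict" idea is easy to sketch. Here is an example with scikit-learn, using the classic Iris dataset purely as a stand-in for a real attribute-heavy dataset:

    # Compress attributes with PCA, then feed the scores to a classifier.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    X, y = load_iris(return_X_y=True)  # 4 attributes stand in for many more

    pipeline = make_pipeline(PCA(n_components=2), LogisticRegression(max_iter=200))
    pipeline.fit(X, y)
    print(pipeline.score(X, y))  # accuracy using 2 derived variables instead of 4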

Machine Learning as an alternative to rule-based processes

There's a lot of discussion about machine learning these days, and pretty much everyone (vendors and users alike) is talking about it.

I remember attending courses on Artificial Intelligence, Machine Learning and even Artificial Neural Networks back in 1998. So what’s new?

How have AI and ML evolved?

I think a big reason why everyone is talking about machine learning now is that it's much simpler to use for everyday business use cases. Earlier, machine learning was mostly used for really complicated scenarios: think enterprise search (with advanced capabilities for proximity, sounds-like matching and so on) or content analytics for sentiment analysis. All of these were useful but required expensive software and resources.

Not anymore. It has become far easier to use machine learning for simpler problems. In fact, for a lot of scenarios that previously required complex rules, you can now use machine learning to make decisions. Let's take an example. You are building a website that allows users to sell their old mobile phones. The website should be able to suggest a price based on a series of questions that a user answers. So you could have a set of rules that "rule-fy" each question.

For example:

Question 1: Phone model

If phone == A, price = P

If phone == B, price = Q

Question 2: Age of phone

If phone == A and bought within the last year, price = P

If phone == A and bought more than one year ago but less than two years ago, price = 0.9 P

Question 3: Colour

If phone == A, bought within the last year and colour == black, price = P

If phone == A, bought within the last year and colour == silver, price = 0.95 P

If phone == A, bought more than one year ago but less than two years ago, and colour == black, price = 0.9 P

And so on. You can add more rules for questions about defects, screen quality and so forth, and your rules become increasingly complex. And then what happens if a user enters a value that the rules don't handle?

Of course, in real life you wouldn't write rules like this. You would probably have a rules engine that combines multiple rules, but you get the idea.
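Even in code, the sprawl shows quickly. Here is a sketch of hand-written pricing rules; the models, adjustments and prices are invented purely for illustration:

    # Hand-written pricing rules: every new question multiplies the cases.
    BASE_PRICES = {"A": 300.0, "B": 250.0}  # invented base prices

    def suggest_price(model, age_years, colour):
        price = BASE_PRICES.get(model)
        if price is None:
            raise ValueError("no rule for this model")   # unhandled input
        if age_years < 1:
            pass                                         # full base price
        elif age_years < 2:
            price *= 0.9
        else:
            raise ValueError("no rule for phones this old")
        if colour == "silver":
            price *= 0.95  # ...and so on for every colour, defect, screen issue
        return price

    print(suggest_price("A", 1.5, "silver"))  # 300 * 0.9 * 0.95 = 256.5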

Machine Learning as an alternative to Rules-based processing

Here's how machine learning can replace a complex rules-based application.

Let's say you have historical data about phone sales. Yes, I admit this is a big assumption, but if you are creating rules and deciding prices, then you probably have some historical data anyway. So assume you have data such as this (this is just a sample; the more data you have, the better):


Fig: Second hand phone sales data

Now your original problem can be stated as a machine learning problem as follows:

How do you predict the price of a phone that is not already in the sample (or training set) above, based on the features and data available in that training set?

Essentially, instead of you or your application making decisions based on pre-defined rules, you are now relying on your application to make decisions based on historical data. There are many techniques that can help you achieve this.

One relatively simple technique is linear regression. Linear regression is a statistical technique to predict an outcome (or dependent variable) based on one or more independent variables. For the example above, you can describe price P as a function of the variables model, age, colour and so on. In linear regression, it can be expressed as:

P = b0 + b1*model + b2*age + b3*colour + b4*condition + …

The machine learning algorithm then estimates the values of b0, b1, b2 and so on from the historical data, and you use this equation to predict the price of an item that was not in the training set. So if a new user comes along and offers a phone for sale on your site, you can recommend a price based on past sales.
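A minimal sketch of that regression with scikit-learn; the sales records below are invented, and in practice you would train on your own sales history:

    # Fit the pricing equation P = b0 + b1*model + b2*age + ... from sample data.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    sales = pd.DataFrame({
        "model":  ["A", "A", "B", "B", "A"],
        "age":    [0.5, 1.5, 0.5, 2.0, 1.0],
        "colour": ["black", "silver", "black", "black", "silver"],
        "price":  [280.0, 240.0, 230.0, 150.0, 260.0],
    })

    # One-hot encode the categorical attributes so they can enter the equation
    X = pd.get_dummies(sales[["model", "age", "colour"]])
    y = sales["price"]

    reg = LinearRegression().fit(X, y)
    print(dict(zip(X.columns, reg.coef_.round(1))))  # the b1, b2, ... values

    new_phone = pd.get_dummies(
        pd.DataFrame({"model": ["A"], "age": [1.2], "colour": ["black"]})
    ).reindex(columns=X.columns, fill_value=0)
    print(reg.predict(new_phone))  # recommended price for the new listing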

Okay, that was a rather simplistic machine learning example, and you can use many more sophisticated techniques. For example, you can do a factor analysis or Principal Component Analysis (PCA) to reduce a large number of items (e.g., news articles' attributes) to a smaller set of variables, or use logistic regression instead of linear regression. The key point is that it is now much easier to use machine learning for everyday use cases without spending a lot on expensive software or resources. Pretty much all programming languages and development platforms have machine learning libraries or APIs that you can use to implement these algorithms.

The main drawback of this approach (as in this example) is that the results might not always be as good as those from a rules-based technique. The quality of the results is highly dependent on the training set, and as the training set improves (in quality as well as quantity), the results improve too.

Are you using machine learning for your applications? If yes, what techniques are you using?