ECM and Machine Learning – What are Box, IBM, OpenText and other Vendors doing?

There are many use cases in Enterprise Content Management (ECM) for which Machine Learning can be deployed. In fact, I'd argue that you can apply machine learning at every stage of the content life cycle. You can apply:

  • Supervised learning, e.g., to automatically classify images, archive documents, delete files that are no longer required (and unlikely to be required in future), classify records and more (a minimal sketch follows this list)
  • Unsupervised learning, e.g., to tag audio and video files, improve your business processes (e.g., approve a credit limit based on a machine learning model instead of fixed rules), bundle related documents using clustering and so on
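
As a taste of what "automatically classify documents" can look like in practice, here is a minimal, hedged sketch using scikit-learn. The document texts, the retention labels (archive / delete / record) and the choice of TF-IDF plus logistic regression are all assumptions for illustration, not a prescription.

```python
# Minimal supervised-classification sketch: map document text to a retention class.
# The training documents and labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "annual financial statement 2017",
    "marketing newsletter draft",
    "signed customer contract",
    "old cafeteria menu from last quarter",
]
labels = ["archive", "delete", "record", "delete"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["scanned copy of a supplier contract"]))  # predicted retention class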

What are ECM vendors currently offering?

Not much, I'd say. These are still early days.

To be fair, Artificial Intelligence and Machine Learning have been used in enterprise applications for a long time, but mostly for complicated scenarios such as enterprise search (e.g., proximity and sounds-like matching) or sentiment analysis of social media content. It has never been easy to use machine learning for relatively simpler use cases, and no vendor provided SDKs or APIs with which you could apply machine learning on your own for your specific use cases.

But things are gradually changing and vendors are upping their game.

In particular, the “infrastructure” ECM vendors (IBM, Oracle, OpenText and Microsoft) all have AI and ML offerings that integrate with their ECM systems to varying degrees.

OpenText Magellan is OpenText's AI + ML engine based on open source technologies such as Apache Spark (for data processing), Spark ML (for machine learning), Jupyter and Hadoop. Magellan is integrated with other OpenText products (including the Content and Experience Suites) and offers some pre-integrated solutions. Specifically for ECM, you can apply machine learning algorithms to find related documents, classify them, do content analysis and analyse patterns. You can, of course, also create your own machine learning programs using Python, R or Scala.


Figure: Predictive analytics using OpenText Magellan. Source: OpenText
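
Magellan ships with its own notebooks and tooling, but since it is built on Spark, a generic Spark ML sketch gives a feel for the "find related documents" use case described above. The document snippets and the choice of TF-IDF plus k-means are assumptions for illustration; this is not Magellan-specific code.

```python
# Generic Spark ML sketch: cluster documents so related ones land together.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("related-docs").getOrCreate()

docs = spark.createDataFrame(
    [(1, "invoice for office supplies"),
     (2, "purchase order for new laptops"),
     (3, "employment contract renewal"),
     (4, "supplier invoice for laptop accessories")],
    ["id", "text"],
)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf", numFeatures=1 << 12),
    IDF(inputCol="tf", outputCol="features"),
    KMeans(k=2, seed=42, featuresCol="features"),
])

model = pipeline.fit(docs)
model.transform(docs).select("id", "prediction").show()  # cluster id per document
```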

IBM's Watson and Microsoft's Azure Machine Learning integrate with several other enterprise applications and also have connectors for their own repositories (FileNet P8 and Office 365 respectively).

Amongst the specialised ECM vendors, Box is going to make its offerings generally available this year.

Box introduced Box Skills in October 2017. It's still in beta but appears promising. You can apply machine learning to images, audio and video files stored in Box to extract additional metadata, create transcripts (for audio and video files), use facial recognition to identify people and so on. In addition, you will also be able to integrate with external providers (e.g., IBM's Watson) to build your own machine learning use cases on content stored in Box.


Figure: Automatic classification (tags) using image recognition in Box. Source: Box.com
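
To make the idea concrete, here is a heavily hedged sketch of writing machine-generated tags back to Box as metadata using the Box Python SDK (boxsdk). The credentials, the file ID and the 'imageTags' metadata template are all hypothetical, and the tags are assumed to come from whatever image-recognition provider you plug in; this is not the Box Skills API itself.

```python
# Hypothetical sketch: attach machine-generated tags to a Box file as metadata.
# Assumes OAuth credentials and an enterprise metadata template named "imageTags"
# already exist; the tags would come from your ML provider of choice.
from boxsdk import OAuth2, Client

oauth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token="DEVELOPER_TOKEN",
)
client = Client(oauth)

tags = ["laptop", "office", "person"]  # e.g., returned by an image-recognition model

client.file(file_id="1234567890").metadata(
    scope="enterprise", template="imageTags"
).create({"tags": ", ".join(tags)})
```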

Finally, there are some service providers, such as Zaizi (an Alfresco partner), that provide machine learning solutions for specific products.

Don’t wait for your vendors to start offering AI and ML

Given the rate at which content repositories are exploding, you will need automatic ways of classifying content and automating other aspects of the content life cycle. It will soon be impossible to do all of that manually, and Machine Learning provides a good alternative for those types of functionality. If your ECM vendor provides AI/ML capabilities, that's excellent, because you not only need access to machine learning libraries but also need to integrate them with the underlying repository, security model and processes; a pre-integrated AI/ML engine is hugely useful. But if your vendor doesn't provide these capabilities yet, you still have alternatives. I've said this before and it applies to ECM as well:

There is no need to wait for your vendors to start offering additional AI/ML capabilities. Almost all programming languages provide APIs and libraries for all kinds of machine learning algorithms: clustering, classification, prediction, regression, sentiment analysis and so on. The key point is that AI and ML have now evolved to a point where the entry barriers are really low. You can start experimenting with simpler use cases and then graduate to more sophisticated ones once you are comfortable with the basics.
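
As an illustration of how low that barrier is, the snippet below runs a ready-made sentiment analyser (NLTK's VADER) over a couple of made-up user comments in a handful of lines. The comments and the choice of library are assumptions for the sketch, not a recommendation.

```python
# Minimal sentiment-analysis sketch using NLTK's bundled VADER analyser.
# One-time setup: pip install nltk, then: import nltk; nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
comments = [
    "The new document portal is fantastic and really fast",        # made-up examples
    "Uploads keep failing and support has been slow to respond",
]
for comment in comments:
    score = sia.polarity_scores(comment)["compound"]  # -1 (negative) .. +1 (positive)
    print(f"{score:+.2f}  {comment}")
```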

If you would like more information or advice, we'd be happy to help. Please feel free to fill in the form below or email us.

 

Machine Learning as an alternative to rule-based processes

There's a lot of discussion about machine learning these days, and pretty much everyone (vendors and users alike) is talking about it.

I remember attending courses on Artificial Intelligence, Machine Learning and even Artificial Neural Networks back in 1998. So what’s new?

How have AI and ML evolved?

I think a big reason why everyone is talking about machine learning now is that it's much simpler to use for everyday business use cases. Earlier, machine learning was mostly used for really complicated scenarios: think enterprise search (with advanced capabilities for proximity, sounds-like matching and so on) or content analytics for sentiment analysis. All of these were useful but required expensive software and resources.

Not anymore. It's become far easier to use machine learning for simpler problems. In fact, for a lot of scenarios that used to require complex rules, you can now use machine learning to make decisions. Let's take an example. You are building a website that allows users to sell their old mobile phones. The website should be able to suggest a price based on a series of questions that a user answers. So you could have a set of rules that “rule-fy” each question.

For example:

Question 1: Phone model

If phone == A, price = P

If phone == B, price = Q

Question 2: Age of phone

If phone == A and bought within the last year, price = P

If phone == A and bought more than one year ago but less than two years ago, price = 0.9 P

Question 3: Colour

If phone == A, bought within the last year and colour == black, price = P

If phone == A, bought within the last year and colour == silver, price = 0.95 P

If phone == A, bought more than one year ago but less than two years ago and colour == black, price = 0.9 P

And so on. You can add more rules depending on questions about age, colour, defects, screen quality and so forth. Your rules become increasingly complex, and then what happens if a user enters a value that no rule handles?

Of course, in real life you wouldn't write rules like this. You would probably have a rules engine that combines multiple rules and so forth, but you get the idea.
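
To make the comparison concrete, here is a minimal sketch of the rule-based approach, with hard-coded (made-up) base prices for phones A and B. Every new question or unexpected answer means another branch.

```python
# Hard-coded rules sketch for the toy phone-pricing example above.
# The base prices for phones A and B (P and Q) are made-up assumptions.
BASE_PRICE = {"A": 300.0, "B": 250.0}  # P and Q

def suggest_price(model, age_years, colour):
    price = BASE_PRICE.get(model)
    if price is None:
        return None                 # a value no rule handles
    if 1 <= age_years < 2:          # bought more than one but less than two years ago
        price *= 0.9
    if colour == "silver":          # black keeps the full price in the example
        price *= 0.95
    # ...more branches for defects, screen quality, phones older than two years, ...
    return price

print(suggest_price("A", 0.5, "black"))   # P
print(suggest_price("A", 1.5, "silver"))  # 0.9 * 0.95 * P
```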

Machine Learning as an alternative to rule-based processing

Here's how machine learning can replace a complex rule-based application.

Let's say you have historical data about phone sales. Yeah, I admit this is a big assumption, but if you are creating rules and deciding prices, then you probably have some historical data anyway. So assume you have data such as this (this is just a sample; the more data you have, the better):


Figure: Second-hand phone sales data

Now your original problem can be stated as a machine learning problem as follows:

How do you predict the price of a phone that is not already in the sample (or training set) above, based on the features and data available in the training set?

Essentially, instead of you or your application making decisions based on pre-defined rules, you are now relying on your application to make decisions based on historical data. There are many techniques that can help you achieve this.

One relatively simple technique is linear regression. Linear regression is a statistical technique for predicting an outcome (or dependent variable) based on one or more independent variables. Based on the example above, you can describe the price P as a function of the variables model, age, colour and so on. In linear regression, it can be expressed as:

P = b0 + b1*model + b2*age + b3*colour + b4*condition + …

The machine learning algorithm then calculates the values of b0, b1, b2 and so on from the historical data, and you use this equation to predict the price for an item that was not in the training set. So if a new user now offers a phone for sale on your site, you can recommend a price to her based on past sales.
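
A minimal sketch of this with pandas and scikit-learn might look as follows. The handful of rows stands in for the historical sales table above (the actual prices and features are made up), and the categorical features are one-hot encoded so they can enter the linear model.

```python
# Linear-regression sketch: predict a second-hand phone price from past sales.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Tiny, made-up stand-in for the historical sales table above.
sales = pd.DataFrame({
    "model":  ["A", "A", "A", "B", "B"],
    "age":    [0.5, 1.5, 1.0, 0.5, 2.0],          # years since purchase
    "colour": ["black", "silver", "black", "black", "silver"],
    "price":  [300, 255, 270, 250, 190],
})

# One-hot encode the categorical features (model, colour).
X = pd.get_dummies(sales[["model", "age", "colour"]])
y = sales["price"]

reg = LinearRegression().fit(X, y)                # learns b0, b1, b2, ...

# Price a phone that was not in the training set.
new_phone = pd.get_dummies(pd.DataFrame([{"model": "A", "age": 2.0, "colour": "black"}]))
new_phone = new_phone.reindex(columns=X.columns, fill_value=0)
print(reg.predict(new_phone))                     # suggested price for the new listing
```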

Okay, that was a rather simplistic machine learning example, and you can use many other, more sophisticated techniques. For example, you can do a factor analysis or Principal Component Analysis (PCA) to reduce a large number of attributes (e.g., a news article's attributes) into a smaller set of variables, or use logistic regression instead of linear regression. The key point is that it is now much easier to use machine learning for everyday use cases without spending a lot on expensive software or resources. Pretty much all programming languages and development platforms have machine learning libraries or APIs that you can use to implement these algorithms.
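
For instance, a PCA sketch with scikit-learn (run here on a random, made-up feature matrix standing in for real attributes) is only a few lines:

```python
# PCA sketch: compress 20 made-up numeric attributes into 3 components.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 20)             # 100 items, 20 attributes (placeholder data)

pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                  # (100, 3)
print(pca.explained_variance_ratio_)    # variance retained by each component
```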

The main drawback of this approach (as in this example) is that the results might not always be as good as those you would get with a rules-based technique. The quality of the results is highly dependent on the training set, and as the training set improves (in quality as well as quantity), the results will improve too.

Are you using machine learning for your applications? If yes, what techniques are you using?