My CMS does delivery too!

Content goes through four distinct stages in its lifecycle – Creation, Management, Delivery and finally Archival. There are many products out there that claim to do delivery as well as content management, so it becomes important to understand how they deliver content. Some products are good at management, others are good at delivery and some can do both.

I’ve been thinking about this topic for a while and will probably write a detailed whitepaper (that hopefully someone will publish :)). Here are some of my thoughts:

In my opinion, architectures for content delivery can be broadly classified into two types:

  • Loosely coupled delivery
  • Tightly coupled delivery

In a loosely coupled architecture, the CMS and delivery applications are generally separate applications, with different repositories. Content managed by the CMS is published to another repository from where it is picked up by the delivery application. The delivery application is generally not aware of CMS.

In a tightly coupled architecture, the delivery is either done by the CMS itself or there is much tighter integration.

Loosely coupled delivery

This is preferred by organizations that want to use best of breed applications for different aspects of managing the content lifecycle. In this approach, there is a very thin layer of integration between the two environments; they are either completely decoupled or, at best, very loosely coupled. If the environments are completely decoupled, the content management system’s responsibility ends once it publishes content. This can be to a file system (as static HTML or XML files) or to a database. The presentation layer is then written in the delivery application, which picks up this published content and presents it to users.

Interwoven TeamSite is a good example of this approach. Content is created and managed in TeamSite and converted to HTML (or XML) files. This content is then deployed using OpenDeploy to an application server’s file system, and the associated metadata is published to a database using DataDeploy or home-grown scripts. A J2EE presentation layer written on an application server (ATG, BEA or similar) queries this database and includes the appropriate files to display to users. There are different ways of achieving this, depending on the choice of delivery environment, but the idea is similar.
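To make the flow concrete, here is a minimal sketch of a loosely coupled publish/deliver cycle, assuming a hypothetical file-plus-database setup: the "CMS" side writes rendered HTML to a docroot and records metadata in a database, while the delivery side knows only about the docroot and the database, never about the CMS. All names here (`publish`, `deliver`, the schema) are illustrative and are not TeamSite or OpenDeploy APIs.

```python
# Hypothetical loosely coupled setup: files in a docroot, metadata in a DB.
# The two functions could live in entirely different applications.
import sqlite3
import tempfile
from pathlib import Path
from typing import Optional

def publish(docroot: Path, db: sqlite3.Connection,
            item_id: str, html: str, title: str) -> None:
    """CMS side: deploy the rendered file and record its metadata."""
    path = docroot / f"{item_id}.html"
    path.write_text(html, encoding="utf-8")
    db.execute(
        "INSERT OR REPLACE INTO content (id, path, title) VALUES (?, ?, ?)",
        (item_id, str(path), title),
    )
    db.commit()

def deliver(db: sqlite3.Connection, item_id: str) -> Optional[str]:
    """Delivery side: look up the metadata, then include the published file."""
    row = db.execute(
        "SELECT path FROM content WHERE id = ?", (item_id,)
    ).fetchone()
    return Path(row[0]).read_text(encoding="utf-8") if row else None

# Minimal demo: publish one item into a throwaway docroot and database.
docroot = Path(tempfile.mkdtemp())
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE content (id TEXT PRIMARY KEY, path TEXT, title TEXT)")
publish(docroot, db, "press-release-42", "<h1>Hello</h1>", "Hello")
```

The point of the sketch is that `deliver` never touches the CMS: replace the CMS and the delivery side keeps working, as long as the publishing contract (files plus metadata rows) is honoured.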

Advantages:
  1. There is a division of labour: each system does what it is best at, so best of breed products can be used.
  2. Existing investments are protected. If an organization has already invested in an application server, it can reuse the same infrastructure.
  3. The requirements of a CMS and of a delivery environment, in terms of infrastructure resources, performance and availability, are usually very different; this model allows each to be provisioned and tuned independently.
  4. Different best of breed applications can be used for multi-channel delivery.

Disadvantages:
  1. The two environments are generally disparate. This usually means different file systems and different repositories for content and users. So features like in-context editing, where users make changes from within the context of the end user application, are typically not available.
  2. The presentation layer is handled by a different application, so content authors generally cannot preview content as it would appear on the website.
  3. If changes are made in the delivery environment, it is generally not possible to bring them back into the CMS. This can matter in cases where user submitted content needs to go through a workflow.
  4. Content expiry needs to be handled very carefully, because content that expires on the website needs to be reflected in the CMS as well.
  5. Different skill sets are needed for development and maintenance, there are different vendors to manage, and so on.
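The expiry concern above can be addressed with a scheduled sweep on the delivery side. Here is an illustrative sketch, assuming a hypothetical file-plus-database delivery environment: the sweep removes expired files and metadata rows, and returns the affected ids so the CMS can be told to expire them too. The schema and names are made up for illustration, not taken from any product.

```python
# Hypothetical expiry sweep for a loosely coupled delivery environment.
import sqlite3
import tempfile
from pathlib import Path

def sweep_expired(db: sqlite3.Connection, now: str) -> list:
    """Delete expired files and rows; return ids to reconcile with the CMS."""
    rows = db.execute(
        "SELECT id, path FROM content WHERE expires IS NOT NULL AND expires <= ?",
        (now,),
    ).fetchall()
    for item_id, path in rows:
        Path(path).unlink(missing_ok=True)   # remove the published file
        db.execute("DELETE FROM content WHERE id = ?", (item_id,))
    db.commit()
    return [item_id for item_id, _ in rows]

# Minimal demo data: one live item, one already expired.
docroot = Path(tempfile.mkdtemp())
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE content (id TEXT PRIMARY KEY, path TEXT, expires TEXT)")
for item_id, expires in [("live-item", "2999-01-01"), ("old-item", "2001-01-01")]:
    path = docroot / f"{item_id}.html"
    path.write_text("<p>...</p>", encoding="utf-8")
    db.execute("INSERT INTO content VALUES (?, ?, ?)", (item_id, str(path), expires))

expired_ids = sweep_expired(db, now="2024-01-01")
```

The returned list is the important part: without some feedback channel like this, the CMS never learns that content has disappeared from the live site.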

Tightly coupled delivery

In this approach, the same application usually manages the content lifecycle end to end. There may be separate instances for management and delivery, but essentially the applications are the same. Even when different products are used for content management and delivery, the integration is much tighter.

Fatwire and Vignette are good examples of this approach. In Fatwire, for instance, content is created and managed within Content Server. The content is published (either dynamically or statically) to another environment that also runs Fatwire Content Server, and templates are written within Fatwire to deliver personalized content.
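A toy sketch shows why tight coupling helps authors: when a single repository backs both authoring preview and live delivery, the author sees content through the exact template the site uses. Everything here (`Repository`, `render`, the approval flag) is illustrative, not a Fatwire or Vignette API.

```python
# Toy model of tight coupling: one repository, one template, two views of it.
class Repository:
    """A single shared store used by both management and delivery."""
    def __init__(self):
        self._items = {}

    def save(self, item_id, title, body, approved=False):
        self._items[item_id] = {"title": title, "body": body, "approved": approved}

    def get(self, item_id):
        return self._items.get(item_id)

def render(item):
    """The one delivery template, shared by preview and the live site."""
    return f"<h1>{item['title']}</h1><p>{item['body']}</p>"

def preview(repo, item_id):
    # Authors can see unapproved content exactly as the site would render it.
    item = repo.get(item_id)
    return render(item) if item else None

def deliver(repo, item_id):
    # The live site serves only approved content, via the same template.
    item = repo.get(item_id)
    return render(item) if item and item["approved"] else None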

Advantages:
  1. It is easier to manage because the same product is used end to end. So, in terms of resources, support and integration issues, it is less painful.
  2. The delivery and management systems are better synchronized, so changes in one can easily be propagated to the other.
  3. Replication, backup and recovery are generally easier.
  4. It is easier for content authors to visualize and edit content in the context of the end user website.

Disadvantages:
  1. Some of the features found in best of breed applications might not be present in this approach.
  2. The product might not be good at doing everything, so some compromises might be required.
  3. Licensing costs could be prohibitive if the product needs to be present in both the management and delivery environments.

8 thoughts on “My CMS does delivery too!”

  1. Thanks Alan. I’m updating the post with that. The reason I left it out was that the focus of this post was on the different delivery architectures and how the Management and Delivery applications integrate. Content Archival is definitely an important aspect of the CM Lifecycle.

  2. Shouldn’t it be the other way round: what has expired in the CMS should also be expired in the destination production web-site?

    IMHO, it is usually the technical side who are more comfortable with loosely-coupled delivery because it fits in well with the usual apps development workflow i.e. develop-SIT-UAT-production, and it also fits in nicely with the existing defense-in-depth network and security architecture.

    Usually, it is the end-users who prefer tightly-coupled delivery. They usually need to publish things in a hurry, and consider all the “testing” and publishing steps unnecessary bureaucracy.

    So, the real question of which to choose is really: do you want to give your end-users enough rope to hang themselves? 🙂

  3. Thanks – I agree that *usually* it is the other way round: what has expired in the CMS should also be expired on the destination production website. But there are cases when people have to make changes directly in the production environment. There are also sometimes legal issues because of which content expires on production first. In such cases, these changes have to be reflected in the CMS as well.

  4. Changes in production first? my gawd, I can’t think of many situations where that is warranted except for one: during disaster recovery. Even then, one should make it part of the procedure to re-sync the CMS to the production again.

  5. Oh yes, you would be surprised how many people want that ability 🙂
    Apart from that, there are genuine requirements too – for example, a site that accepts user generated content, but before that content actually appears, it has to go through the CMS workflow.

  6. Thanks, Apoorv – I learnt something new. That scenario never occurred to me. This would be an interesting topic for another posting next time.

    Methinks this can still be resolved through careful architectural planning: perhaps an intermediate server that pulls and does data cleansing, then fresh submission through the SIT-UAT-production cycle. But then, I’m a bit of a purist on this. 😛

  7. Hi,

    I feel both approaches, whether loosely coupled or tightly coupled, try to address various challenges of ECM. I would like to look at it from the perspective of having generic, well defined interfaces for interaction between the various pieces of ECM services, like versioning, workflow, search, categorization, publishing, collaboration and delivery. In that case it may become transparent whether the ECM system is based on a loosely coupled or tightly coupled architecture. It would provide the benefits of both approaches, and customers could choose either approach based on their requirements. JSR 170 is a step in this direction, taking care of core repository services like create/modify/view/version/observe content.

Comments are closed.

