This is the third in a series on assessing IT Capabilities.  (See Part 1 and Part 2.)

A Quick Recap

Part 1 introduced some assessment principles I’ve found to be important.  Part 2 defined the term IT Capability, presented a potential landscape (a normative model, if you will) for IT Capabilities, and discussed ways to determine which IT Capabilities are needed.

In this part we will examine assessment dimensions, options and ratings.

Assessment Dimensions

Figure 1 – Capability Assessment Dimensions

Figure 1 shows a capability assessment approach that uses four top-level dimensions – Purpose, Commitment, Ability and Accountability. Each dimension is decomposed into lower (leaf) level dimensions.  Purpose, for example, is a function of how clearly and effectively the service(s) produced by a capability are defined, and how clearly the capability’s goals and operating principles are defined.

Note, there is a hierarchy implied among the top-level dimensions.  It is unreasonable to expect management commitment to a given capability if the purpose and goals for that capability are unclear or inappropriate.  It is unlikely that appropriate ability is in place without the necessary management commitment.  And it is unreasonable to expect clarity of accountability for a given capability if ability is lacking.  In practice, I don’t usually disclose this hierarchical relationship until the assessment is underway, when I use it as a validation mechanism.  For example, if the assessment team scores Accountability as fully in place while scoring Purpose, Commitment or Ability as not in place or only partially in place, then I know I need to challenge the team and probe for the inconsistency.

Assessment Options

The multi-level assessment dimensions provide several options for the assessment method:

  1. Assess a capability at the top-level, but use the leaf levels to clarify what is meant by the top-level.  For example, I can assess the degree to which the Purpose of a given capability is in place by thinking about the effectiveness and clarity of the Service Definition for that capability, and the quality of the Goals and Guiding Principles for that capability.
  2. Assess a capability at the leaf level dimensions.
  3. Mix and match between top-level and leaf level dimensions based upon the needs (purpose and goals of the assessment) and feasibility (available time, available knowledge) of the assessment situation.

Assessment Ratings

For any capability, for each dimension, a rating can be assigned as follows – the capability dimension is:

  • Fully in place – this is our universal practice and will be found to be used consistently, with few, if any, exceptions.
  • Mostly in place – this is common practice, though it is not universal or consistent, or there are frequent exceptions.  We know how to do this well – but we need to get better in practice.
  • Partially in place – this is not yet common practice; we have many of the necessary characteristics, but not all of them.  We have some work to do to strengthen the capability as common practice.
  • Not in place – we have few, if any, of the necessary characteristics.  We have a great deal of work to do to develop this capability.
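The implied hierarchy among the top-level dimensions also lends itself to a mechanical sanity check on a completed scorecard.  Here is a minimal sketch in Python; the dimension names and rating scale come from this article, while the function and data structures are my own illustrative assumptions, not part of the assessment method itself:

```python
from itertools import combinations

# Ordinal encoding of the four-point rating scale (illustrative).
RATINGS = {
    "not in place": 0,
    "partially in place": 1,
    "mostly in place": 2,
    "fully in place": 3,
}

# The implied hierarchy among top-level dimensions, in order.
HIERARCHY = ["Purpose", "Commitment", "Ability", "Accountability"]

def inconsistencies(scores):
    """Return (earlier, later) dimension pairs where a later dimension
    outscores an earlier one -- the signal to probe the team about.

    `scores` maps each top-level dimension to one of the rating strings.
    """
    return [
        (earlier, later)
        for earlier, later in combinations(HIERARCHY, 2)
        if RATINGS[scores[later]] > RATINGS[scores[earlier]]
    ]
```

Any pairs returned are prompts for discussion with the assessment team, not automatic corrections.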

Note, there is clearly room for interpretation in these ratings.  This is more art than science, and for most IT capabilities, we are not dealing with highly mature processes and statistical process control!  In my experience, that is ok, and it is why in Part 1 of this series I said that my preference is to use a facilitated self-assessment approach – pull together a team of practitioners, customers and subject matter experts and facilitate them through the assessment.  It is usually the dialog this generates that has the most value, and that leads to the insight and commitment from the team to initiate and sustain improvement efforts.  More on this in Part 4.

Assessment Dimension Descriptions

Below are brief descriptions for each top-level and leaf level assessment dimension.


Purpose

Are purposes and goals for the capability clearly defined and well understood?

  • Service/Product Definition
    • Is there a clear definition of the business purpose served by the capability and how this service/product integrates or links with other services and products?
    • Is there a clear definition of business outcomes for the service?
    • Have the service providers been identified? (e.g., dedicated internal service providers, external service providers, project team responsibility)
  • Goals and Guiding Principles
    • Are the appropriate opportunities to use the capability defined?
    • Are the goals established for use of the capability and/or its service providers?


Commitment

Has organizational commitment been demonstrated in terms of senior sponsorship and management responsibilities?

  • Sponsorship
    • Is there adequate understanding, support and management commitment to sustain the use of the capabilities and/or services?
    • Is there commitment by example?
  • Delivery Management
    • Is qualified management in place to provide oversight and direction for delivering the capability or service?
    • Are mechanisms in place to ensure that service delivery will improve over time?
  • Change Management
    • Is a change management strategy and plan in place to overcome issues of organizational resistance?


Ability

Have the baseline processes, role requirements, enabling technologies and deployment capabilities been established?

  • Process Definition
    • Has a documented process been implemented to guide work activities?
  • Roles and Competencies
    • Are required competencies and specific areas of specialization defined?
    • Are appropriate service providers identified and trained?
    • Are training/skill development processes defined and provided?
  • Tools and Technologies
    • Are tools and technologies in place to enable effective execution of the work processes?
  • Deployment
    • Are the service providers organized, ready, and able to provide service?
    • Are charging and cost allocation mechanisms in place?


Accountability

Have criteria for success and related consequences been defined?

  • Performance Management
    • Have measurable criteria for individual and group performance been defined?
    • Have measurable criteria for evaluating business results been defined?
    • Have necessary measurement/assessment approaches been instituted?
  • Consequence Management
    • Has a program to ensure usage been established?
    • Are the rewards for success and penalties for failure defined, communicated and implemented?
    • Have individual roles been defined including their inter-dependencies and how performance contributes to overall success?


Please join me for the fourth and final part in this series!  And please do weigh in with your own experiences, observations or questions!


Graphic courtesy of Tarun
