In Part 1, I discussed the importance of IT Organizational Clarity, the symptoms that appear when clarity is compromised, and the pitfalls of treating those symptoms rather than the root causes behind them. Part 1 closed with a discussion of the two key dimensions along which IT Organizational Clarity can be tackled: scope (the units of IT Capability) and a set of meaningful, assessable characteristics for evaluating and improving IT Capabilities.
In Part 2, I discussed ways to define IT Capabilities and provided guidelines on the manageable number of IT Capabilities and appropriate depth of decomposition.
In this final post in the IT Organizational Clarity series, I will discuss an IT Capability Assessment Framework.
An IT Capability Assessment Framework
The framework assesses each IT Capability along four dimensions:
- Purpose – the degree to which the goals, values, desired business outcomes and guiding principles are defined and understood by those contributing to or benefiting from the Capability.
- Commitment – the degree to which organizational relationships and commitment are demonstrated in terms of sponsorship and clearly defined accountabilities.
- Ability – the degree to which baseline processes, structure and competency requirements, enabling technologies and measurement approaches have been established.
- Accountability – the degree to which criteria for success and related performance requirements have been defined and communicated.
Note that each of these dimensions comprises lower-level assessment attributes.
Hierarchical Nature of Assessment Dimensions
Note also that there is a hierarchy to the four dimensions. Clarity of purpose for a given capability is needed to gain management commitment to that capability and to establish the organizational relationships – for example, between the capability, its providers, and its customers. Management commitment, in turn, is needed to ensure the abilities are in place for the capability to perform. With clarity of purpose, management commitment, and the components of ability in place (i.e., defined processes; clear roles with clear competency requirements; appropriate tools and technologies; service providers organized, ready, and able to provide service; charging and cost allocation mechanisms), accountability (performance management and consequence management) can be meaningfully defined and assessed.
The Mechanics of Capability Assessment
I have always found the assessment process to be as important as, if not more important than, the results. To that end, I’ve adopted a couple of principles when helping clients with IT Capability Assessment:
- Follow a facilitated self-assessment approach, i.e., get the right people in the room and facilitate them through the assessment process, so that the results are their results, and the insights into the gap between what should be and what is are theirs to understand and, ultimately, to act upon.
- Assess against an anticipated future state – “What is our capability to deliver against the IT vision and anticipated demand?” rather than “How are we doing today?” Ultimately, it is the future ability to deliver that matters, and people tend to be less defensive when they are assessing their ability to handle tomorrow’s needs as opposed to how they are doing today.
The other ‘mechanical’ question is the level of granularity at which to assess. There are two degrees of freedom here. One is the scope of the capability. You could, for example, assess the entire IT capability as a single unit (not very helpful!). More realistically, you could assess the 8-10 top-level capabilities that comprise the entire IT capability, or the 8-10 sub-capabilities that comprise any one of those top-level capabilities.
You can also adjust the level of assessment – for example, assessing against the four dimensions (Purpose, Commitment, Ability, Accountability) or against their lower-level attributes, such as Service Definition or Goals and Guiding Principles.
Figuring out the scope and level of assessment is part art, part science. In general, start at the highest reasonable level, then drill down where findings indicate a need to go deeper. I typically start with one assessment for the IT Value Chain capabilities (discover, deliver, sustain) and one for each of the Align/Govern and Enabling Capabilities – this usually means 7 or 8 assessment sessions. I allow 2 hours per assessment and try to conduct each with a team of 8 to 10 people, drawn from the capability being assessed and from related enabling or aligning capabilities.
Similarly, I usually start with the top level dimensions, but make sure that the assessment teams have access to descriptions and definitions of the lower level attributes. If we are unable to come quickly to consensus at the top dimension level, we will drill down to the underlying attributes. Experience with a relatively small number of assessments will give you a feel for appropriate scope and depth.
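For readers who want to keep a working record of these sessions, the mechanics above can be sketched as a simple data structure. This is purely a hypothetical illustration, not part of the framework itself: the four dimension names come from the framework, but the 1-5 scoring scale, the example attribute names, and the gap calculation are my assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

# The four assessment dimensions from the framework.
DIMENSIONS = ["Purpose", "Commitment", "Ability", "Accountability"]

@dataclass
class DimensionScore:
    """Consensus score for one dimension, with optional attribute drill-down."""
    name: str
    score: Optional[int] = None  # 1-5 team consensus (assumed scale)
    # Lower-level attribute scores, used when the team cannot
    # reach quick consensus at the dimension level.
    attributes: Dict[str, int] = field(default_factory=dict)

    def rollup(self) -> Optional[float]:
        # Prefer the top-level consensus if the team reached one;
        # otherwise average the lower-level attribute scores.
        if self.score is not None:
            return float(self.score)
        if self.attributes:
            return sum(self.attributes.values()) / len(self.attributes)
        return None

@dataclass
class CapabilityAssessment:
    """One facilitated self-assessment session for a single capability."""
    capability: str
    dimensions: List[DimensionScore] = field(
        default_factory=lambda: [DimensionScore(d) for d in DIMENSIONS])

    def biggest_gap(self, target: float = 5.0) -> Optional[Tuple[str, float]]:
        # The dimension furthest from the anticipated future state
        # is the first candidate for improvement.
        scored = [(d.name, target - d.rollup())
                  for d in self.dimensions if d.rollup() is not None]
        return max(scored, key=lambda t: t[1]) if scored else None
```

In use, a team might score Purpose at the dimension level but drill into Commitment attributes, and the largest gap then falls out of the rollup. The attribute names here ("Sponsorship", "Accountabilities") are illustrative only.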
It’s Not About the Assessment – It’s About Identifying the Gaps to Close!
While the assessment process itself can be extremely enlightening, the assessment is not an end in itself – it is a means to an end. The key is to gain clarity on where the biggest gaps lie, so that you can move from assessment to improvement.