Traditional approaches to measuring performance follow a classic model: identify and collect metrics, then report them to management through summary reports or dashboards. Improvements are then measured as changes in those metrics. While this approach is better than failing to identify any performance measures at all, it typically falls well short of actually measuring progress. In many cases, little thought or refinement has gone into which metrics are meaningful or what to do with the information collected. Dashboards may look cool, but often fail to convey meaningful insight.
A gauge by itself, for example, means nothing. Even real gauges in a factory are designed to whistle or sound an alarm when a certain level is exceeded, yet few corporate dashboards build equivalent business rules into their reporting mechanisms. Ultimately, most dashboards force their users to manage by looking at everything instead of managing by exception. In addition, dashboards and other performance management mechanisms often drive the wrong behavior. Employees, fearful of the repercussions of 'yellows' or 'reds' on a scorecard, are incentivized to mask issues from management until something more serious exposes them, sometimes after it is too late to do anything material to mitigate the damage.
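The manage-by-exception idea can be sketched in a few lines. This is an illustrative sketch, not from the source; the metric names and thresholds are hypothetical, and the point is only that a reporting mechanism can encode the "alarm" business rule instead of showing every gauge:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float
    threshold: float  # alarm when value exceeds this level

def exceptions(metrics):
    """Return only the metrics that breach their threshold,
    so reviewers manage by exception instead of scanning everything."""
    return [m for m in metrics if m.value > m.threshold]

# Hypothetical example metrics
metrics = [
    Metric("open_defects", 12, 50),
    Metric("cycle_time_days", 21, 14),
    Metric("budget_variance_pct", 3, 10),
]

for m in exceptions(metrics):
    print(f"ALERT: {m.name} = {m.value} exceeds {m.threshold}")
```

Here only the breached metric surfaces; everything within its threshold stays out of the report, which is the behavior most dashboards omit.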
Collecting quantitative metrics is also expensive: the labor required to gather the data, or even to automate its collection, can be enormous. For this reason, many successful efforts take the often better and less expensive approach of assessing the performance of change initiatives against a maturity model. By quickly, if somewhat subjectively, identifying the existing capability against a basic maturity model, an individual can then define what the organization or effort needs to accomplish to move to the next level.
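The assessment loop described above can be sketched simply. This is an illustrative sketch, assuming a five-level scale in the spirit of common capability maturity models; the level names and the example assessment are hypothetical, not from the source:

```python
# Hypothetical five-level maturity scale (CMMI-style naming convention)
LEVELS = {
    1: "ad hoc",
    2: "repeatable",
    3: "defined",
    4: "managed",
    5: "optimizing",
}

def next_target(current_level):
    """The goal is simply the next level up; reaching it
    is itself the measure of progress."""
    return min(current_level + 1, max(LEVELS))

current = 1  # e.g., 'this entire function is managed on an ad hoc basis'
target = next_target(current)
print(f"Current: level {current} ({LEVELS[current]}); "
      f"target: level {target} ({LEVELS[target]})")
```

The subjective judgment lives in picking `current`; once picked, the target and the measure of progress fall out for free, with no metric-collection effort.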
An additional benefit of the maturity model approach is that it yields a significantly easier story to tell and understand. Occasionally, the push for a more quantitative approach is driven by others' desire to see 'proof', the evidence that change is needed or that something works better. The reality is that collecting real quantitative metrics can be a rabbit hole from which otherwise valuable efforts never climb out. In many cases, for example, the real data is held by contractors who have neither the incentive nor the obligation to provide it. With a maturity model approach, the case is often easier: 'we have no automated processes, this entire function is managed on an ad hoc basis, so let's develop and use a basic, repeatable process, and our ability to do that is the measure of progress.'
In an organization or function with a very low level of overall maturity, anecdotal, broad-strokes assessments of performance are acceptable. Once maturity reaches level three or four, collecting quantitative metrics may then make sense to drive further incremental value.