What to Expect in the New Version of CMMI® for DEV (Version 1.3)

The first day (17th August 2010) of the SEPG Asia-Pacific 2010 conference covered the changes expected in the CMMI® models as part of the release of V1.3. The tutorial was conducted by Mike Phillips of the SEI and was attended by a large group of professionals (mostly from the IT industry).

Here are the key points that I have gathered and my reactions to some of the changes in the DEV model. The detailed presentation can be downloaded from here [Thanks Mike :-)]

A summary of the changes is available in a presentation titled CMMI v1.3 – What’s New on SlideShare.

Changes to the Generic Goals and Generic Practices (DEV Model)

1)    Generic goals 4 and 5 have been removed from the models. So, generic goals stop at GG3 for all process areas. [Reaction: Good. The material in GG4 and GG5 was very scanty, and could not be used to implement CL4 and 5 practices]

2)    No significant changes in the intent of the generic practices, other than some changes in the wording of a few of them (GP 2.6, 2.9 and 3.2) [Reaction: It would have been nice if some generic practices had been merged, for example GP2.8 and GP2.10, to reduce the number of generic practices. Maybe in version 1.4 or later :-)]

3)    In the model book (or technical report), the generic goals and generic practices are described just once at the start of the document and are not repeated for each process area [Reaction: Slimmer book to carry, fewer trees to be chopped, nice touch]

Changes to the Maturity Level 2 Process Areas (DEV Model)

1)         Requirements Management (REQM) has been shifted to the Project Management category of PAs [Reaction: Makes no difference, except that there are no engineering PAs at maturity level 2. Imagine a maturity level 2 development company saying “we have great management and support practices, but our engineering practices may not be….”]

2)         Supplier Agreement Management (SAM) has been simplified. Two contentious practices of SG2 (the erstwhile SP 2.2 and SP 2.3) have been converted to sub-practices of other specific practices [Reaction: These two practices were often a source of grief to many organizations in their appraisals. Fantastic!]

Changes to the Maturity Level 3 Process Areas (DEV Model)

1)         The optional IPPD addition (one goal in OPD and one goal in IPM) has now been converted into specific practices in OPD and IPM (one additional practice each) [Reaction: This is a pity, because IPPD has great value. I would have liked to see more emphasis on IPPD, with greater clarity, instead of IPPD becoming two practices in the whole of CMMI®]

2)         No other changes, other than changes in the language to bring in more clarity.

Changes to High Maturity Process Areas (DEV Model)

1)         Organizational Innovation and Deployment (OID) has been renamed Organizational Performance Management (OPM). A new goal has been added to align process improvements with business objectives and process performance data. [Reaction: This was always required. Though the change looks big, most high maturity organizations would already be implementing the requirements of this new goal. However, the new name “Organizational Performance Management” is overkill and misleading]

2)         Quantitative Project Management (QPM) has been made tighter and the requirements are more explicit. No significant change in the intent of the process area.

3)         Causal Analysis and Resolution (CAR) and Organizational Process Performance (OPP) have undergone some changes in the verbiage, though nothing significant in intent.

Version 1.3 of DEV (along with SVC and ACQ) will be released in November 2010.

SCAMPI℠-A appraisals using version 1.2 of the model can be conducted for a period of 12 months after the release of version 1.3. Organizations aiming for an appraisal in the latter part of 2011 should consider switching to version 1.3 right away.

The SCAMPI℠-A methodology is also undergoing an upgrade, which will be released slightly later. So, for some time, organizations can use the current SCAMPI℠-A version 1.2 to appraise against CMMI® version 1.3.

I will try to post about SVC and the expected changes to the appraisal methodology sometime soon.

Also see: CMMI® version 1.3 Released


I am Rajesh Naik. I am an author, management consultant and trainer, helping IT and other tech companies improve their processes and performance. I also specialize in CMMI® (DEV and SVC), People CMM® and Balanced Scorecard. I am a CMMI Institute certified/ authorized Instructor and Lead Appraiser for CMMI® and People CMM®. I am available on LinkedIn and I will be glad to accept your invite. For more information please click here. To get email alerts for new posts, click here to subscribe.

What is (Project) Success in a High Maturity Organization?

Project success is measured by comparing the actual performance with what was budgeted, planned and committed – typically with respect to parameters of cost, schedule and quality. Projects that meet all parameters are considered completely successful, and those that meet some parameters are considered less successful. Projects that fail in most/ all parameters are labeled as failures. Of course, sophisticated systems may even use the extent to which they missed the objectives (near miss or missed by a mile/ kilometer) as a factor in determining the degree of success or failure.

Is this really how a high maturity (HM) organization (in terms of the CMMI® framework) should evaluate project success? I believe that the refinement in process and project management maturity should be used to fine-tune how we evaluate success.

An HM organization is “aware” that all processes have variation inherent in them. It “knows” that projects (which are composed of these processes) have a probability of achieving their objectives, but that success is not guaranteed. The role of project management (esp. QPM) is to continually evaluate the probability of success and to manage the conditions so as to maximize that probability.

When a single project goes through its life, those probabilities play out, which means that even if the probability of completing the project within its budget was 90%, that one project can still overshoot the budget. Of course, if we ran similar projects millions of times, only 10% of them would overshoot the budget; but we have only one project here. In such an “aware” organization, is “actual budget compliance” the right way to measure success? If so, how is this organization different from a non-HM organization?
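As a side note, here is a minimal Python sketch of that “one project vs. many runs” point, assuming (purely for illustration) that budget compliance behaves like a simple Bernoulli trial with a 90% chance of staying within budget: any single run can still overshoot, and the 10% shows up only over many repetitions.

```python
# A minimal sketch (with assumed numbers): budget compliance modelled as a
# simple Bernoulli trial. Any single project can overshoot; the 10% only
# shows up over many repetitions.
import random

random.seed(42)

P_WITHIN_BUDGET = 0.90  # assumed probability of finishing within budget

def run_project() -> bool:
    """Return True if this one project finishes within budget."""
    return random.random() < P_WITHIN_BUDGET

# One project: success is probable, not guaranteed.
print("This single project within budget?", run_project())

# Many similar projects: roughly 10% overshoot the budget.
n = 100_000
overshoots = sum(not run_project() for _ in range(n))
print(f"Overshot budget in {overshoots / n:.1%} of {n:,} simulated projects")
```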

I believe that in a HM organization, project success should not be measured by after-the-fact results, but by the rigor and continual alignment of the project to maximize the probability of success. So, in a HM organization, a project is successful, if and only if:

*    The project, at start-up, consciously makes choices (composes the defined process, aligns plans) that maximize the probability of meeting its multiple objectives

*    The project continually evaluates the probability of meeting the objectives and revises its choices to maximize its probability of success

Now, in such an organization, the “best project” award may be given to a project which in the conventional sense has actually failed 🙂 – such an organization would be truly acting on the belief that “if we implement the process, the results will eventually follow”.

Your comments?



What comes first – SPC or a stable process?

An interesting topic, and one that has been discussed very often. In every discussion, people agree on what is right and what needs to be implemented; but in actual implementation, the principles are forgotten. Therefore, it is good to re-align ourselves with the basics time and again.

What is often seen in actual implementations of SPC (an ineffective and incorrect approach):

1)    A process is documented and used

2)    Data related to the process is collected

3)    When we need to do sub-process control (because we are aiming for a High Maturity rating), an SPC chart is prepared.

4)    Data points that are outliers are thrown out (root cause analysis is not possible, because the outlier data belongs to a distant past, and the causes are lost in the mists of time)

5)    Control limits are recalculated

6)    Steps 4) and 5) are repeated till all (remaining) points demonstrate process stability

7)    The SPC parameters (center line, UCL/ UNPL, LCL/ LNPL) are declared as baselines and used for sub-process control. The fact that the limits are too wide or that a lot of data points were thrown out (without changing anything in the process) is ignored.

What we have in the above scenario is a maturity level 2/ 3 organization using maturity level 4 tools. Usage of tools alone does not increase maturity. We cannot create a stable process through the use of SPC; we can only confirm the stability of the process through SPC and get signals when the process is out of control or shows changes in trends.
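To make the pitfall concrete, here is a minimal Python sketch (with made-up data and a hypothetical review-effort parameter) of the “trim and recompute” loop described above, using an XmR (individuals/ moving range) chart and the standard 2.66 factor:

```python
# A hypothetical sketch of the "trim and recompute" loop, using an XmR
# (individuals / moving range) chart; 2.66 is the standard XmR chart factor.
import statistics

def xmr_limits(values):
    """Centre line and natural process limits for an individuals chart."""
    centre = statistics.mean(values)
    mr_bar = statistics.mean([abs(a - b) for a, b in zip(values[1:], values)])
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

# Made-up review-effort data (hours per work product), with two spikes.
values = [5.1, 4.8, 5.3, 5.0, 19.8, 4.9, 5.2, 5.4, 5.0, 5.1,
          4.7, 21.5, 5.2, 4.9, 5.3, 5.0]

# The ineffective pattern: discard out-of-limit points and recalculate,
# until everything that remains looks "stable"; no root cause analysis done.
discarded = 0
while True:
    centre, lnpl, unpl = xmr_limits(values)
    kept = [x for x in values if lnpl <= x <= unpl]
    if len(kept) == len(values):
        break
    discarded += len(values) - len(kept)
    values = kept

print(f"'Stable' baseline: CL={centre:.2f}, LNPL={lnpl:.2f}, UNPL={unpl:.2f}")
print(f"Points thrown out without any causal analysis: {discarded}")
```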

A More Effective Implementation of SPC:

1)    A process is documented and used. As the process is used, variations in the interpretation of the documented process are qualitatively analyzed. Actions are taken to augment the process definition, training and orientation till the interpretation and qualitative understanding of the process are consistent.

2)    Process compliance audits (PPQA audits) on the implementation of the process identify more actions that need to be implemented to fine-tune the definition, training and orientation related to the process.

3)    Once the audits show consistent compliance, data related to the process performance are collected. Integrity of the data is checked and the data collection process is streamlined and consolidated, till the collected data demonstrates the required credibility.

4)    Now we start looking at the data somewhat quantitatively (without using full SPC) – does the trend chart show stability? Is there too much dispersion/ variation? Based on the findings, the definition, training and orientation related to the process are refined further.

5)    This is the point at which we start using SPC charts to confirm process stability (a sketch of such a stability check follows this list). Each indication of instability is analyzed. Corrective and preventive actions are identified to further standardize the process, based on analysis of the past instability. Once we are sure that the causes of those instabilities have been removed, we can remove the corresponding points from the analysis.

6)    We are still left with points which show instability, and our CAR analysis tells us that some of the causes are truly extremely rare events. These are then removed from the data pool. Now all the remaining points are a part of the process. If the process still shows instability, then we can do further analysis – are these points really part of a single process? Beneath the surface, are there two or more processes, and do we need to separate out the data? (e.g., the process may behave differently in the “performance appraisal season” :-))
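Complementing the sketch of the pitfall above, here is a minimal check of the kind referred to in step 5) (same hypothetical XmR assumptions, made-up defect-density data): out-of-limit points and long runs on one side of the centre line are treated as triggers for causal analysis, not as points to be deleted.

```python
# A minimal, hypothetical stability check: out-of-limit points and long runs
# on one side of the centre line become triggers for causal analysis (CAR),
# not points to delete. Same XmR assumptions as the earlier sketch.
import statistics

def xmr_limits(values):
    centre = statistics.mean(values)
    mr_bar = statistics.mean([abs(a - b) for a, b in zip(values[1:], values)])
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

def stability_signals(values, run_length=8):
    """Return the indices of points that warrant causal analysis."""
    centre, lnpl, unpl = xmr_limits(values)
    signals = {i for i, x in enumerate(values) if x < lnpl or x > unpl}
    # Run rule: `run_length` consecutive points on one side of the centre line.
    side = [1 if x > centre else -1 for x in values]
    for i in range(len(values) - run_length + 1):
        window = side[i:i + run_length]
        if all(s == window[0] for s in window):
            signals.update(range(i, i + run_length))
    return sorted(signals)

# Made-up defect-density data; the last eight points show a level shift.
data = [2.5, 2.8, 2.4, 2.7, 2.6, 2.9, 2.5, 2.6, 2.8, 2.4, 2.7, 2.5,
        3.2, 3.3, 3.1, 3.4, 3.2, 3.3, 3.1, 3.2]

print("Points to investigate:", stability_signals(data))
```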

Having followed all the above steps, we now have a basis (and hence baseline) for an effective implementation of SPC.

Remember: We cannot create a stable process through the use of SPC, we can only confirm the stability of the process through SPC.



Size Does Matter! (for baselines and sub-process control) – Continued

Let us take the example of examination/ test centers that run an exam every day, throughout the year. Data from the past one year shows that, all over India, 30% of the candidates pass the exam and 70% fail.

The Bangalore test center handles around 1000 candidates per month, whereas the Mysore center handles around 100 per month. Over the last year, both centers have shown the same 30:70 pass/fail ratio.

For the month of June 2010, one center has reported a 38% pass rate and the other a 29% pass rate. Which center (Bangalore or Mysore) is more likely (has a higher probability) to have reported the 38%?

Well, Mysore is more likely to have reported the larger deviation from the average (+8%), and Bangalore the smaller one (-1%), because Mysore, handling fewer candidates, has fewer opportunities to “average out”. An easy way to see this is to take the case of a center that handles only 1 candidate: this center can only have either a 0% or a 100% pass percentage, i.e., either a -30% or a +70% deviation from the average.
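To put rough numbers on this intuition, here is a quick back-of-the-envelope check in Python, using the binomial standard error of a proportion (the 30% pass rate, the 38% month and the 1000/ 100 candidate volumes are from the example above):

```python
# A quick back-of-the-envelope check of the intuition above, using the
# binomial standard error of a proportion (numbers from the example).
from math import sqrt

p = 0.30  # long-run pass rate

for centre, n in [("Bangalore", 1000), ("Mysore", 100)]:
    se = sqrt(p * (1 - p) / n)   # standard error of the monthly pass rate
    z = (0.38 - p) / se          # how unusual a 38% month would be
    print(f"{centre}: SE = {se:.1%}; a 38% pass rate is {z:.1f} standard errors above 30%")
```

With 1000 candidates a month, a 38% pass rate would be more than five standard errors away from 30%; with 100 candidates, it is well under two.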

Let us now get back to the process performance baselines that we create and the way we do sub-process control. Here are some things that we need to keep in mind while creating, publishing and using baselines:

1) The baseline (mean and standard deviation) for a sub-process parameter (like coding productivity) will be different depending on whether we consider the coding phase of each project as a data point, or each program coded in each project as a data point. The standard deviation in the first case (large base per data point) is likely to be smaller than in the second case (small base per data point).

2) When we publish performance baseline data, we need to qualify it with the level of detail at which it applies.

3) When we use the baseline data to do sub-process control, it needs to be applied at the same level of detail. So, to do sub-process control on program-level coding productivity, we need to use the baseline that was created using programs as data points (not each project as a data point).

4) Baselines need to be created from comparable slices of the base data. For example, we cannot combine the coding productivity on large programs with the productivity on small programs. Even if the average/ mean remains the same, the standard deviation will be higher when we take data from a smaller base as against a larger base (a small simulation of points 1) and 4) follows this list).
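Here is the small simulation referred to above (all numbers are assumed, purely for illustration): the same underlying coding-productivity data shows a much wider spread when each program is a data point than when each project's average is a data point.

```python
# A purely hypothetical simulation of points 1) and 4): the same underlying
# coding-productivity data gives a much wider spread when each program is a
# data point than when each project's average is a data point.
import random
import statistics

random.seed(7)

# 20 assumed projects, each with 30 coded programs; productivity in LOC/hour.
projects = [[random.gauss(25, 6) for _ in range(30)] for _ in range(20)]

program_level = [p for proj in projects for p in proj]        # small base per point
project_level = [statistics.mean(proj) for proj in projects]  # large base per point

print(f"Programs as data points: mean={statistics.mean(program_level):.1f}, "
      f"stdev={statistics.stdev(program_level):.1f}")
print(f"Projects as data points: mean={statistics.mean(project_level):.1f}, "
      f"stdev={statistics.stdev(project_level):.1f}")
# Roughly the same mean, but the project-level stdev is about sqrt(30) times smaller.
```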

The above points are not just “nits”; they have an impact on the usefulness of baselines and sub-process control. Incorrect usage of baselines leads to incorrect indications of process stability/ instability.

