Capability Maturity Model Integration (CMMI) and Me

A brief history

After earning a degree in English literature and writing in 1968, I joined the exploding information revolution, training as a programmer analyst.

My IT career spanned computer and OS evolution from early third-generation mainframes through virtual machines—from flat indexed files through hierarchical and relational databases—through numerous generations of Moore’s law chip evolution in minicomputers, microcomputers, tablets, and smartphones—from LAN through WAN and VPN—from ARPANET to the Internet and web-based applications. I designed, led, and managed development of online, database-backed information systems for organizations as diverse as Harvard University, AT&T, the Federal Reserve Bank, and Merrill Lynch. I also did groundbreaking work at those institutions in methodology, CASE, and software process.

At Merrill Lynch (ML) in the decades on either side of the millennium, my career intersected with the Software Engineering Institute’s Capability Maturity Model for Software. CMM-based software process improvement became my full-time job for the remainder of my corporate IT career. I had formal SEI training as an Assessment Team member and participated in a number of weeks-long projects and gap analyses to assess organizational process maturity in numerous ML/IT departments. I became an SEI-certified CMM Trainer and presented numerous overviews and formal classes. At my corporate home base, the Jacksonville Solution Center, I was a leader in that department’s rapid evolution to CMM Level 5, the model’s highest organizational maturity rating. We had quantitative project, process, and quality management, and a program for continuous process improvement in all process areas addressed by CMM-SW. Along the way, I also received formal training as a Six Sigma Black Belt, which enabled me to develop the quantitative analysis and management practices necessary for CMM levels 4 and 5.

The CMM was evolving, even as we worked to apply it. The base CMM had been applied to numerous disciplines, in models for systems engineering, software engineering, software acquisition, workforce practices, and integrated product and process development. A comprehensive model, aptly named CMM Integration (CMMI), was released and is now the de facto standard.

After my departure from ML, I “discovered” another discipline that benefits from CMM integration. I’ve had a second career in digital media production, operating a “boutique” studio. I quickly realized that CMM processes were applicable to studio work. Although there are obvious differences between system/software engineering and media production, there is some overlap in project planning and management, requirements development & management, product integration, verification, and validation. I have informally adapted specific practices to plan and manage media projects, to continuously improve my process for making, editing, and distributing digital media, and to manage an ever-growing archive of digital media and reusable project assets.

My IT career left me with two lasting legacies that are relatively independent of galloping information technology: a systems/process analysis skill set and understanding of CMMI—as a model—and its specific practices for software engineering. I can still analyze requirements and develop a structured model of an application (or business process) and I continue to experiment with, reflect upon, and write about the CMMI’s profound utility and application in a growing array of disciplines—including media production.

I plan to explore the model over a series of articles in this blog and maybe a book. I write to consolidate, enhance, and share my own experience and knowledge. I don’t presume to write for experienced CMMI “experts”. I write for IT executives, managers, analysts, designers, and engineers who may want or need to learn more about the subject and may enjoy “readily digestible” forays into discrete aspects of the model’s daunting complexity. I also write for managers and practitioners in other disciplines who may be curious to see what a capability maturity model might offer to their work.

If you are interested, I invite you to “follow” this blog and join me on this journey of exploration and discovery.

Why Do (Software) Projects Fail?

Seven ways a project (plan) can fail and foredoom a project

Image by Gerd Altmann from Pixabay

The primary goal of a project plan is to deliver the best tradeoff between reality and commitments—to satisfy as many of its requirements as possible while making and keeping realistic commitments given real constraints of time and money.

What’s a failed project? One that doesn’t meet its commitments for—

  • Products or services that perform as specified by the customer
  • On time
  • Within budget

This article specifically addresses software projects but the points made apply to any kind of work plan.

Why do projects fail?

Many—perhaps most—software projects that fail do not fail in software development.

Many projects fail before software development actually begins.

They fail because their plans are defective. In that sense, they are planned failures.

Failure in planning is planning to fail

Seven Ways a Project (Plan) Can Fail

  1. The project isn’t sufficiently scoped, estimated, and scheduled
  2. The plan does not sufficiently address project risks and mitigation
  3. The plan does not sufficiently address project resource requirements
  4. The plan does not sufficiently address timely acquisition of knowledge and skills
  5. The plan does not sufficiently address stakeholder involvement
  6. The plan proceeds without strong commitments by all stakeholders
  7. The plan is not maintained throughout the project life cycle

The keyword here, in case you didn’t notice, is sufficiently.

The project isn’t sufficiently scoped, estimated, and scheduled

Project scope is the framework for the plan. The goal is to identify all stakeholders (see below), the business goals that will be addressed, the conceptual product architecture and the work to be performed.

The work plan is typically developed as a hierarchical work breakdown structure (WBS) that shows the tasks to be performed in each area of software engineering and support. It is (or should be) based upon the envisioned product architecture.
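One way to picture a WBS is as a simple tree whose leaf tasks carry effort estimates that roll up through the hierarchy. This Python sketch is purely illustrative; the node names and numbers are hypothetical, not a prescribed structure:

```python
from dataclasses import dataclass, field

@dataclass
class WBSNode:
    """One element of a work breakdown structure (illustrative, not a standard)."""
    name: str
    effort_hours: float = 0.0          # estimated effort, for leaf tasks only
    children: list["WBSNode"] = field(default_factory=list)

    def total_effort(self) -> float:
        """Roll leaf-task estimates up through the hierarchy."""
        if not self.children:
            return self.effort_hours
        return sum(child.total_effort() for child in self.children)

# A tiny WBS tied to an imagined product architecture
wbs = WBSNode("Order-entry system", children=[
    WBSNode("Requirements analysis", children=[
        WBSNode("Interview stakeholders", 24),
        WBSNode("Model use cases", 40),
    ]),
    WBSNode("Design", children=[
        WBSNode("Data model", 32),
    ]),
])

print(wbs.total_effort())  # 96.0
```

The point of the rollup is that the plan’s totals are only as good as the leaf-level estimates, which is why those leaves should trace back to envisioned product components.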

Mistakes often made here (in no particular order):

  • Inadequately detailed planning

You don’t want to wait until the end of the current phase (or cycle) to find out how well your estimates are tracking against actual results. Once you have sufficient size and complexity data from a phase, you should plan the next phase at the lowest practical level of detail—tied to specific work-product components (requirements, designs, code, test plans, etc.). Unfortunately, you can only do this by phase (or by cycle for spiral or agile lifecycles).

This leads to the next possible mistake.

  • Prematurely detailed planning

Work plan details depend on and correspond to available product metadata.

You can’t plan detailed work for software product requirements analysis without actual requirements. With requirements in hand, you can scope the features and functions (however named) that will be reviewed, clarified, and modeled by the initial analysis phase of the project. You won’t have the analysis work products you need to do a detailed plan for product design until analysis produces them.

As the saying goes, you don’t know what you don’t know.

Detailed planning is only indicated for the next phase or cycle of the project. Anything more is speculation, not planning.

This is one reason project planning must be reiterated at project milestones to use newly developed work-product metadata—specifically size and complexity. Until you can quantify how much work will be performed and its level of complexity, you can’t plan detailed tasks to do the work.

  • Failure to develop estimates for work product size and complexity

Size matters!

This is one of the most common defects seen in software project planning. Aside from lines of code—and that only rarely—very little work-product sizing is estimated. Every software work product in every phase—regardless of methodology—offers some basis for sizing and assessment of complexity. Work product size and complexity drive—or should drive—estimates for work effort.

  • Confusion of work effort and work duration

Many projects estimate effort in terms of calendar time—days, weeks, or months.

Effort measures how much human effort it will take to do the work. Duration is how long the work will take.

It takes longer than one week for one full-time resource to expend forty hours of effort on a task.

Effort should be estimated; duration should be derived.

Effort should be estimated based upon size and complexity. Balancing effort with resource availability should yield the correct duration.

  • Failure to use objective models for effort estimation

As mentioned previously, work size and complexity should be the principal parameters for estimating effort. An estimating model should use historical or industry data for comparable work whenever possible.
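One well-known objective model of this kind is basic COCOMO, which estimates effort as a power function of size. Here is a minimal Python sketch; the default coefficients are the published “organic mode” values, but in practice a and b should be calibrated against your own historical project data:

```python
def estimate_effort_pm(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Effort in person-months from size in thousands of lines of code (KLOC).

    Defaults are the basic COCOMO 'organic mode' coefficients; recalibrate
    a and b from your own organization's historical data when you can.
    """
    return a * kloc ** b

# ~91 person-months for a hypothetical 32-KLOC organic project
print(round(estimate_effort_pm(32), 1))
```

The exact coefficients matter less than the discipline: effort comes out of a model fed by measured size, not out of a calendar.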

The plan does not sufficiently address project risks and mitigation

Nothing is without risk. Project risks can include hurricane season, timely availability of resources, volatile and changing business requirements, and timely completion of related projects, among other things. Virtually every aspect and section of the project plan may pose risks that require monitoring and mitigation.

Risks should be itemized, quantified, and prioritized for their potential impact, probability of occurrence, and timing. Work to monitor the risk and mitigate the impact should be planned and resources allocated on a contingency basis. If you don’t do this, you increase the risk that your project will be late and/or over budget.
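That quantification can be sketched with a toy risk register, ranking risks by exposure (probability times impact). All names and numbers below are hypothetical illustrations:

```python
# Hypothetical risk register; names and numbers are illustrative only.
risks = [
    # (risk, probability 0-1, impact in schedule-days if it occurs)
    ("Hurricane closes the data center", 0.10, 20),
    ("Key DBA reassigned mid-project",   0.30, 15),
    ("Upstream feed project slips",      0.50, 10),
]

# Exposure = probability x impact; rank to decide where mitigation effort goes.
by_exposure = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, p, impact in by_exposure:
    print(f"{name}: exposure {p * impact:.1f} days")
```

Note that the highest-exposure risk here is not the most dramatic one; ranking by exposure rather than by gut feel is the point of quantifying at all.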

The plan does not sufficiently address project resource requirements

You can’t develop a realistic project schedule without knowing two things: work effort estimates and available resources to do the work.

This process can founder on the myth that resources are available full time to do the work. Therefore, a forty-hour estimate for effort becomes one week on the schedule. Resources are never available full time. There are myriad ongoing overhead tasks and events—time and status reporting, production system problems, staff meetings, employee events, and the like—that interrupt time on task. For lack of anywhere else to report the time, team members report it against assigned project tasks and slippage ensues.

A realistic plan devises an algorithm for computing realistic availability metrics. Depending on the organization, real availability is likely to be in the 60-80% range.
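Such an algorithm can be sketched in a few lines of Python. The ten-hours-per-week overhead default below is an assumption, chosen to yield 75% availability, inside the range just cited:

```python
def availability(overhead_hours_per_week: float, work_week: float = 40.0) -> float:
    """Fraction of the work week actually available for project tasks."""
    return (work_week - overhead_hours_per_week) / work_week

def duration_weeks(effort_hours: float, headcount: int,
                   overhead_hours_per_week: float = 10.0,
                   work_week: float = 40.0) -> float:
    """Derive calendar duration from estimated effort and realistic availability.

    Effort is estimated; duration is computed, never assumed equal to effort.
    """
    avail = availability(overhead_hours_per_week, work_week)
    return effort_hours / (headcount * work_week * avail)

# A forty-hour task for one person at 75% availability takes 1.33 weeks, not 1.
print(round(duration_weeks(40, 1), 2))  # 1.33
```

This is exactly the forty-hours-does-not-equal-one-week point above, made mechanical: the schedule divides effort by real capacity, not nominal capacity.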

The plan should also identify specific non-routine equipment and software resources along with the effort and time required to assimilate and integrate them into ongoing project work.

The plan does not sufficiently address timely acquisition of knowledge and skills

Typically, a project team is assigned because proven individual and team skill sets are a good match for a project. In the ever-evolving, rapidly changing IT landscape, that will not always be a given. At a minimum, project planning should assess skill needs and skilled-resource availability and plan time and resources for closing any gaps.

The plan does not sufficiently address stakeholder involvement

Project stakeholders may include customers, users, other business and IT organizations, other systems, other projects, vendors, and the project team.

Strong stakeholder commitments are critical to project success.

The plan should document:

  • Stakeholders and specific commitments
  • Specific roles and responsibilities, tied to specific scheduled events in the plan
  • Time, effort, and resources needed to ensure timely stakeholder involvement

The plan proceeds without strong commitments by all stakeholders

Without formally documented organizational commitments, authorized by appropriate levels of management, a project can flounder and founder through no fault of the project team.

The plan is not maintained throughout the project lifecycle

Project planning is too often regarded as a preliminary front-end to the project lifecycle. A dynamic instrument, the plan changes on a daily and weekly basis based upon actual work performed and actual events—planned or unplanned. That assumes ongoing monitoring and timely feedback on every critical component that has been cited in this post.

If you estimate it, you need to track it and revise your estimates as needed based upon actual results.

The planning process should be reiterated at prescribed milestones in the project lifecycle: at the completion of phases in a traditional waterfall lifecycle, of threads in a multi-threaded waterfall, or of cycles in a spiral or agile lifecycle. At each point, there is new and expanded software product metadata and verification data that will enable a greater level of detail in the next phase or cycle of development.

There is much overlap between project planning and project monitoring & control; project management encompasses both processes throughout the life cycle of the project.

A shorthand way of saying all this is that a project can fail because there is no appropriate planning process. An appropriate planning process anticipates and satisfies all the potential causes of failure cited above.