Systems, Process, and the English Major
In 1968, armed with a seemingly “useless” English degree, I was swept up and away into communication, computer, and information revolutions that have since rocked the world, and are still rocking it at full throttle. In “the Revolution” (however known), I served as foot soldier, as mercenary, and led troops into battle as a program director, project manager, and team leader. This is how it happened.
By 1968, demand had spiked for people who could program and manage computer systems and telecom networks, information technology (IT) industries so new that there wasn’t yet an education infrastructure to supply them with trained workers. IT organizations had to grow their own first generation of programmer analysts.
Supply and demand intersected this story at a Bell telephone company that trained this “English major” (EM) first as a network manager and then as a computer programmer analyst. The EM learned a new language—assembly code—a cryptic 2nd generation language for programming 3rd generation computers. Those huge, amazing machines could do (or seem to do) multiple things at the same time. They read and wrote large volumes of data on magnetic tape drives as big as refrigerators and disk drives the size of washing machines. He wrote his first programs on cardboard punched cards—one card per instruction.
Within five years, the English major thought things had come nearly full circle. Now he typed code on a TV-like display terminal, writing in a 3rd generation “procedural” language—COBOL—that was essentially structured English. Another computer program translated “high level” COBOL into the “assembly code” he’d first learned to write so that the computer could execute it. That was a programming language revolution—from machine code to English—and a system/IT revolution—software-compiled translation from English into machine code. These were early fruits of “the Revolution” that was changing—everything.
Over the next seven years, the English Major surfed the revolutionary waves across various emerging technologies—timeshare networks (computer time not condos), parallel programming with multi-tasking, and minicomputers. He ended the 70s with a sabbatical year and an extended meditation retreat to an ashram in India. “On the cushion,” he experienced new insights—that “everything is a verb”—a process; that every system is an attempt to control some process; that every process (or system) is actually a sub-process—and that includes computer programming, himself, and his mind. In fact, his mind seemed to have a lot in common with software.
That sabbatical was excellent priming for two simultaneous appointments—one full time as a lead analyst at Harvard University’s Office of Information Technology and one part-time as course designer and instructor at a post-graduate systems training school.
Mornings, he would try to teach liberal arts graduates (EM newbies) to think like computers in structured, logical, and relational terms.
The rest of the day, across the river at Harvard, he researched and prototyped new fruits of the Revolution: microcomputer systems, relational databases, structured development methodologies, 4th generation—non-procedural—programming languages, and expert systems—a step toward AI. He worked on a “virtual machine” in a networked system environment that extended off campus into something called The Advanced Research Projects Agency Network (ARPANET) that turned out to be a precursor to our Internet.
At night, as he planned the next morning’s lectures, the EM would reflect and ruminate upon the explosive progress of the new technologies that were inundating the culture—and his own mind. He saw human software process lagging far behind technology. He tried to render all this in relevant terms and concepts he could use to teach. One student observed, “You’re not just teaching us to program computers; you’re teaching us how to think—about everything!” He saw the graphs of revolutionary change accelerating, converging, and veering toward the vertical, and told his students (in 1983) that he could almost imagine how things might change up until some time in the 90s. Beyond that, in the new millennium, he said, “It’ll be science fiction.” He was right about that, and here we are “in that future”—supercomputers, pocket computers, smart phones, and the World Wide Web.
Bursting with ideas, the English Major set out as a systems samurai consultant for ten years to apply some of what he’d learned at Harvard and in teaching. He used his 4th generation programming and relational database expertise to lead RAD (rapid application development) projects. The projects included systems that assisted AT&T through divestiture of its Bell-system companies; a metadata-driven COBOL code generator; a computer-assisted budgeting process for a Federal Reserve Bank; and a system for translating and migrating 4th generation programming code between Unix and IBM VM operating environments. The unifying theme in his work was visualizing software development as a process and automating or at least computer-assisting that process.
Software process and software process improvement was his sole focus in the final decade of the English Major’s IT career. He let technology—the “what” of IT—rocket past and beyond him while he focused on the “how.” He hadn’t given up on computer-generated software but realized its ultimate fulfillment lay with rapidly evolving AI.
Meanwhile, he saw an IT industry beset and burdened with development and support of an ever-growing inventory of human-programmed systems. He became intrigued with software process as a means to “program the programmers and software managers.”
The English Major became an expert in the Software Engineering Institute’s Capability Maturity Model Integration (CMMI) and the Six Sigma Black Belt process for quantitative process improvement. He spearheaded one IT department within his global-financial-services employer in its rapid evolution to CMM Level 5, the highest capability maturity rating—the first American group in the finance sector to earn that distinction. His “reward” was the disbandment of that elite department and his own early retirement—and that’s a story for another post.
Dispirited and unwilling to return to corporate IT trench warfare, the English Major pursued a second career in writing and video production, where he still employs some “software” process best practices to improve productivity and quality assurance in digital media management and production. A process is a process. Making a movie—even developing a screenplay—shares much common process with developing a software product; they are all intellectual property (IP), and they all benefit from some of the same best practices.
He’s never stopped thinking and journaling about systems and process, not the technology so much as the social science, psychology, and philosophy—and the diverse opportunities and potential benefits of “process thinking” both inside and outside the IT industry. He still reflects and ruminates on systems and process at night, and still dreams about adventures and misadventures in his IT career.
Now, the English Major has embarked on a third career—“coming home” to write about what he’s learned—as a blogger, copywriter, and author. He intends to focus on the human side of the software process more than the technology. The English Major plans to keep it simple and hopes to entertain, inform, and even inspire readers within and outside IT.
You may be a business executive.
Your organization has business needs that are (or will be) allocated to software. You want to get those needs satisfied as quickly and economically as possible.
You may be an IT executive.
You want your development team to develop the best possible software solution to your customer’s needs—on time and within budget.
You may be a business analyst or software engineer.
You want the software solution to completely and accurately satisfy all its allocated business requirements.
One artifact addresses and supports all these needs—effective software requirements
· Complete enough to estimate development scope and complexity
· Specific and itemized enough to enable both development and traceability
· Accurate and unambiguous enough to enable mutual understanding and verification by all stakeholders (including software developers)
· A production baseline for alternate solutions and future enhancements
Effective requirement specifications should enable
· Software product and project management & development
· Two-way traceability — where is a specific requirement implemented? Which requirements map to this work product?
· Verification — does a specific software product satisfy its requirements?
· Change management — if a specific requirement changes, what’s the impact, what rework must happen?
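As a toy illustration of two-way traceability and change impact, the mappings above can be held in simple tables. All requirement IDs and file names here are hypothetical, invented only to show the mechanics:

```python
# Forward map: requirement -> work products that implement it
# (hypothetical IDs and paths, for illustration only)
trace = {
    "REQ-001": ["design/login.md", "src/login.py", "test/test_login.py"],
    "REQ-002": ["design/report.md", "src/report.py"],
    "REQ-003": [],  # not yet implemented -- a gap the matrix exposes
}

# Reverse map: work product -> requirements it satisfies
reverse = {}
for req, products in trace.items():
    for product in products:
        reverse.setdefault(product, []).append(req)

# "Where is a specific requirement implemented?"
print(trace["REQ-002"])         # forward trace

# "Which requirements map to this work product?"
print(reverse["src/login.py"])  # backward trace

# Change impact: if REQ-001 changes, these work products need review
print(trace["REQ-001"])

# Unimplemented requirements surface as empty forward traces
gaps = [r for r, ps in trace.items() if not ps]
print(gaps)  # ['REQ-003']
```

Even this trivial structure answers the two traceability questions and flags the change-management rework set; real tools add little more than scale and bookkeeping.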
The most typical and least effective form of requirements specification is narrative-text captured in a more or less structured document.
Narrative text fosters inconsistent naming and references, and ambiguity. Aside from oral transmission, narrative text is the least effective form of requirement specification and does little to enable effective verification, management, and development.
The most effective form of requirements specification is a requirements model.
A model can be a document with diagrams and tables in place of structured narrative. A set of computer-assisted analysis tools could capture and relate the same information.
Unlike a narrative-text document, a requirements model enables automated logical proof of requirements, answering verification questions such as:
· Is every data element defined in the dictionary?
· Where and how is data used in the system?
· Can every output data element be mapped to an input element or to a process algorithm that computes it?
· Does every input have a validation algorithm specified?
· Is every input data element actually used by a system process?
· Is every stored data element accessed by some process other than the one that stores it?
· Is every data element used by a process algorithm?
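These verification questions lend themselves to automation. Here is a minimal sketch of how a few of them could be mechanized against a toy requirements model; every element and process name below is invented for illustration:

```python
# A toy requirements model: a data dictionary, process definitions,
# and system inputs/outputs (all names hypothetical).
dictionary = {"cust_id", "cust_name", "order_total", "discount"}

processes = {
    "ComputeDiscount": {"inputs": {"order_total"}, "outputs": {"discount"}},
    "LookupCustomer":  {"inputs": {"cust_id"},     "outputs": {"cust_name"}},
}
system_inputs = {"cust_id", "order_total", "ship_code"}  # ship_code is unused
system_outputs = {"cust_name", "discount"}

# Is every data element defined in the dictionary?
used = set()
for p in processes.values():
    used |= p["inputs"] | p["outputs"]
undefined = (used | system_inputs | system_outputs) - dictionary
print(undefined)  # flagged for definition or removal

# Is every input data element actually used by a system process?
consumed = set().union(*(p["inputs"] for p in processes.values()))
unused_inputs = system_inputs - consumed
print(unused_inputs)

# Can every output be mapped to a process that produces it?
produced = set().union(*(p["outputs"] for p in processes.values()))
unmapped_outputs = system_outputs - produced
print(unmapped_outputs)  # empty set means every output is accounted for
```

The point is not the code but the principle: once requirements are metadata rather than narrative, defects like the dangling `ship_code` element become mechanical findings instead of testing-phase surprises.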
A requirements model is a set of metadata-tables and diagrams that capture
· System data flows
· System processes and nested sub-processes with steps and rules (algorithms)
· System data model
o Normalized data entities, entity attributes & entity relationships
o Data dictionary (all entities and entity-attributes)
o Data storage and storage access by system processes
A computer-assisted requirements model enables
· Streamlined capture and automated system analysis
· Data management
· Formatted analysis reports
· Data flow analysis
· Disambiguation and resolution of aliases and duplicates
· Requirement traceability
· Change management
· Code generation
Requirement models can be developed using the Unified Modeling Language (UML) or other machine-assisted languages and systems. They can also be developed using facilities as simple as Microsoft Office™ and Visio™ or non-procedural DBMS apps like WebFocus™.
I have personally led development of requirement models using all three approaches.
Occasionally, customer business analysts (with or without support from the development team) create this kind of requirements model in lieu of a narrative-text document.
Sometimes, the development team derives this level of software requirements from the customer’s narrative-text requirements, and sometimes this level of specification doesn’t happen at all. Requirement errors, ambiguities, and omissions then emerge as defects and issues in testing or production, leading to costly and time-critical rework.
Ideally, the development team would provide automation support and the metadata content would be jointly developed by business and IT analysts. Once developed, the tables and forms would be available for reuse and leveraging by subsequent projects—a baseline model of a baseline production application.
Sometimes this level of software requirements is called a “system analysis” or “requirements analysis”.
Sometimes this level of software requirements is mistakenly called (and confused with) “logical design”. It should precede and enable effective design decisions. It should ideally contain no elements of physical software architecture.
It may seem a “big deal” to develop this level of software requirements. I can attest it is less of one than a room of business and IT people sitting glassy-eyed around a conference table while wading through reams of narrative text, spawning defects of error and omission that may not surface until testing—or worse, until the system is in production operation.
There is probably no business or software process as ripe for qualitative and quantitative improvement as requirements development. Requirements process-improvement is a perfect conjunction of business Six Sigma and IT CMMI initiatives.
What’s process capability and organization maturity to you?
The answer to that question depends on who you are, what professional role(s) you perform and what interest, if any, you have in process.
I’ve written in other posts about what the Capability Maturity Model (CMM) is and how it has figured prominently in my own systems career. As a designated change agent, charged with teaching and leading CMM-based software process improvement, the model effectively capped and defined my IT career. So, my interest perforce was and remains comprehensive—the model in its entirety as a basis for organizational evolution. That meant formal training and certification for me, much effort by many people, and a series of formal assessments for my organization. That’s a big, expensive (time and money) undertaking that many IT shops could not afford.
If you are an IT executive and believe that aggressive pursuit of a formal process-capability or organization-maturity rating is imperative for your business model then, by all means, budget —for (expensive) staff training, many person-years of effort, and consultants expert in CMMI-based process-improvement—and have at it. I’ve been there, done that, and seen it tried in numerous organizations with mixed results. Done correctly, for the right reasons, it can be transformative for a group. Done incorrectly, for the wrong reasons, it can cost a lot of time and money with no tangible benefits.
Comprehensive commitment to staged implementation of CMMI, with or without formal training and assessments, is not the only way an individual or an organization can benefit from using the model.
For an organization, whether engaged in systems & software engineering or other disciplines, the model can serve as:
- A sensible roadmap for process improvement
- An educational resource on process and process-area relationships
- An inventory of best practices across twenty-two process areas
- A specific guide to assessing and improving process capability in any one (or more) of those areas
- A generic guide to assessing and improving process capability in nearly any discipline
An important point to emphasize is that the CMMI is a document, available for download, not some expensive application suite that you must purchase and master to assess its utility and value. If you can read at a college-level and function within an IT (or other) organization, there is nothing in the model beyond your intellectual grasp. Sure, there are hundreds of fine-point details and relationships, but I summed up its essence in less than 1000 words.
If you are an IT executive—or director, department or project manager—cultivating an intellectual understanding of the model can give you an informal but objective basis for your own assessment of your group’s process capabilities and maturity and your own consideration of what areas offer the best prospects for an effective ROI from process improvement.
If you are an IT individual contributor—managerial, staff, or technical—the model offers a comprehensive set of best practices across twenty-two process areas for process and project management, systems and software engineering, and support. Your organizational role(s), job description, and functional assignment(s) lie at the intersection of some subset of those practices.
If you are a manager in some other discipline, there may be a specific CMMI model for you, including at present Integrated Product & Process Development (IPPD) and Supplier Sourcing. There is a People CMM (PCMM) for HR, and a Personal Software Process (PSP) geared to individual and team software engineering. I cite these to emphasize the generic applicability of the fundamental Capability Maturity Model.
If your professional discipline lies beyond these IT-centric areas, you can still benefit from study, understanding, and even application of specific model concepts and components. My experience and observations have led me to believe that three of the four CMMI categories—process management, project management, and support— can be applied to virtually any engineering discipline. I include any form of information engineering—including media, writing, and publishing—as well as service-disciplines (as diverse as marketing and health care).
Personally, in a second career as a digital media producer, I have applied many best practices from the CMM to studio processes for production, post-production, and distribution. I am also working on a Personal CMM project that applies capability and maturity to processes for life management and self-improvement. I’ll be writing about both “models” in future posts.
Your takeaway from this discussion—I hope—is that you can use and benefit from even informal and personal use of CMM for professional and personal growth and profit.
Essential elements of CMMI-based process-improvement
This article and series addresses the CMMI model for systems and software engineering. There are additional CMMs that address other disciplines. Understanding the essential concepts presented here enables a more than cursory familiarity and gives you a basis for informed discussion and further inquiry. Unless otherwise noted, this article and series will reference the staged representation (organization maturity) of the model rather than the continuous representation (process area capability).
What is the CMMI?
CMMI is a descriptive model that integrates (showing dependencies and work-product flows) twenty-two process areas common to both systems and software development.
CMMI is also a prescriptive model that provides goals and best practices for software-process capability and software organization maturity.
CMMI is widely accepted and used by IT organizations worldwide and there is a wealth of data, analysis, and reportage that documents its effectiveness and potential benefits in terms of improved product quality and ROI from CMM-based software process improvement.
Essential CMMI Terminology
A process area is a set of goals and practices for one development area (e.g., Project Planning or Requirements Management). There are 22 process areas in CMMI.
Specific goals (47) are objectives that characterize what is needed to satisfy the purpose of one process area.
Specific practices (192) support specific goals.
Generic goals (4), common to all process areas, institutionalize process at a given level of maturity.
Generic Practices (10) implement common features to ensure that any process area will be effective, repeatable, and lasting.
Maturity Levels & Process Areas
Process areas are sequenced and staged at one of four maturity levels and form a foundation for process areas at higher maturity levels. Sequenced and staged deployment creates opportunity for practice and work-product synergy and leveraging between process areas.
1 — Initial Level (ad hoc process)
In a level-1 organization, project-process is ad hoc and frequently chaotic. Project success and work excellence depend on the competence and heroics of people in the organization and cannot be repeated unless the same competent and experienced individuals are assigned to the next project.
2 — Managed Level Process Areas
A level-2 organization ensures that work and work products are planned, documented, performed, monitored, and controlled at the project level.
- Requirements Management manages a project’s specific product requirements and verifies consistency between requirements, work plans, and work products.
- Project Planning establishes and maintains work plans to identify activities and work products.
- Project Monitoring and Control tracks work plan progress and signals significant deviations for management action.
- Supplier Agreement Management manages formal agreements for acquisition of products and services from suppliers external to the project.
- Measurement and Analysis develops and maintains measurement capabilities that support project and product MIS.
- Process and Product Quality Assurance assures adherence of project process and associated work products to applicable process descriptions, standards and procedures.
- Configuration Management establishes and maintains the integrity of work products using configuration identification, control, status accounting, and audits.
3 — Defined Level Process Areas
A level-3 organization establishes and maintains standard processes and work products, which are improved over time. Project standards, work activities, and work-product descriptions are tailored from organizational process assets. As a result, the processes performed are consistent, measurable, comparable, and reusable across projects and across the organization.
- Requirements Development models customer and product requirements in detail sufficient to mutual understanding and development of technical solutions.
- Technical Solution designs and builds solutions for requirements—as products, product components, and product related processes.
- Product Integration assembles the product from new and baseline components, ensures (by testing) that the product functions properly, and delivers the product.
- Verification (typically by inspection) assures that work products meet specified requirements.
- Validation (user and acceptance testing) demonstrates that a product or product component fulfills its intended use in its intended environment.
- Organizational Process Focus establishes and maintains sets of standard processes and process assets, and identifies, plans, and implements process improvement activities.
- Organizational Process Definition establishes and maintains a reusable set of standard-process assets.
- Organizational Training develops learning assets and cultivates skills and knowledge that enable satisfaction of organization and project requirements.
- Integrated Project Management establishes and manages projects, project dependencies, and the involvement of the relevant stakeholders—according to a defined process.
- Risk Management identifies potential problems before they occur, plans activities to track risks and contingent activities, as needed, that mitigate adverse impacts.
4 — Quantitatively Managed Level Process Areas
A level-4 organization manages standard process and projects using statistical and other quantitative techniques. Quantitative objectives for product quality, service quality, and process performance are established and used as criteria for management throughout the project life cycle.
- Organizational Process Performance establishes and maintains a quantitative understanding of standard process capability and performance across projects, and provides performance data, baselines, and models to quantitatively manage the organization’s projects.
- Quantitative Project Management employs statistical process control to quantitatively manage the project’s defined process and achieve the project’s established quality and performance objectives.
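To make the level-4 idea concrete, here is a minimal sketch of the kind of statistical process control a quantitatively managed group might apply—a Shewhart-style 3-sigma control test on defect density per inspection. The data and limits below are invented for illustration, not drawn from any real organization:

```python
import statistics

# Defects per KLOC observed in recent peer reviews (hypothetical data);
# the last sample is the newest observation to be tested.
samples = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3, 4.1, 6.9]

# Baseline the process from the stable history (all but the newest point)
mean = statistics.mean(samples[:-1])
sigma = statistics.stdev(samples[:-1])
ucl = mean + 3 * sigma               # upper control limit
lcl = max(mean - 3 * sigma, 0.0)     # lower control limit (floored at zero)

latest = samples[-1]
if latest > ucl or latest < lcl:
    # Out-of-control signal: a special cause, a candidate for
    # Causal Analysis and Resolution at level 5
    print(f"Investigate: {latest} outside [{lcl:.2f}, {ucl:.2f}]")
else:
    print(f"In control: {latest} within [{lcl:.2f}, {ucl:.2f}]")
```

The mechanics are trivial; the maturity lies in having stable, comparable measurements to feed them, which is exactly what Organizational Process Performance provides.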
5 — Optimizing Level Process Areas
A level-5 organization continually improves standard processes based on an understanding of the common causes of variation inherent in processes.
- Organizational Innovation and Deployment selects and deploys incremental and innovative improvements that measurably improve the organization’s processes and technologies and support the organization’s quality and process performance objectives as derived from the organization’s business objectives.
- Causal Analysis and Resolution identifies causes of defects and other problems and takes action to prevent them from occurring in the future.
In summary:
- CMMI prescribes best practices for continuous improvement of process-area capability and organization maturity.
- The staged representation of CMMI prescribes a sequence for process-area deployment that affords optimal leveraging of process and work-product assets between process areas.
- Process-area capability and organization maturity co-evolve in ways that complement both.
A brief history
After getting a degree in English literature and writing in 1968, I joined the exploding information revolution, training as a programmer analyst.
My IT career spanned computer and OS evolution from early third-generation mainframes through virtual machines—from flat indexed files through hierarchical and relational databases—through numerous generations of Moore’s law chip evolution in minicomputers, microcomputers, tablets, and smart phones—from LAN through WAN and VPN—from Arpanet to the Internet and web-based applications. I designed, led, and managed development of online, database-driven information systems for organizations as diverse as Harvard University, AT&T, the Federal Reserve Bank, and Merrill Lynch. I also did groundbreaking work at those institutions in methodology, CASE, and software process.
At Merrill Lynch (ML) in the decades on either side of the millennium, my career intersected with the Software Engineering Institute’s Capability Maturity Model for Software. CMM-based software process improvement became my full-time job for the remainder of my corporate IT career. I had formal SEI training as an Assessment Team member and participated in a number of weeks-long projects and gap analyses to assess organizational process maturity in numerous ML/IT departments. I became an SEI-certified CMM Trainer and presented numerous overviews and formal classes. At my corporate home base, the Jacksonville Solution Center, I was a leader in that department’s rapid evolution to CMM Level 5, the model’s highest organizational maturity rating. We had quantitative project, process, and quality management, and a program for continuous process improvement in all process areas addressed by CMM-SW. Along the way, I also received formal training as a Six Sigma Black Belt, which enabled me to develop the quantitative analysis and management practices necessary for CMM levels 4 and 5.
The CMM was evolving, even as we worked to apply it. The base CMM had been applied to numerous disciplines, in models for systems engineering, software engineering, software acquisition, workforce practices, and integrated product and process development. A comprehensive model, aptly named CMM Integration (CMMI), was released and is now the de facto standard.
After my departure from ML, I “discovered” another discipline that benefits from CMM integration. I’ve had a second career in digital media production, operating a “boutique” studio. I quickly realized that CMM processes were applicable to studio work. Although there are obvious differences between system/software engineering and media production, there is some overlap in project planning and management, requirements development & management, product integration, verification, and validation. I have informally adapted specific practices to plan and manage media projects, to continuously improve my process for making, editing, and distributing digital media, and to manage an ever-growing archive of digital media and reusable project assets.
My IT career left me with two lasting legacies that are relatively independent of galloping information technology: a systems/process analysis skill set and understanding of CMMI—as a model—and its specific practices for software engineering. I can still analyze requirements and develop a structured model of an application (or business process) and I continue to experiment with, reflect upon, and write about the CMMI’s profound utility and application in a growing array of disciplines—including media production.
I plan to explore the model over a series of articles in this blog and maybe a book. I write to consolidate, enhance, and share my own experience and knowledge. I don’t presume to write for experienced CMMI “experts”. I write for IT executives, managers, analysts, designers, and engineers who may want or need to learn more about the subject and may enjoy “readily digestible” forays into discrete aspects of the model’s daunting complexity. I also write for managers and practitioners in other disciplines who may be curious to see what a capability maturity model might offer to their work.
If you are interested, I invite you to “follow” this blog and join me on this journey of exploration and discovery.
Seven ways a project (plan) can fail and foredoom the project
The primary goal of a project plan is to deliver the best tradeoff between reality and commitments—to satisfy as many of its requirements as possible while making and keeping realistic commitments given real constraints of time and money.
What’s a failed project? One that doesn’t meet its commitments for—
- Products or services that perform as specified by the customer
- On time
- Within budget
This article specifically addresses software projects but the points made apply to any kind of work plan.
Why do projects fail?
Many—perhaps most—software projects that fail do not fail in software development.
Many projects fail before software development actually begins.
They fail because their plans are defective. In that sense, they are planned failures.
Failure in planning is planning to fail
Seven Ways a Project (Plan) Can Fail
- The project isn’t sufficiently scoped, estimated, and scheduled
- The plan does not sufficiently address project risks and mitigation.
- The plan does not sufficiently address project resource requirements
- The plan does not sufficiently address timely acquisition of knowledge and skills
- The plan does not sufficiently address stakeholder involvement
- The plan proceeds without strong commitments by all stakeholders
- The plan is not maintained throughout the project life cycle
The keyword here, in case you didn’t notice, is sufficiently.
The project isn’t sufficiently scoped, estimated and scheduled
Project scope is the framework for the plan. The goal is to identify all stakeholders (see below), the business goals that will be addressed, the conceptual product architecture and the work to be performed.
The work plan is typically developed as a hierarchical work breakdown structure (WBS) that shows the tasks to be performed in each area of software engineering and support. It is (or should be) based upon the envisioned product architecture.
Mistakes often made here (in no particular order):
- Inadequately detailed planning
You don’t want to wait until the end of the current phase (or cycle) to find out how well your estimates are tracking against actual results. Once you have sufficient size and complexity data from a phase, you should plan the next phase at the lowest practical level of detail—tied to specific work-product components (requirements, designs, code, test plans, and so on). Unfortunately, you can only do this by phase (or cycle, for spiral or agile lifecycles).
This leads to the next possible mistake.
- Prematurely detailed planning
Work plan details depend on and correspond to available product metadata.
You can’t plan detailed work for software product requirements analysis without actual requirements. With requirements in hand, you can scope the features and functions (however named) that will be reviewed, clarified, and modeled by the initial analysis phase of the project. You won’t have the analysis work products you need to do a detailed plan for product design until analysis produces them.
As the saying goes, you don’t know what you don’t know.
Detailed planning is only indicated for the next phase or cycle of the project. Anything more is speculation, not planning.
This is one reason project planning must be reiterated at project milestones to use newly developed work product metadata—specifically size and complexity. Until you can quantify how much work will be performed and its level of complexity, you can’t plan detailed tasks to do the work.
- Failure to develop estimates for work product size and complexity
This is one of the most common defects seen in software project planning. Aside from lines of code—and that only rarely—very little work product sizing is estimated. Every software work product in every phase—regardless of methodology—offers some basis for sizing and assessment of complexity. Work product size and complexity drive—or should drive—estimates for work effort.
- Confusion of work effort and work duration
Many projects estimate effort in terms of calendar time—days, weeks, or months.
Effort measures how much human effort it will take to do the work. Duration is how long the work will take.
It takes longer than one week for one full-time resource to expend forty hours of effort on a task.
Effort should be estimated; duration should be derived.
Effort should be estimated based upon size and complexity. Balancing effort with resource availability should yield the correct duration.
- Failure to use objective models for effort estimation
As mentioned previously, work size and complexity should be the principal parameters for estimating effort. An estimating model should use historical or industry data for comparable work whenever possible.
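To make the idea concrete, here is a minimal sketch of such a model in Python. The formula and all the numbers are illustrative assumptions, not an industry-standard model: effort scales with size, is adjusted by a complexity factor, and is divided by a historical productivity rate.

```python
# Illustrative effort model; the formula and numbers are hypothetical.
# effort = (size * complexity_factor) / historical_productivity

def estimate_effort(size_units: float, complexity_factor: float,
                    productivity_units_per_hour: float) -> float:
    """Return estimated effort in hours for a work product."""
    return (size_units * complexity_factor) / productivity_units_per_hour

# Example: 40 requirements to analyze, a 1.25 complexity multiplier,
# and a historical rate of 2 requirements analyzed per hour.
effort_hours = estimate_effort(40, 1.25, 2.0)
print(effort_hours)  # 25.0
```

The point is not the particular arithmetic but that each input—size, complexity, productivity—is estimated or measured, not guessed at as a lump sum.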
The plan does not sufficiently address project risks and mitigation.
Nothing is without risk. Project risks can include hurricane season, timely availability of resources, volatile and changing business requirements, and timely completion of related projects, among other things. Virtually every aspect and section of the project plan may pose risks that require monitoring and mitigation.
Risks should be itemized, quantified, and prioritized for their potential impact, probability of occurrence, and timing. Work to monitor the risk and mitigate the impact should be planned and resources allocated on a contingency basis. If you don’t do this, you increase the risk that your project will be late and/or over-budget.
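A simple way to quantify and prioritize is to compute an exposure value for each risk (probability times impact) and rank by it. The sketch below assumes hypothetical risks and weights purely for illustration:

```python
# A minimal risk register sketch; entries and weights are hypothetical.
risks = [
    {"risk": "hurricane season", "probability": 0.3, "impact_days": 10},
    {"risk": "volatile requirements", "probability": 0.6, "impact_days": 15},
    {"risk": "late related project", "probability": 0.4, "impact_days": 20},
]

# Exposure = probability x impact; prioritize monitoring by exposure.
for r in risks:
    r["exposure_days"] = r["probability"] * r["impact_days"]

for r in sorted(risks, key=lambda r: r["exposure_days"], reverse=True):
    print(f'{r["risk"]}: exposure {r["exposure_days"]:.1f} days')
```

Ranked this way, "volatile requirements" (9.0 days of exposure) tops the list even though "late related project" has the larger raw impact—which is exactly the kind of insight an unquantified risk list hides.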
The plan does not sufficiently address project resource requirements
You can’t develop a realistic project schedule without knowing two things: work effort estimates and available resources to do the work.
This process can founder on the myth that resources are available full time to do the work. Therefore, a forty-hour estimate for effort becomes one week on the schedule. Resources are never available full time. There are myriad ongoing overhead tasks and events—time and status reporting, production system problems, staff meetings, employee events et al—that interrupt time on task. For lack of anywhere else to report the time, team members report it against assigned project tasks and slippage ensues.
A realistic plan devises an algorithm for computing realistic availability metrics. Depending on the organization, real availability is likely to be in the 60-80% range.
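One such algorithm, sketched below with an assumed 70% availability figure (a mid-range value; your organization’s number will differ), derives duration from effort instead of equating the two:

```python
# Deriving duration from effort and realistic availability.
# The 0.7 (70%) availability figure is an assumed mid-range value.

def duration_weeks(effort_hours: float, workers: int = 1,
                   hours_per_week: float = 40.0,
                   availability: float = 0.7) -> float:
    """Clock-time weeks needed to expend the estimated effort."""
    return effort_hours / (workers * hours_per_week * availability)

# A "one week" (40-hour) task for one person actually spans ~1.4 weeks.
print(round(duration_weeks(40), 2))  # 1.43
```

This is the forty-hours-equals-one-week myth in miniature: at 70% availability, a forty-hour task takes roughly a week and a half of calendar time.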
The plan should also identify specific non-routine equipment and software resources along with the effort and time required to assimilate and integrate them into ongoing project work.
The plan does not sufficiently address timely acquisition of knowledge and skills
Typically, a project team is assigned because proven individual and team skill sets are a good match for a project. In the ever evolving and rapidly changing IT landscape that will not always be a given. Minimally, project planning should assess skill needs and skilled-resource availability and plan time and resources for closing any gaps.
The plan does not sufficiently address stakeholder involvement
Project stakeholders may include customers, users, other business and IT organizations, other systems, other projects, vendors, and the project team.
Strong stakeholder commitments are critical to project success.
The plan should document:
- Stakeholders and specific commitments
- Specific roles and responsibilities, tied in to specific scheduled events in the plan
- Time, effort, and resources needed to ensure timely stakeholder involvement
The plan proceeds without strong commitments by all stakeholders
Without formally documented organizational commitments, authorized by appropriate levels of management, a project can flounder and founder through no fault of the project team.
The plan is not maintained throughout the project lifecycle
Project planning is too often regarded as a preliminary front-end to the project lifecycle. A dynamic instrument, the plan changes on a daily and weekly basis based upon actual work performed and actual events—planned or unplanned. That assumes ongoing monitoring and timely feedback on every critical component that has been cited in this post.
If you estimate it, you need to track it and revise your estimates as needed based upon actual results.
The planning process should be reiterated at prescribed milestones in the project lifecycle, at completion of phases in a traditional waterfall lifecycle, or thread-completion in a multi-threaded waterfall cycle, or completion of a cycle in a spiral or agile lifecycle. At each point, there is new and expanded software product metadata and verification data that will enable a greater level of detail in the next phase or cycle of development.
There is much overlap between project planning and project monitoring & control, and project management encompasses both processes throughout the life cycle of the project.
A shorthand way of saying all this is that a project can fail because there is no appropriate planning process. An appropriate planning process anticipates and satisfies all the potential causes of failure cited above.
Essential elements of work
For our immediate purpose, let’s define work as purposeful activity—mental or physical.
That definition scales up and down—we can speak of the work done making a mental decision, or writing a novel, or constructing a skyscraper. In all cases, it is purposeful mental or physical activity.
Who cares about work? Anyone who needs to estimate, plan, manage, or do it. The more you understand work in the abstract, the better prepared you will be to handle it in your life and on the job. As I wrote in a recent article, work might be virtually anything you do—including play.
I’m writing about work because I’m a process analyst and work is a kind of process or sub-process common to most other processes. Work-process is fundamental.
We typically itemize work as tasks—things to be done—on a To-Do list or in complex project or business-service plans. Most of us are concerned with tasks we need to do or have done, so let’s focus on tasks.
Let’s try something simple but complex enough to be worth a little planning.
Task: Wash your windows
You want to plan a housekeeping task–wash the windows and screens in your house. I’m going to keep it simple and assume you live in a single-story house with only one kind of window and no glass doors. What do you need to consider?
Available workers: yourself
Materials: Glass cleanser, All-purpose cleanser (for screens)
Resources: Bucket, ladder, spray bottle, wiper, squeegee, sponge, hose
Time available: one week (your in-laws are coming for a visit)
These estimates are arbitrary and serve only as examples of itemized estimation. Your estimates might be higher or lower.
Work Size (how much work?)
- How many windows/screens? 10 windows/10 screens
Work Effort (how much effort?)
- How much effort (in minutes) to remove each screen? 10 minutes
- How much effort to wash and rinse each screen? 10 minutes
- How much effort to replace each screen? 15 minutes
- How much effort to clean the inside of each window? 10 minutes
- How much effort to clean the outside of each window? 10 minutes
- Total Effort (Work Size x Work Effort) (550 minutes = 9 hours:10 minutes)
- Windows (10 x (10+10)) = 200 minutes
- Screens (10 x (10 + 10 + 15)) = 350 minutes
Miscellaneous Time (180 minutes = 3 hours)
- Breaks: 4 x 15 mins each = 60 minutes
- Set up for each window (ladder and tools) 10 min
- Total set up (windows x setup): (10 x 10) = 100 minutes = 1 hour:40 minutes
- Replace soap/water in bucket (5 minutes x 4 times) 20 minutes
WORK DURATION (How long will it take?)
Minimum Duration (Work Effort + Miscellaneous Time)
(9 hours:10 minutes + 3 hours) = 12 hours:10 minutes
This is the least clock time for one person (you) to do the work.
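For the spreadsheet-averse, the itemized estimates above can be recomputed in a few lines of Python (a sketch; the quantities are the sample estimates from the list, and the totals here include breaks and soap changes):

```python
# Recomputing the itemized window-washing estimates (one worker).
WINDOWS = 10

window_effort = WINDOWS * (10 + 10)           # inside + outside: 200 min
screen_effort = WINDOWS * (10 + 10 + 15)      # remove + wash + replace: 350 min
total_effort = window_effort + screen_effort  # 550 min

breaks = 4 * 15                    # 60 min
setup = WINDOWS * 10               # 100 min
soap = 4 * 5                       # 20 min
misc_time = breaks + setup + soap  # 180 min

duration_min = total_effort + misc_time
hours, minutes = divmod(duration_min, 60)
print(f"{total_effort} min effort + {misc_time} min misc = {hours}h {minutes}m")
```

Notice how cleanly effort (the 550 minutes of actual washing) separates from the miscellaneous clock time that inflates duration—the same distinction that trips up software project plans.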
If you had a second worker (and had the tools necessary) and split the work evenly, or if you divided screen and window work between two people, you could get the job done in a single workday.
Let’s assume you’re on your own and don’t intend to work a twelve-hour day doing nothing but windows. You could “chunk” the work and spread it over as many days as you have available before the in-laws arrive. Of course, each day increases the risk (weather, family emergency, whatever) that you won’t finish in time.
Let’s assume you decide to spread the work over two days, striking a minimal balance between your physical stress and your risk of bad weather.
Planned Work Duration: 2 DAYS
You block out about six hours on each of two days in your calendar.
There’s your simple work plan. I’ll take up tracking and managing work plans in later posts.
It took about one hour to plan twelve hours of work. You might not bother to plan a small task, but I wanted to illustrate the work planning process. When you are planning a project measured in workweeks, months, or even years, plan to allocate a proportionate amount of effort and time to do the plan.
Plan the planning process.
There are a couple of key points here to which I will return in future posts:
Effort and duration are both measures of work in time, but they are different. I’ve seen too many project plans that confuse effort and duration—in doing so, they often overestimate the effort and underestimate the duration.
As the saying goes, “Size matters!” You can’t effectively estimate effort without consideration of “how big” “how much” or “how many” things the work will address, whether the things are products (like cleaned windows) or services (like cleaning windows).
When considering work size, you also need to consider work complexity—both affect effort and duration. I’ll elaborate on work size and take up work complexity in later posts.
Newer is not necessarily better.
I have been a Mac user since the first Macintosh appeared in 1984—with its tiny monochrome screen, its “whopping” 128K (1K being a thousand characters) of memory, a micro-diskette drive for disks that were not floppy but compact and rugged and could store 400K—and the Mac sported that new thing, the mouse.
I had not previously been tempted to buy a personal computer with their command lines, textual interfaces and lack of any capabilities not already better satisfied by the powerful mainframe computers I used in my work. Then I saw the ads, the graphics, and the point and click menus. I was hooked.
I never did much with that first Mac except write on it, and I loved its WYSIWYG (what you see is what you get) displays, its ability to change format and fonts without embedding any cryptic codes. I fancied myself a “desktop publisher”.
Then, over the years, Macs evolved. Their memories expanded, augmented by internal hard drives and micro-diskettes with greater storage capacity. I recall being astounded by my first internal drive that could store a megabyte of data—a million characters! It could hold a book!
Mac cases changed—repeatedly—growing and shrinking—screens lit up with color and grew in size. The microchip revolution was raging and each Mac could hold more data and process it faster—but all that was also true of Mac’s rivals running Windows.
I remained a loyal Apple/Mac customer because it was easier to use and its basic applications—for mail, calendars—and then music—iTunes—delighted and made me more productive at home. At work, I still used Windows computers. They were difficult and cumbersome, requiring a support infrastructure to keep them maintained and operational. I had no trouble keeping my Macs humming along.
What ultimately hooked me was the rollout of desktop video editing. I had experimented with 16mm filmmaking as a youth and never pursued it because it was too expensive—for cameras, lenses, editing benches, stock and processing. I was unimpressed by VHS video, even as the cameras became more sophisticated and less expensive.
Two things happened around the same time—affordable feature-rich digital video prosumer camcorders and Apple’s Final Cut Pro (FCP) desktop editing tool. Now, I could shoot exquisite video and edit it on my desktop. I renewed my vow of customer loyalty to Apple.
FCP and the Mac co-evolved until the application became a virtual studio suite—Final Cut Studio—and the desktop computer became a self-contained 27” high-definition screen. My early experiments with digital video—short films—evolved into a growing production business and my second career.
I had everything I needed on my desk to write screenplays, to edit, finish and distribute High Definition video—all integrated seamlessly with the Mail, iCal, Office, and Quicken apps I used to run my business. I was a one-man band and studio, and loved it.
It all flowered for me in Final Cut Studio 3 and Final Cut Pro 7 running on Mac OSX 10.6.8—my beloved Snow Leopard operating system.
Alas, then Steve Jobs departed and Apple became a phone and media company. First, Apple betrayed its loyal base of Final Cut users, who had been integral to the Mac’s success, ceasing to support FCP and replacing it with a jazzed-up iMovie that they audaciously call FCPX. The “replacement” lacked key features and required a completely new learning curve. Apple’s loss was Adobe’s gain as video producers and editors migrated en masse to Adobe Premiere and Adobe Creative Suite. I stayed gamely with FCP, even as Apple ceased to support it. It still worked, still did/does everything I need—until it didn’t.
Apple’s Mac architecture and operating systems continued to evolve, and at some point FCP would not work anymore, so I stayed with Snow Leopard and eschewed system upgrades on my studio Mac.
When that Mac died, I replaced it with a refurbished vintage iMac that runs FCP under Snow Leopard; I’ve done so several times since. Every day I back up my disk image onto an external drive. When the computer dies, I replace it in kind and resurrect my familiar work environment from the backup.
I’m not a born-again Luddite. I still buy new computers. I have a newer iMac running the current Apple OS, mainly because other applications I use—like Chrome—no longer run on my old platform. I like Mac OS less with every release—it gets less accessible, requires more machine to operate, and runs more slowly.
My old Apple Mail program is no longer reliable, but I find the current version on my “new” iMac lacks features I’ve come to require. So, I’ve switched to Web Mail. The new version of iCal lacks features I depend on for time management—like an integrated To Do list that lets me prioritize and sequence tasks before I commit them to the calendar. I still use iCal on my trusty “old” Mac.
I recently purchased a Windows 10 (yes, Windows) laptop because it’s a better buy than a Mac laptop, runs faster, and seems to require less support than the older Windows machines I used in my corporate IT days. My next personal desktop may not be a Mac, for the same reasons.
Like corporations that must keep old mission-critical business systems running on real or simulated “legacy” environments, so do I—for writing, video work, and trusty old iCal on my Legacy iMac. As long as I can find a Mac that will run it, I’ll keep the legacy alive.
There is no such thing as “you.” There are no things—only processes.
What we think of as things are actually our mental snapshots of processes that are continuously unfolding in time.
A rock seems to be a thing because it seems unchanging—it’s actually “rocking” along very slowly through its own life-cycle as a small sub process within larger geological processes.
Heraclitus, a pre-Socratic philosopher, is most often cited for his maxim, “No man ever steps in the same river twice.” It’s probably one of the Ur-philosophical memes of western civilization. You’ve read it or heard it from philosophers, gurus, and your stoner friends. Maybe you’ve even said it yourself.
Heraclitus used “river” to illustrate the primacy of process—ever-changing process. That “man” stepping into the river is a process too. The next time that “man” steps in that “river” neither will be the same. Process—“everything”— is always irreversibly changing—subject to “time’s one-way arrow.” All is change and motion.
You are a process. As Buckminster Fuller famously said, “I seem to be a verb.” We are all verbs.
You are a single instance of a human being—a process with an approximate cycle time of 80 years—from conception through death. Like every process, “you” are always changing—you are not the same physical or mental human being that began reading this post. You change, in countless ways, on every level of your being, every moment. Being human is the most complex process in the known cosmos—worthy of profound awe and respect.
There is one primary parent process in our reality—the cosmos itself. It emerged from a single point as a big bang that’s banging still and will be through its own cosmic life cycle, which physicists are still trying to figure out.
Every phenomenon within this universe—space, time, matter, gravity, galaxies, suns, planets, life, you—is a sub-process that has emerged from and within that ongoing explosion. Sub-processes emerge and unfold within parent processes. They are nested like Russian dolls.
You are nested within your family process, which is nested within human processes—political, socio-economic, and cultural—that are nested within myriad levels of Earth process—and so it goes up to the cosmos—the BIG DOLL. It’s not neat and hierarchical—nature loves networks. Processes interact and overlap in complex webs of relationship. A process may have many parents, siblings, and relationships, and likely has sub processes nested within it—just like you do.
Yeah, this is “deep stuff,” worthy of reflection, and it’s all been said before, one way or another. So, what’s my point here? What’s your takeaway as a human being, a systems analyst, engineer, manager—or whatever?
Process is primary; systems are not.
Process antecedes, precedes, and supersedes any system we impose upon it. A guru once observed that “All systems are foolish.” In the sense that we think systems can actually control process, that may be true.
Every thing is a sub-process and unfolds within a network of related processes.
This is—or should be— one of your primary axioms as you try to understand any process—anything. Nothing, no process, exists in a vacuum.
Don’t make the mistake of thinking you can understand any process in isolation—yourself or any person, any “thing”, any situation, any natural process or any human system.
You hear much lately about how “Content is king.” I would argue that, “Context is king.”
How many people are dysfunctional because their personal and interpersonal processes are askew or not synching?
How many relationships have you seen fail because one or both parties failed to comprehend the family, social, or cultural processes within which they emerged? Those parent processes shaped them and—like it or not—still operate within them.
How many business ventures fail because entrepreneurs fail to understand the community, market, or legal system within which the venture must operate?
How many political campaigns fail for the same reasons?
Throughout my IT career, I saw numerous software projects fail because they were developed without due consideration, understanding, and integration of the project’s process context. The software process used in development, the business process being automated, a parent application to be integrated, an operating system, a network, the sponsor, the regulatory, social, and economic systems in which it must operate—all need to be considered in a successful software project—or software venture.
This may seem like common sense and, in my experience, it’s anything but. What’s your take? Leave a comment . . .