Is Missing Logic an illusion of schedule quality?

Scheduling consultants often mention “Missing Logic” as an obvious schedule quality check.
I believe it is not as straightforward as it seems. This check may create an illusion that the dependency data is of good quality and hide critical issues that can delay the project.

The rule of thumb is “Each activity, except the first and last activities, must have a predecessor and a successor”. It is logical to have a metric that verifies whether this rule is followed. This metric is known as the “Missing Logic” or “Open ends” metric. If a schedule has open-ended activities, it requires attention.
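For illustration, here is a minimal sketch of such a check, assuming the schedule is represented simply as a list of activities plus (predecessor, successor) pairs. The network and activity names are hypothetical and chosen to match the example used later in this article.

```python
def open_end_activities(activities, dependencies, start, finish):
    """Return activities (other than start/finish) missing a predecessor or a successor."""
    has_pred = {succ for _, succ in dependencies}   # activities that appear as a successor
    has_succ = {pred for pred, _ in dependencies}   # activities that appear as a predecessor
    return [a for a in activities
            if a not in (start, finish)
            and (a not in has_pred or a not in has_succ)]

activities = ["A", "B", "C", "D", "E", "F", "G"]
dependencies = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"),
                ("D", "E"), ("D", "F"), ("E", "G"), ("F", "G")]

print(open_end_activities(activities, dependencies, start="A", finish="G"))  # -> [] : "Green"
```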

However, even if every activity has a predecessor and a successor, it DOES NOT mean there are no “lost” or “broken” dependencies in the schedule.

Even worse, a “lost” dependency may impact the critical path and shorten the schedule, yet the “Open ends” metric would still be “Green”. This creates an illusion that the schedule has no dependency issues and that no additional check is required. The “lost” and “broken” dependencies would not be identified until it is too late and the project delivery dates are already impacted.

Broken Dependency

A “broken” dependency is a dependency with incorrect characteristics (type, lag, etc.).

Path Convergence & Path Divergence

Any schedule is likely to have activities with multiple predecessors and/or successors. Because of this, it is possible that, after a critical dependency has been removed, the schedule still has no open-ended activities.

Path Convergence
A relationship in which a scheduled activity has more than one predecessor.

Path Divergence
A relationship in which a scheduled activity has more than one successor.
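As a rough sketch under the same assumed (predecessor, successor) representation, convergence and divergence points can be found by counting how many predecessors and successors each activity has:

```python
from collections import Counter

def convergence_divergence(dependencies):
    """Activities with more than one predecessor (convergence) or successor (divergence)."""
    pred_counts = Counter(succ for _, succ in dependencies)
    succ_counts = Counter(pred for pred, _ in dependencies)
    convergence = {a for a, n in pred_counts.items() if n > 1}
    divergence = {a for a, n in succ_counts.items() if n > 1}
    return convergence, divergence

links = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"),
         ("D", "E"), ("D", "F"), ("E", "G"), ("F", "G")]

print(convergence_divergence(links))   # convergence {'E', 'G'}, divergence {'A', 'D'}
```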

Original schedule:

Critical Path: A B D E G

“Missing Logic” check: Green

Then a scheduler by mistake deletes the critical dependency between activities D and E. The schedule now has a new critical path and a shorter duration, but the “Missing Logic” metric is still “Green” because all activities (except the first and last) still have no open ends.
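Below is a self-contained sketch of this scenario. The network, durations and the deleted D→E link are assumptions made for illustration (the original diagram is not reproduced here); the point is that deleting the dependency shortens the critical path while the open-ends check still returns nothing.

```python
from collections import defaultdict

def critical_path(durations, dependencies):
    """Longest path through the network (simple forward pass; FS links only, no lags)."""
    preds = defaultdict(list)
    for pred, succ in dependencies:
        preds[succ].append(pred)

    finish, best_pred = {}, {}

    def early_finish(act):
        if act not in finish:
            options = [(early_finish(p), p) for p in preds[act]] or [(0, None)]
            ef, best_pred[act] = max(options)          # latest-finishing predecessor drives the path
            finish[act] = ef + durations[act]
        return finish[act]

    end = max(durations, key=early_finish)             # activity that finishes last
    path, act = [], end
    while act is not None:
        path.append(act)
        act = best_pred.get(act)
    return list(reversed(path)), finish[end]

durations = {"A": 1, "B": 5, "C": 2, "D": 5, "E": 5, "F": 2, "G": 1}
links = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"),
         ("D", "E"), ("D", "F"), ("E", "G"), ("F", "G")]

print(critical_path(durations, links))          # (['A', 'B', 'D', 'E', 'G'], 17)

broken = [l for l in links if l != ("D", "E")]  # the scheduler deletes D -> E by mistake
print(critical_path(durations, broken))         # (['A', 'B', 'D', 'F', 'G'], 14): shorter!

# ...yet the open-ends check is still clean: every activity keeps a predecessor
# and a successor, so the "Missing Logic" metric stays "Green".
has_pred = {s for _, s in broken}
has_succ = {p for p, _ in broken}
print([a for a in durations if a not in ("A", "G")
       and (a not in has_pred or a not in has_succ)])   # -> []
```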

This example shows that we need another approach to manage the quality of dependencies.

The only reliable way to check the quality of dependencies is to compare the schedule’s dependency list against Corporate Norms or another trusted set of dependencies, for example the dependencies in a baseline schedule.
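Such a comparison boils down to a diff between two dependency lists. The sketch below is a minimal illustration; the attribute set (type, lag) is deliberately small and would in practice include every dependency characteristic exported from the scheduling tool.

```python
def dependency_diff(current, baseline):
    """Report dependencies that were lost, newly added, or changed characteristics ("broken")."""
    cur = {(p, s): attrs for p, s, attrs in current}
    base = {(p, s): attrs for p, s, attrs in baseline}
    return {
        "lost":   sorted(base.keys() - cur.keys()),
        "added":  sorted(cur.keys() - base.keys()),
        "broken": sorted(k for k in cur.keys() & base.keys() if cur[k] != base[k]),
    }

baseline = [("D", "E", {"type": "FS", "lag": 0}),
            ("D", "F", {"type": "FS", "lag": 0}),
            ("E", "G", {"type": "FS", "lag": 2})]
current  = [("D", "F", {"type": "FS", "lag": 0}),
            ("E", "G", {"type": "SS", "lag": 2})]      # D -> E lost, E -> G type changed

print(dependency_diff(current, baseline))
# {'lost': [('D', 'E')], 'added': [], 'broken': [('E', 'G')]}
```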

As Primavera and MS Project users don’t have direct access to a list of dependencies, one option is to load the schedule into a tool that has this capability.

Spider Project provides access to a list of all dependencies with all dependency characteristics: predecessor and successor codes and titles, dependency type, lag, lag units, lag type, lag calendar, etc. Dependency quality analysis can be performed within the tool, or the dependencies can be exported to Excel or BI tools.
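As a trivial illustration (not tied to any particular tool’s export format), a dependency list held in memory can be written to CSV for analysis in Excel or a BI tool; the columns below are a hypothetical subset of the attributes listed above.

```python
import csv

# Hypothetical dependency records mirroring some of the attributes listed above.
dependencies = [
    {"predecessor": "D", "successor": "E", "type": "FS", "lag": 0, "lag_units": "d"},
    {"predecessor": "E", "successor": "G", "type": "SS", "lag": 2, "lag_units": "d"},
]

with open("dependencies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(dependencies[0]))
    writer.writeheader()
    writer.writerows(dependencies)
```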

Conclusion

The “Missing Logic” metric is necessary but not sufficient. It may create an illusion that the schedule dependencies are in good shape.

Dependency quality analysis has to be performed against reliable dependency data and must cover both “lost” and “broken” dependencies.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

Schedule Quality vs Schedule Maturity

Project participants very often mix up the concepts of schedule maturity and schedule quality. While these two aspects are interdependent, misunderstanding the difference often causes conflicts between the project manager, the scheduler and the PMO team.

The target schedule maturity level defines:

  • Scheduling tools
  • Processes
  • Methods and
  • Techniques

for schedule development, maintenance, optimisation and reporting. It also helps to define skills and capacity requirements for schedule development and maintenance.

Schedule Quality is a set of metrics to measure the critical parameters of the scheduling model.

Project performance indicators are used to measure project progress (EVM, Success Probability Trends, Schedule KPIs, etc.).

Schedule maturity defines:

  • Methods that are going to be used to measure project progress and performance
  • Tools that support these methods
  • Data points required for these methods
  • Minimum required quality thresholds of these, or related, data points

Schedule quality defines the reliability of performance indicators.

Let’s consider what may happen when schedule quality is measured without a defined schedule maturity level. Many PMOs and master schedulers understand that poor schedule quality leads to low project performance, and they are keen to address this by implementing schedule quality assessments.

However, if the required schedule maturity level is not defined, they actually don’t know what to measure. Some would recommend using the DCMA 14-point schedule assessment to measure schedule quality, as it is a well-recognised approach.

Well, firstly, they are not able to explain how a standard defined to control CONTRACTOR schedules in the DEFENCE industry in the USA is actually applicable to their type of projects, industry and country.

DCMA Metric N4: Finish-to-Start (FS) relationships should make up at least 90% of all logic links.

If some projects in a portfolio should, by their nature, have more than 10% of SS, FF or SF dependencies, the metric is not just incorrect, it is actually misleading: projects with incorrect logic will be shown as green, and projects with correct logic will be shown as red.
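To make the point concrete, the metric reduces to the share of FS links compared against a fixed threshold. In the hypothetical sketch below every SS and FF link may be perfectly legitimate, yet the check reports “red”.

```python
from collections import Counter

def fs_share(dependencies):
    """Share of Finish-to-Start links among all relationships."""
    counts = Counter(dep_type for _, _, dep_type in dependencies)
    return counts["FS"] / sum(counts.values())

# Illustrative links only: assume every SS/FF relationship here is correct by design.
links = [("A", "B", "FS"), ("B", "C", "SS"), ("C", "D", "FS"), ("D", "E", "FF")]

share = fs_share(links)
print(f"{share:.0%}", "green" if share >= 0.90 else "red")   # 50% red
```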

Secondly, they are not aware that the DCMA 14-point schedule assessment is not a schedule quality assessment: 3 of the 14 metrics measure project performance, not the quality of the schedule.

Lastly, the required schedule maturity defines which of the schedule quality metrics are applicable.

In one of my engagements, I have seen a master scheduler force all schedulers to populate resources for every activity based on DCMA metric N10 (all incomplete tasks should have resources assigned), even for vendor-driven activities. The schedulers spent a lot of effort just to keep the metric green. At the same time, some of these schedules had missing critical dependencies and, as a result, incorrect critical paths. However, Metric N1 (the number of activities missing a predecessor, a successor or both should not exceed 5%) showed the logic as “green” because the missing dependencies did not exceed 5% of activities.
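Both checks reduce to simple threshold tests, which is exactly why they can stay “green” while the critical path is wrong. The counts in the sketch below are made up for illustration.

```python
def dcma_metric_1(total_activities, open_ended_activities, threshold=0.05):
    """Metric N1: share of activities missing a predecessor and/or successor within threshold."""
    return open_ended_activities / total_activities <= threshold

def dcma_metric_10(incomplete_tasks, incomplete_tasks_with_resources):
    """Metric N10: every incomplete task should have resources assigned."""
    return incomplete_tasks_with_resources == incomplete_tasks

# Made-up counts: 4% open ends and full resource coverage -> both checks "green",
# even though a deleted critical dependency (which creates no open end) has
# already made the critical path wrong.
print(dcma_metric_1(total_activities=500, open_ended_activities=20))              # True
print(dcma_metric_10(incomplete_tasks=320, incomplete_tasks_with_resources=320))  # True
```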

  • In many cases, schedule quality can be controlled simply via a set of filters implemented in the scheduling tool and does not require an additional expensive tool. Schedule optimisation, by contrast, often cannot be completed in Primavera or MS Project alone, without a dedicated schedule optimisation tool. (Link to tools post)
  • A project must have a fit-for-purpose schedule maturity level to achieve its targets. A maturity level that is lower or higher than required shifts the focus away from project delivery and onto non-critical project activities.

The target level of maturity defines what a “good quality” schedule actually means. Without such a definition, schedule quality reviews are full of arguments and subjectivity.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma