Schedule Quality Checks. Lags and Leads

We already discussed two types of schedule quality metrics: Missing Dependencies and Activity Constraints. Another important area to control is activity lags and leads.

A project delivery schedule must reflect real-life processes and the way the work is going to be performed. Processes often include essential shifts between parallel or sequenced activities. The shift between activities is called a dependency ‘Lag’ or ‘Lead’.

Lag is the amount of wait time between commencement and completion points of different activities.

Lead is the amount of acceleration time between commencement and completion points of different activities.

In simple words, a Lead is a negative Lag.

Is it good or bad to have Lags in the schedule?

Activity lags and leads are extremely important as they allow project delivery to be optimised by performing tasks in parallel or overlapping activities. Project managers should look for every opportunity to use lags and leads in their schedules.

It is hard to imagine a DELIVERY schedule without lags. If a schedule doesn’t contain any lags, it is a good indicator that the schedule is potentially used only for reporting, not for delivery.

Also, there are many situations where dependencies with leads may be required. It could be a contractual obligation or an ATAP (As Timely As Possible) scenario.

Example: The contract states that the Notification needs to be issued 30 days prior to the completion of construction.

However, lags/leads require special attention for a number of reasons:

  • They could be used to hide contingency or delay;
  • Some scheduling systems (like Primavera and Microsoft Project) don’t provide front-end access to the dependency table and lack features to control dependencies with lags. This adds significant schedule risk to a project.

So,

  • All lags must be explained. Only authorised and accepted lags should be in a schedule;
  • A clear process has to be established to manage dependencies with lags.

The Schedule quality assessment has to include metrics to control lags and leads.

Before reviewing the Lag metrics, let’s understand how lags work in real life and how scheduling tools could help us to simulate work with dependency lags.

    Lags & Leads

There are 4 main types of dependencies, and scheduling tools allow each of them to have a lag or a lead. This gives us 8 theoretical scenarios for applying lags/leads in a schedule.

However, not all of these scenarios are actually valid.

For example, the ‘SS – 5 days’ scenario has a logical conflict. On one hand, the SS dependency says that the successor can only start after the predecessor has started. On the other hand, the 5-day lead says that the successor can start before the predecessor.

These scenarios have to be eliminated from the schedule, and a schedule quality metric that identifies them becomes very handy.
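As an illustration, here is a minimal sketch (Python, with a hypothetical dependency list) of a check that flags the conflicting combination described above, an SS dependency carrying a lead:

```python
# Minimal sketch: flag dependencies whose lead (negative lag) contradicts the
# dependency type, e.g. an SS dependency with a negative lag ("SS - 5 days").
# The dependency list below is hypothetical illustration data.

dependencies = [
    {"pred": "A", "succ": "B", "type": "FS", "lag_days": 0},
    {"pred": "A", "succ": "C", "type": "SS", "lag_days": -5},   # logical conflict
    {"pred": "C", "succ": "D", "type": "FF", "lag_days": 2},
]

def conflicting_leads(deps):
    """Return SS dependencies with a negative lag: the successor would start
    before the predecessor starts, contradicting the SS relationship."""
    return [d for d in deps if d["type"] == "SS" and d["lag_days"] < 0]

for d in conflicting_leads(dependencies):
    print(f"Review: {d['pred']} -> {d['succ']} ({d['type']}{d['lag_days']:+} days)")
```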

    Time and Volume Lags

By nature, there are two types of lags in projects: Time Lags and Volume Lags.
Time lags are required when, by PROCESS, there is a requirement to have a “wait” TIME between activities.

Volume lags are required when, by PROCESS, a successor activity can commence only after the required volume of the predecessor’s work is achieved.

When a ‘volume lag’ is represented by a ‘time lag’, it may create an unexpected issue. If the predecessor activity has started but hasn’t progressed as planned (and the minimum required volume is not achieved), the schedule will incorrectly show that the successor activity could already commence.
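A small, tool-agnostic sketch of the difference (the figures are hypothetical): the time-lag proxy releases the successor by the calendar, while the real volume condition does not:

```python
# Hypothetical illustration of a volume lag vs. a time lag used as its proxy.
# The successor may start once 40% of the predecessor's volume is done,
# which was planned to take 4 working days.

planned_time_lag_days = 4          # time lag used to represent the volume lag
required_volume_share = 0.40       # the real (volume) condition

def successor_may_start(days_since_pred_start, pred_volume_done_share):
    time_lag_says = days_since_pred_start >= planned_time_lag_days
    volume_says = pred_volume_done_share >= required_volume_share
    return time_lag_says, volume_says

# Predecessor started 4 days ago but progressed slower than planned (only 25% done):
time_ok, volume_ok = successor_may_start(4, 0.25)
print(f"Time-lag schedule says successor may start: {time_ok}")    # True (incorrect)
print(f"Volume condition actually satisfied:        {volume_ok}")  # False
```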

    Lag Calendars

    The duration of a ‘Time Lag’ depends on the applied calendar.

    Example:

A 24-hour lag may mean 1 day with a 24*7 calendar and 3 days with an 8*5 calendar.
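A minimal sketch of this arithmetic, assuming the lag is stored in hours and each calendar is described simply by its working hours per day:

```python
# Convert a lag given in hours into calendar-specific working days.
import math

def lag_in_days(lag_hours, hours_per_day):
    return math.ceil(lag_hours / hours_per_day)

print(lag_in_days(24, 24))  # 24*7 calendar -> 1 day
print(lag_in_days(24, 8))   # 8*5 calendar  -> 3 days
```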

A lag may have a calendar different from the predecessor’s and successor’s calendars.

    Example:

A home renovation project requires a heavy sofa to be delivered to a room after the painting of all walls in this room is completed:

• The painting can be performed between 5 pm and 10 pm (Mon – Sat);
• The delivery is only possible between 8 am and 12 pm (Mon – Fri);
• The paint technically requires 16 hours to dry.

In a schedule, this is going to be an “FS+16 hours” dependency, but the predecessor, the successor and the lag calendars are all going to be different.
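To make the example concrete, here is a hedged sketch with hypothetical dates: the 16-hour lag elapses on its own round-the-clock calendar, and the delivery start is then rolled forward to the next delivery window (a real tool would also respect the painting calendar when calculating the predecessor finish):

```python
# Hypothetical sketch of the renovation example: painting finishes, the paint
# dries for 16 elapsed hours (the lag's own 24h calendar), and the delivery is
# rolled forward to the next delivery window (8am-12pm, Mon-Fri).
from datetime import datetime, timedelta

def next_delivery_slot(t):
    """Roll a datetime forward to the next moment within 8am-12pm, Mon-Fri."""
    while True:
        if t.weekday() < 5:                       # Monday..Friday
            if t.hour < 8:
                return t.replace(hour=8, minute=0)
            if 8 <= t.hour < 12:
                return t
        t = (t + timedelta(days=1)).replace(hour=0, minute=0)

painting_finish = datetime(2023, 6, 23, 22, 0)    # Friday 10 pm (hypothetical date)
paint_dry = painting_finish + timedelta(hours=16) # lag on a 24h calendar -> Sat 2 pm
delivery_start = next_delivery_slot(paint_dry)    # next window -> Monday 8 am
print(delivery_start)                             # 2023-06-26 08:00:00
```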

    It is hard to simulate such a scenario in Microsoft Project or Primavera. Below we will review a potential workaround.

    Lags in Scheduling Systems

    Scheduling systems have different levels of maturity and features to manage lag.

     

    Primavera

Primavera doesn’t support Volume lags and offers four options to manage lag calendars:

     

     

    • Predecessor Activity Calendar,
    • Successor Activity Calendar,
    • 24-Hour Calendar, and
    • Project Default Calendar.

     

For whatever reason, the lag calendar setup has to be done in the ‘Schedule Options’ dialogue window. It is a project characteristic that has to be defined before schedule development commences, not during the calculation.

• Unfortunately, users have to choose one option that will be applied to the whole schedule. It is not possible to apply a predecessor calendar lag in one part of the schedule and a successor calendar lag in another part;
• When an external schedule is analysed, it is important to know which setting was used by the external party. Changes to this setting may result in different activity start and finish dates and even change the Project Critical Path;
• Primavera users must be extremely careful when opening multiple projects, as the Lag Calendar option could be different for each project. All project options are permanently changed to match the Default Project, so some of your projects may not calculate the same way as they did before being opened together;
• Primavera doesn’t provide front-end access to the dependency table, which makes it extremely hard to control dependencies with lags and leads. It is possible to generate a report of activities with dependency lags, but users then have to manually find and review each activity one by one.

    Microsoft Project

    Microsoft Project doesn’t support Volume lags and doesn’t provide options to manage lag calendars. The lag duration is always calculated on the successor calendar.

Microsoft Project offers users the option of % Time lags. While it may sound like a good workaround for a volume lag, users have to be extremely careful with this feature. The tool calculates the duration of a ‘% lag’ in a very unusual way: it uses the predecessor duration in hours as the 100% base but applies the successor calendar to calculate the actual duration of the lag.

     

    Example:  

‘Activity A’ has the standard (8h*5d) calendar and ‘Activity B’ has a 24-hour calendar. These two activities have an SS+50% dependency:

It is hard to understand why the lag has a duration of 2 days, as shown in the ‘Start’ column and the Gantt Chart: 50% of ‘Activity A’ is 5 days and 50% of ‘Activity B’ is 24 hours. Neither of these options gives us 2 days!

    In this case, Microsoft Project calculates the dependency lag as:

• 50% of the predecessor duration in hours: 8h * 10d * 50% = 40 hours;
• The successor’s 24-hour calendar is then applied: 16 hours on 27/06 (the predecessor starts at 8 am) + 24 hours on 28/06 = 40 hours. So, ‘Activity B’ starts at 12 am on 29/06:
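The same arithmetic can be replicated with a short sketch (the year is hypothetical; only the 27/06–29/06 dates from the example are used):

```python
# Replicating the example: SS+50% lag, predecessor 'Activity A' = 10 days on an
# 8h*5d calendar, successor 'Activity B' on a 24-hour calendar.
from datetime import datetime, timedelta

pred_duration_hours = 10 * 8            # Activity A: 10 days * 8 working hours
lag_hours = pred_duration_hours * 0.50  # 50% of the predecessor duration = 40 hours

pred_start = datetime(2022, 6, 27, 8, 0)               # A starts at 8 am on 27/06
succ_start = pred_start + timedelta(hours=lag_hours)   # lag elapses on B's 24h calendar
print(succ_start)   # 2022-06-29 00:00 -> Activity B starts at 12 am on 29/06
```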

     

I think this is very confusing and applicable to real-life scenarios only when the predecessor, successor and lag calendars are the same.

    Spider Project

    Spider Project supports Time and Volume Lags and allows specifying a calendar for each lag.

    Spider Project provides front-end access to the dependency table including lags attributes and notes.

Users can add a note that explains the reason for each lag, and can identify changes in lags by adding a ‘Lag difference from compared schedule’ column and applying a filter in the table.

    Additionally, when Spider Project users apply a filter in the dependency table, the filtered dependencies could be reflected in the Gantt Chart view. For example, a user may want to show only activities with changed lags.

    Schedule Export Issue

    The misalignment in approaches to managing lag calendars in different scheduling tools creates a massive issue when a schedule needs to be exported from one tool to another.
Here is one example of such an issue:

    https://support.oracle.com/knowledge/Oracle%20Cloud/2513929_1.html

I can offer a workaround for both Primavera and Microsoft Project users to apply lag calendars correctly: if a technical milestone is added between the predecessor and the successor, it is possible to give the milestone a calendar that represents the lag calendar.

This workaround should solve the lag calendar issue but adds unnecessary complexity to the schedule. Hopefully, one day Oracle and Microsoft will add lag calendars to their scheduling tools, as already implemented in Spider Project.

     

    Points to consider

• Leads require special attention as in some cases they may result in reverse logic, where a successor activity is required to start (or even finish) before the predecessor activity has commenced.
• It is possible but not easy to find dependency lags in Primavera and Microsoft Project schedules. The general consensus of Primavera and Microsoft Project experts is that the use of relationship lag should be minimal, if it is used at all. Such suggestions solve the control issue but limit a project’s ability to simulate work as it is going to be performed in real life. Other consultants have proposed workarounds to identify activities with lags via reports:

    https://projectcontrolsonline.com/blogs/74-how-to-find-relationship-lag-in-primavera-p6/676-how-to-find-relationship-lag-in-primavera-p6

Reports are useful but still require a lot of effort. Based on the report, users still need to identify each activity manually in the schedule, one by one.

     

    Replacing lags and Leads with activities

    Another workaround recommended by consultants is replacing Lags/Leads with activities.

This might not be a bad idea, but it is important to understand that such an approach solves one problem and creates many others. Lags, compared to activities, don’t represent project work, they don’t have associated costs or resources, and they should be excluded from project performance. This workaround may therefore impact the calculation of project progress and EVM metrics. It is hard to distinguish real activities from ‘lag activities’. Additionally, this approach requires changes to other schedule quality metrics (‘activities without resources’, for example).

     

How to use an activity instead of a lead?

Leads can be replaced with activities by creating a ‘lead activity’ with an FF dependency to the predecessor and an SS dependency to the successor.

    DCMA 14 Points Schedule Assessment Lead/Lag metrics

The DCMA 14 Points Schedule Assessment includes two metrics to control lags and leads.

    Metric 2: Lead
    The DCMA requires no leads (0%) to be used in the schedule.

    Metric 3: Lags
The total number of activities with lags should not exceed 5% of the activities within the schedule.

As described above, it is very beneficial to use lags and leads as they may help to shorten project delivery. The number of lags/leads in a schedule depends on the nature of the project; 5% is just an indicator to review the schedule closely, not an automatic schedule quality issue. I would recommend planners and schedulers learn how to control lags and leads instead of restricting them in the schedule.
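As an illustration, here is a minimal sketch of the two DCMA checks over a hypothetical dependency list (each successor of a lagged or led relationship is counted once; the 5% figure is treated as a review trigger, not a pass/fail gate):

```python
# DCMA Metric 2 (leads) and Metric 3 (lags) over a hypothetical dependency list.

activities = ["A", "B", "C", "D", "E"]
dependencies = [
    {"pred": "A", "succ": "B", "lag_days": 0},
    {"pred": "B", "succ": "C", "lag_days": 3},    # lag
    {"pred": "C", "succ": "D", "lag_days": -2},   # lead
    {"pred": "D", "succ": "E", "lag_days": 0},
]

with_lags  = {d["succ"] for d in dependencies if d["lag_days"] > 0}
with_leads = {d["succ"] for d in dependencies if d["lag_days"] < 0}

lag_pct  = 100 * len(with_lags) / len(activities)
lead_pct = 100 * len(with_leads) / len(activities)

print(f"Activities with lags:  {lag_pct:.0f}% (DCMA review trigger: > 5%)")
print(f"Activities with leads: {lead_pct:.0f}% (DCMA target: 0%)")
```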

    Filters

    It is possible to build a filter to identify activities with predecessor (or successor) “lag” (or lead) in Microsoft Project, but not in Primavera.

     

    Activities with Lags (MS Project)

     

    Activities with Leads (MS Project)

The ‘Predecessor’ and ‘Predecessor Details’ columns in Primavera don’t show the plus (“+”) symbol for lags. The minus symbol (“-”) for leads is shown, but in these columns Primavera shows the activity code (not the activity ID), and the code itself may contain a minus (“-”) symbol.
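One workaround is to parse the exported ‘Predecessors’ text directly. The sketch below assumes the common “ID + type + offset” notation (e.g. “14FS+3 days”); the exact export format should be verified for your tool and settings:

```python
# Parse an exported "Predecessors" column to find relationships with lags/leads.
# Assumes the common "ID + type + offset" notation, e.g. "14FS+3 days" / "7SS-25%".
import re

DEP_PATTERN = re.compile(r"(\d+)(FS|SS|FF|SF)([+-][^,]+)?")

def lags_and_leads(predecessor_text):
    """Return (pred_id, type, offset) tuples for entries that carry a lag or lead."""
    found = []
    for pred_id, dep_type, offset in DEP_PATTERN.findall(predecessor_text or ""):
        if offset:
            found.append((pred_id, dep_type, offset.strip()))
    return found

print(lags_and_leads("12FS+3 days,7SS-25%,4FF"))
# [('12', 'FS', '+3 days'), ('7', 'SS', '-25%')]
```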

    Alex Lyaschenko

    PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

    Monte Carlo Simulation Challenges. Simplicity Challenge.

    Before we start our review of Monte Carlo Simulation (MCS) Challenges, we have to clarify what MCS is and how it is applied in project management.

    Monte Carlo Simulation Method

    A Monte Carlo Simulation is a model used to predict the probability of different outcomes when the intervention of random variables is present. Monte Carlo simulations help to explain the impact of risk and uncertainty in prediction and forecasting models.

To be useful, a model has to be dynamic and represent the true relationships between inputs and outputs.

In the project management context, the model, its inputs and its outputs are:

    Model

The Project Delivery Plan is used as the model for project planning and delivery.

I don’t like the term “schedule” in the context of MCS. For many project practitioners, this term is associated with time management, which is only one component of project delivery.

    Inputs

    Apart from the Golden Triangle, which includes:

    • Scope
    • Time
    • Cost

    a project delivery plan also integrates:

    • Risks (including uncertainties)
    • Resources (People, Equipment & Materials)
    • Benefits

    Full integration means that changes in one parameter are reflected in other parameters.

    Outputs

    A Monte Carlo Simulation is performed to identify: 

    • Project risks that require maximum attention
    • Project targets that can be met with sufficient probabilities
    • Contingency reserves that must be created to meet project targets
    • Current probabilities of meeting project targets
    • Resources required for the reliable achievement of project targets
    • Project activities that require maximum attention

    MCS is only one of the methods of Quantitative Risk Analysis (QRA). In this and future posts, if not specified, under QRA I mean the MCS method. If there is any interest, we can discuss other QRA methods later.

    Monte Carlo Simulation Process

In project management, Monte Carlo Simulation works in the following way:

1. An integrated, logically driven Project Delivery Plan is developed and assessed.
2. People enter three estimates for the initial data that are uncertain (optimistic, most probable and pessimistic) and define what probability distribution each uncertain parameter has.
3. Risk events are included in the project risk model with their probabilities and impacts.
4. The software calculates the model and accompanying parameters over and over, each time using a different set of initial data in accordance with their probability distributions. The number of iterations is usually defined by the risk management software user and is usually measured in thousands.

As a result, we get distributions of possible outcome values.
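As a minimal illustration of steps 2–4 (not a substitute for a fully integrated model), the sketch below runs a hypothetical three-activity chain with triangular duration distributions and a single risk event:

```python
# Minimal Monte Carlo sketch: three sequential activities with triangular
# (optimistic / most probable / pessimistic) durations and one risk event.
import random

N = 10_000
activities = [(8, 10, 15), (4, 5, 9), (18, 20, 30)]    # hypothetical estimates, days
risk_probability, risk_impact_days = 0.30, 10           # hypothetical risk event

totals = []
for _ in range(N):
    total = sum(random.triangular(low, high, mode) for low, mode, high in activities)
    if random.random() < risk_probability:
        total += risk_impact_days
    totals.append(total)

totals.sort()
target = 40
p_meet_target = sum(t <= target for t in totals) / N
print(f"P80 duration: {totals[int(0.80 * N)]:.1f} days")
print(f"Probability of finishing within {target} days: {p_meet_target:.0%}")
```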

    This process has one important missing step: “Add corresponding corrective actions”. We will cover it later in a separate post.

    Simplicity Challenge

Any process may look simple at a high level, but to get the desired result, detailed instructions, tools and practice are often required. Everyone who has tried to cook knows that cooking is more complex than “Prepare ingredients, mix them, apply temperature processing, and serve”. So, why do we accept the idea that the application of the scientific method in complex and complicated environments could be as simple as: “Prepare model and estimations, add risks and uncertainties, simulate, and report”? There are a few reasons for that:

1. A burned dish can damage a planned dinner and the reputation of the chef, but decisions made based on misleading project risk analysis are likely to impact someone else, not the person who runs the analysis, or even the project manager who is responsible for project delivery.

So many times we have been told that communication is the most important skill in project management. Now we have many project managers who can perfectly explain why their project is late and over budget, but don’t know how to develop an optimised project delivery plan and run quantitative risk analysis. Adding even a small financial incentive based on achieved results would motivate project managers to learn advanced project management methods, including MCS.

Another problem is the disconnect between time, cost and benefits. Until project delivery plans are integrated with planned benefits, this challenge will always be there. Such integration helps to drive project decisions and understand how these decisions impact outcomes and benefits, not just time and cost. MCS could also be used to calculate the distribution of financial (NPV & ROI) and non-financial metrics.

    An example of this kind of integration was discussed in this post:

    https://saluteenterprises.com.au/project-success-criteria/

    2. If companies that promote their MCS tool or QRA training say that MCS requires many hours of effort, no one will buy their tool or service.

There are four levels in the Conscious Competence Learning Matrix. Companies that promote the MCS method can help us move from ‘Unconscious Incompetence’ to ‘Conscious Incompetence’, but reaching the next level, ‘Conscious Competence’, will require much more effort to grow deep knowledge of how the MCS method works, how to develop a reliable project delivery plan and which features in risk analysis tools are important.

3. The popularity of QRA is growing. More and more companies and projects include requirements to run QRA periodically and even specify that it must be done with the MCS method. Unfortunately, all too often the QRA is performed only to “tick the box” and its results don’t drive any project decisions.

    There is a simple test to identify misuse of the MCS method. Ask the Project Manager about the current probabilities of meeting project targets, which is one of the outputs of the MCS process.

    4. Precision creates an illusion of accurate estimation. MCS is a perfect tool to play a guesstimation game.

A knife can be used to cut cooking ingredients or as a weapon. MCS is likewise often used to justify desired or already made decisions, not to calculate required contingencies and identify critical risks and activities. In that case, it does not really matter whether the MCS method was applied correctly or which tool was used for the simulation.

The market is full of Monte Carlo Simulation software that primarily serves this purpose; the ability to generate colourful reports seems to be the most critical feature of such tools. If your project or portfolio runs MCS only to satisfy someone’s need to say that the analysis has been performed, or to justify a desired decision, you need not worry about the other MCS challenges.

    On the other hand, for business owners and accountable project managers, the ability to run their own independent project quantitative risk analysis based on a reliable model and risk analysis tool can save money, reputation and even lives. ‘Covid-19’ projects are a great example of it.

    Inaccurate or Misleading?

Niels Bohr, the Nobel laureate in Physics and a father of the atomic model, once said: “It is difficult to make predictions, especially about the future”. Any project model is just a proxy, and the result of risk analysis can never be precisely accurate. So why do we still try to make predictions? Because even forecasts that are not 100% accurate can still be used for our decisions! However, misleading forecasts are dangerous, as they drive wrong decisions.

    The challenge is to understand where to draw a line between “inaccurate” and “misleading” when MCS analysis is performed.

You have probably already heard the famous George Box quote: “All models are wrong, but some models are useful”. I prefer the full version:

    All models are wrong, but some models are useful. So, the question you need to ask is not ‘Is the model true?’ (it never is) but ‘Is the model good enough for this particular application?’

    In the application of project management, we can rephrase it as:

    “Is the Project Delivery Plan good enough to explain the impact of risks and uncertainties by applying the Monte Carlo Simulation method correctly?”

    In the series of future posts, we will try to find the answer to this question.

    Alex Lyaschenko

    PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

    Monte Carlo Simulation Challenges

Risk simulation is becoming popular, but most risk simulation tools, and the way they are used, lack some important functionality, which makes the results of the simulation unreliable.

    Monte Carlo Simulation – Myth or Reality?

Last year, I organised a poll on LinkedIn to understand what project practitioners think about Monte Carlo Risk Simulation:

    The Monte Carlo Simulation Method is the best method for quantitative project risk analysis: Myth or Reality?

The poll had a lot of attention, and many project and risk consultants shared their opinions.

    Based on discussions, comments and the final result we can make some conclusions:

• Monte Carlo Risk Simulation (MCS) is the most recognised quantitative project risk analysis method;
• There is no common acceptance of this method across project practitioners;
• Opinion about Monte Carlo Simulation is mostly based on perception rather than knowledge;
• The majority of planners and risk consultants are not aware of the critical functionality missing from risk simulation tools.

The popularity of MCS in recent years is primarily driven by companies that promote their own Monte Carlo Simulation software or Quantitative Risk Analysis (QRA) training. Unfortunately, all too often they mislead project practitioners by telling them that it is easy to apply the method and get a reliable result. This is how one leading American consulting company that sells Quantitative Risk Analysis training attracts its clients:

    “For many, Quantitative Risk Analysis (QRA) is a complex secretive technique, which relies on smoke, mirrors and mathematical trickery. The aim of this webinar is to draw back the curtain and show that QRAs are not that complex and by learning a few basic steps you can apply QRAs to any project to aid in their successful delivery.”

     

A demonstration of Monte Carlo principles based on the probability distribution of two dice may be good for a teenager, but projects are much more complex, and deep knowledge is required to understand how to apply MCS correctly and which tool could be used to get reliable results.

Based on my research, I found that various Monte Carlo Risk Simulation challenges are explained in conference presentations, blogs, white papers and books, but there is no single source where all the challenges are collected and explained. I decided to collect them and present the result at the Project Control Expo conference. Forty-five minutes is only sufficient to cover some key challenges at a high level. Fortunately, I am not limited by time and space on the Salute Enterprises blog, so we can discuss each challenge in detail.

I am going to write a series of “Monte Carlo Simulation Challenges” posts and discuss them on LinkedIn. If you are aware of any good sources that explain such challenges, please share them and join the discussions on LinkedIn.

    Alex Lyaschenko

    PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

    Schedule Health Metrics: Activity Constraints

In this article, we continue to review metrics for schedule health analysis. The first part, “Missing Dependencies”, was covered here. Now let’s talk about activity constraints.

    There are some project situations that require the use of constraints:

    Delayed Start

Sometimes a project has a situation where it is impossible to start an activity even if all prerequisite work is completed.

    Example: Project is ready to start testing but the test environment is only available from next week.

    Deadlines

    If a Project has committed dates, we need to understand how the project is performing against these commitments, also known as project deadlines.

Example: Based on the business case, the project committed to delivering three outputs by 1st October, 1st January and 1st April.

    Locked Dates

    Some projects have locked dates when there is no flexibility to move the start and finish dates.

    Example: The Olympic opening ceremony in Brisbane is locked to 23 July 2032.

    As Timely As Possible

If there is flexibility in when an activity can be performed without impacting key delivery dates, there are three options for specifying the start and finish dates:

    • As Soon As Possible (ASAP)
    • As Late As Possible (ALAP)
• As Timely As Possible (ATAP)

Starting an activity ASAP is usually considered the best approach. In practice, however, starting the activity as soon as it can start or leaving it ALAP can both have problems associated with them. ATAP can be achieved with constraints, lags or calendars; constraints are more visible and are often the best approach.

    Examples:

    • If the project might require delivery through the Caribbean in September, they might want to put in a constraint to delay the trip until October 15, after the height of the Atlantic hurricane season.
    • A union contract may expire on June 3, with the potential for a strike. In that case, the project might want to make sure that certain work gets performed before then.

    How many constraint types are required to simulate these scenarios?

    All possible scenarios in projects could be specified with only two types of constraints:

    • Start No Earlier Than (SNET)
    • Finish No Later Than (FNLT)

No Earlier Than (NET): Prevents an activity from being scheduled to start before a specific calendar date.

No Later Than (NLT): Specifies that an activity needs to be finished no later than a specific calendar date.

‘Deadline’ and ‘Locked dates’ events are represented with a milestone. As a milestone always has the same start and finish dates, there is no need for additional Finish No Earlier Than (FNET) and Start No Later Than (SNLT) constraints; they would just add complexity to the schedule.

    ‘Locked dates’ and ‘As Timely As Possible’ scenarios could be represented by applying SNET and FNLT constraints simultaneously.

ASAP and ALAP are not actually constraints, even though they are listed as constraint types in Primavera and Microsoft Project. ASAP and ALAP could be presented as a switch, as is done in Spider Project.

Activity constraints are essential to represent real-life project scenarios in schedules correctly. At the same time, incorrect application of constraints may have a negative impact on the management of project reserves or even the project delivery dates. Activity constraints may impact Free Float, Total Float and even the Project Critical Path, and this impact has to be analysed and accepted. So, we need Schedule Health Metrics to identify the incorrect application of activity constraints. There are some nuances that have to be discussed first.
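A simplified sketch of why this happens: in the forward pass an SNET constraint pushes the early start, and in the backward pass an FNLT constraint caps the late finish, so Total Float shrinks (dates are plain day numbers and calendars are ignored for brevity):

```python
# Simplified CPM pass for one activity to show how SNET / FNLT constraints
# change early/late dates and therefore Total Float (durations in days,
# dates as day numbers; calendars ignored for brevity).

def activity_floats(logic_early_start, duration, logic_late_finish,
                    snet=None, fnlt=None):
    early_start = max(logic_early_start, snet) if snet is not None else logic_early_start
    early_finish = early_start + duration
    late_finish = min(logic_late_finish, fnlt) if fnlt is not None else logic_late_finish
    total_float = late_finish - early_finish
    return early_start, early_finish, late_finish, total_float

# Unconstrained: driven purely by logic.
print(activity_floats(10, 5, 25))                 # TF = 10
# SNET delays the early start; FNLT caps the late finish -> float shrinks.
print(activity_floats(10, 5, 25, snet=14))        # TF = 6
print(activity_floats(10, 5, 25, fnlt=18))        # TF = 3, and could go negative
```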

    Constraint Terminology

    Scheduling tools use different terms to describe types of constraints.

    As explained above only ‘Finish No Later Than’ and ‘Start No Earlier Than’ are true types of constraints.

Primavera has 4 types of Start and 4 types of Finish constraints. Not all planners understand the difference between ‘Finish On’ vs ‘Mandatory Finish’ and ‘Start On’ vs ‘Mandatory Start’. The key difference is that ‘Mandatory’ constraints allow violation of schedule logic while ‘On’ constraints are complementary. As a result, this impacts the calculation of Total Float.

Microsoft Project lets users decide whether such a conflict is allowed:

    Manage Deadlines

Scheduling tools also have totally different approaches to managing ‘Deadline’ scenarios.

• Primavera has separate constraint types to simulate ‘Locked Dates’ and ‘Deadlines’ scenarios.
• MS Project has an additional ‘Deadline’ field for setting up the deadline date.
• Spider Project always calculates Total Float both with and without the deadline (NLT). It has additional columns to present and compare both results.

    These differences have to be reflected in the Constraints Health Check metrics.  

    Hard and Soft Constraints

Some portfolios use the terms ‘Hard’ and ‘Soft’ to classify constraints.

    There is no common agreement on what ‘Hard’ and ‘Soft’ actually mean.

Some Project Managers assume that ‘Soft’ constraints are ‘nice to have’ constraints that could be deleted if fast-tracking is required. Others take ‘Soft’ constraints to mean resource (not technical) constraints. Arguably, the most popular distinction amongst planners is based on logical priority: ‘Soft’ constraints never violate network logic (but may still generate negative total float), while ‘Hard’ constraints take priority over network logic dependency relationships.

    ‘Hard’ constraints:

A delay of activity ‘A’ doesn’t push activity ‘B’. The ‘No Later Than’ constraint is followed and the relationship A->B is broken. The project is not logically driven anymore.

    ‘Soft’ constraints:

A delay of activity ‘A’ pushes activity ‘B’. The ‘No Later Than’ constraint is ignored and the relationship A->B is followed. The project remains logically driven.

However, some Primavera planners define ‘Hard’ constraints as ‘Mandatory Finish’ and ‘Mandatory Start’ constraints only, while others also add ‘Start On’ and ‘Finish On’ constraints to the definition. Recently, I have seen scheduling standards that in one part of the document define ‘Hard’ as Mandatory Start and Finish only, and in another part also include ‘Start On’ and ‘Finish On’. This could be driven by different definitions in international scheduling standards.

    Planning & Scheduling Excellence Guide (PASEG) published by National Defence Industrial Association (NDIA) defines ‘Hard’ constraints based on Primavera terms (or did Primavera developers apply NDIA term?) and the definition includes 2 types only: ‘Must Finish On’ and ‘Must Start On’.

The GAO Schedule Assessment Guide uses Microsoft Project terms (or did Microsoft developers apply the GAO terms?), and its definition of a ‘Hard’ constraint includes 4 types: ‘Start No Later Than’, ‘Finish No Later Than’, ‘Must Start On’, and ‘Must Finish On’.

    ‘Start No Later Than’ and ‘Finish No Later Than’ constraints don’t violate the logic but are somehow still defined as ‘Hard’.  

    DCMA 14 Points Schedule Assessment

The DCMA 14 Points Schedule Assessment includes a metric to control the percentage of Hard constraints: “Number of activities with hard or two-way constraints should not exceed 5%”.

If a schedule health check includes the terms ‘Hard’ and ‘Soft’, clear definitions need to be established to avoid misunderstanding.

    Points to Consider

• All constraints must be explained. Ideally, the clarification should be in the schedule, but if a scheduling tool doesn’t have a feature to document and export this information, a separate register should be created.
• As each project is different in nature, criticality and management style, it is not possible to define one metric that confirms that a schedule has too many constraints. At the same time, the number of constraints in a schedule is a very good indicator that the schedule might not be logically driven.
• Some Planners and Master Schedulers fail to appreciate the nuances of constraints and instead restrict their use altogether. Constraints are essential to represent the project work correctly. If constraints are prohibited, schedules cannot be used to drive project delivery, only to satisfy someone’s need to say that the project has a schedule.
• What type of schedule health metric to use depends on the scheduling tool. Constraints in Spider Project never violate network logic, so a metric is not required to control the quality of such schedules. In Primavera and Microsoft Project, incorrect application of constraints may impact the calculation of the Critical Path and Total Float, so users of these systems have to learn how the constraints work and decide which types of constraints have to be limited or restricted. They need a schedule quality metric to identify constraints.
• If an activity has already started, it doesn’t matter whether the start was constrained or not. Similarly, if an activity is completed, it doesn’t matter whether its start or finish was constrained. Such activities should be excluded from the metrics.
    • Level of Effort, Hammock and WBS Summary activities should be without constraints. A separate metric should be developed to control that.
• Primavera and Spider Project allow two constraints to be set on the same activity, while Microsoft Project allows only one. This limits Microsoft Project users’ ability to simulate all ATAP scenarios. As a workaround, all constraints (represented by milestones) can be stored under the same WBS; related activities are then linked with the corresponding milestones.
• ‘Locked Dates’ and ‘ATAP’ are actually calendar constraints, not characteristics of activities, and ideally should be realised in the schedule via calendars rather than activity constraint dates. Unfortunately, Primavera is not able to calculate the schedule correctly taking both activity AND resource calendar constraints into account; planners have to choose between activity OR resource calendars. Microsoft Project and Spider Project don’t have this problem. We will review this challenge in detail in a future post.

This also simplifies the control of constraints, as all constraints outside of that WBS are considered schedule quality issues.

     

• If an activity is left as late as possible, unforeseeable events can occur that make the activity take longer than anticipated, resulting in missed deadlines. So, it is strongly recommended to use a pessimistic estimate as the base for all ALAP activities, so there is sufficient contingency and the project is not delayed.

    Schedule Health Metrics: Constraints

Taking all these nuances into consideration, the following Schedule Health Metrics are useful:

    Constrained Activities

The number and percentage of activities with constraints is a good indicator that the schedule might not be logically driven.

    It is possible to develop filters to identify Hard and Soft constraints.
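A hedged sketch of such a filter over an exported activity table is shown below. The set of ‘Hard’ constraint types is deliberately configurable because, as discussed above, PASEG and GAO define the term differently; the constraint names and data are illustrative only:

```python
# Classify constrained activities as 'Hard' or 'Soft' from an exported table.
# The HARD_TYPES set is deliberately configurable: PASEG and GAO define
# 'hard' differently, so agree on the definition before running the metric.

HARD_TYPES = {"Mandatory Start", "Mandatory Finish"}           # one possible definition

activities = [                                                  # hypothetical export
    {"id": "A100", "constraint": None,                 "status": "Not Started"},
    {"id": "A110", "constraint": "Start On or After",  "status": "Not Started"},
    {"id": "A120", "constraint": "Mandatory Finish",   "status": "Not Started"},
    {"id": "A130", "constraint": "Finish On or Before", "status": "Completed"},
]

# Completed activities are excluded (see 'Points to Consider').
in_scope = [a for a in activities if a["status"] != "Completed"]
hard = [a for a in in_scope if a["constraint"] in HARD_TYPES]
soft = [a for a in in_scope if a["constraint"] and a["constraint"] not in HARD_TYPES]

print(f"Constrained: {len(hard) + len(soft)} of {len(in_scope)} "
      f"({100 * (len(hard) + len(soft)) / len(in_scope):.0f}%)")
print("Hard:", [a["id"] for a in hard])
print("Soft:", [a["id"] for a in soft])
```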

     

    Primavera Hard Constraints Filter

    Primavera Soft Constraints Filter

    Unexplained constraints

Another metric could help to identify constraints without an explanation. The method used to identify them depends on the way the information is captured.

     

    ALAP activities

Special attention has to be paid to all ALAP activities.

     

    Technical and Resource Constraints  

Apart from DATE constraints, there are other types of activity constraints that must also be taken into consideration:

    • Could an activity be split?
    • Does the activity need to be performed on the same date (or shift)?
    • How many minimum resources are required to complete the activity?
    • Is it technically possible to add more resources?
    • Would additional resources increase or decrease overall productivity?

     We are going to discuss these types of activity constraints in a future post.

    Alex Lyaschenko

    PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

    Schedule Logic Check Metrics

One of the fundamental questions for any schedule health assessment is: “Is the schedule logically driven?” There are some scheduling metrics that could help planners and schedulers find the answer to this question. In this post, we will review some of them that are related to missing dependencies.

    Missing Dependencies Metrics

Missing logic metrics are a group of metrics known as:

     

    • missing dependencies,
    • missing predecessors and successors,
    • open start & finish,
    • dangling activities.

    This check is not as simple as many project managers and schedulers might think. In fact, it is actually very hard to answer the question: Are there any missing or incorrect dependencies in the schedule?

The very popular DCMA 14 Points schedule assessment includes a check of logic (metric No. 1). Schedulers who are familiar with a scheduling tool (usually Primavera or Microsoft Project) but lack scheduling knowledge very often apply DCMA-14 “blindly”. Their schedule may have critical logical issues but is reported as a schedule of sufficient quality. I have seen this issue in small business projects and also in large construction programs. So, let’s review what needs to be considered for a comprehensive schedule logic analysis.

    Missing Dependencies?

There is no metric that can confirm that a schedule has no missing dependencies.

The fact that each activity (except the first and last) has a predecessor and successor doesn’t mean that another dependency from/to this activity is not missing. The only way to guarantee that a schedule has correct logic is to implement correct schedule development and maintenance processes and apply control to ensure these processes are followed.

Each reporting period, logic changes have to be analysed, documented and explained.

    While scheduling metrics can’t guarantee that dependencies are not missing, some of them are good indicators that logic has to be checked and, if required, fixed. There are four primary schedule quality metrics used to indicate that a schedule may have a missing dependency:

    ♣   Missing Predecessor

All ‘Not Completed’ activities, except the first activity and external incoming activities, must have a predecessor(s).

    ♣   Missing Successor

All ‘In Progress’ or ‘Not Started’ activities, except the last activity and external outgoing activities, must have a successor(s).

    ♣   Open Start

Activities where the only predecessor(s) are either ‘Finish-to-Finish’ or ‘Start-to-Finish’, resulting in an open start to the activity. All ‘Not Completed’ activities, except the first activity and external incoming activities, must have at least one ‘Finish-to-Start’ or ‘Start-to-Start’ predecessor.

    ♣   Open Finish

Activities where the only successor(s) are either ‘Start-to-Finish’ or ‘Start-to-Start’, resulting in an open finish to the activity. All ‘In Progress’ or ‘Not Started’ activities, except the last activity and external outgoing activities, must have at least one ‘Finish-to-Start’ or ‘Finish-to-Finish’ successor.
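As an illustration, the four checks can be expressed over a simple activity/dependency table. The sketch below uses hypothetical data and, for brevity, omits the exclusions for milestones, external links and LoE/WBS-summary activities discussed next (it also flags the last activity, which the metric would normally exempt):

```python
# Four missing-logic checks over a simple activity/dependency table.
# Hypothetical data; exclusions for milestones, external and LoE/WBS-summary
# activities are left out for brevity.

activities = {                     # id -> status
    "A": "Completed", "B": "In Progress", "C": "Not Started",
    "D": "Not Started", "E": "Not Started",
}
links = [("A", "B", "FS"), ("B", "C", "FF"), ("C", "E", "SS"), ("D", "E", "FS")]

preds = {a: [lnk for lnk in links if lnk[1] == a] for a in activities}
succs = {a: [lnk for lnk in links if lnk[0] == a] for a in activities}
open_scope = [a for a, s in activities.items() if s != "Completed"]

missing_pred = [a for a in open_scope if not preds[a]]
missing_succ = [a for a in open_scope if not succs[a]]
open_start   = [a for a in open_scope
                if preds[a] and not any(lnk[2] in ("FS", "SS") for lnk in preds[a])]
open_finish  = [a for a in open_scope
                if succs[a] and not any(lnk[2] in ("FS", "FF") for lnk in succs[a])]

print("Missing predecessor:", missing_pred)   # ['D']
print("Missing successor:  ", missing_succ)   # ['E'] (the last activity, normally exempt)
print("Open start:         ", open_start)     # ['C'] (only an FF predecessor)
print("Open finish:        ", open_finish)    # ['C'] (only an SS successor)
```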

There are a number of points that have to be taken into consideration when these metrics are applied:

    • A project may have external dependencies and it is not always possible, and in some cases, not recommended to link schedules from different projects. Milestones representing such dependencies do not have predecessors or successors.
    • A Project may have Level of Effort (LoE), Hammock and WBS activities. Different scheduling tools implement these types of activities differently and this impacts the “missing logic” analysis. 

–   Hammocks in MS Project are shown without a predecessor and a successor, but there is no way to identify “Hammock”-type activities in the system.

–   “WBS summary” activities in Primavera don’t need a predecessor or a successor, so they have to be excluded from the analysis. Another metric is required to check that “WBS summary” activities don’t have logic, as Primavera permits linking an activity to a “WBS summary” activity.

–  In Primavera, if all of an activity’s successors are LoE or “WBS summary” activities, the activity is actually missing a valid successor, because, based on the logic, there is no requirement to complete this activity. The same applies to predecessors. Such cases can only be identified via a specially developed project report, not via filters.

    • If a milestone has “Open Start” or “Open Finish” it actually doesn’t create any issues with logic. Milestones could be excluded from these metrics.
• Microsoft Project has a unique feature to link activities with summary tasks. If this technique is applied, it is very hard to analyse the logic, as some of the activities are driven by logic from activities and some from summary tasks, so it is not recommended. An additional metric is required to identify summary tasks with predecessors and successors.

    Filters

Missing logic metrics can be developed via filters in Primavera and Microsoft Project but, as described above, the specifics of each tool have to be taken into account. Spider Project already has all these metrics built in and also allows comprehensive additional filters to be developed as required.

    Primavera filters to identify activities with missing logic:

    ♣  No Predecessors

    ♣  No Successors

    ♣  Open Finish

    ♣  Open Start

    External Schedule Analysis Tools

When an external schedule analysis tool is used, each metric has to be configured to address the challenges explained above. For example, Acumen Fuse has a pre-developed “Missing predecessors” metric. However, the metric includes ALL activities with missing predecessors. If an activity has already started (or is completed), it doesn’t matter whether it has a predecessor or not. It is recommended to configure this metric to exclude such activities. Otherwise, the Acumen Fuse report creates “noise” and may incorrectly show that the schedule has predecessor issues when it actually doesn’t.

    Alex Lyaschenko

    PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma