Project Quiz: Uncertainties and Risks

Project results depend on activity uncertainties, resource uncertainties, and risk events. However, these project components also impact each other. Let's demonstrate this with a project quiz.

A project has two streams: A->B and C->D, and will be completed when both streams are completed.

The project has activity uncertainties, resource uncertainty and one risk:

Activity uncertainties
The duration of each activity is estimated as a range:
A: 3-5 days
B: 6-8 days
C: 2-3 days
D: 10-11 days

Resource uncertainty
Activities A and C require the same skill, so two resources are needed to perform them in parallel. However, whether the second resource will be available is unclear.

Risk
The project has a risk associated with Activity D. Activity D will take one day longer if the risk materialises. There is an opportunity to fully mitigate the risk, but it requires additional expenses.

The goal is to complete the project as soon as possible with the fewest expenses.
Therefore, can you find:

a) Project duration (as a range)?
b) Should the risk be mitigated or not?

In the next posts, we will review and analyse answers.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

Doubled Resource Estimated Duration optimisation metric.

Stephen Devaux proposed several control and optimisation metrics in his book Total Project Control.

One of them, Critical Path Drag, was explained previously (read more in the post: Critical path drag). CP Drag identifies activities that directly contribute to project duration and measures the potential to reduce it. However, while activities with positive or negative CP Drag are good candidates for optimisation, the metric does not guarantee that the desired optimisation is possible.

Resource elasticity

Many factors impact activity duration, as discussed in the post Critical path method challenges: activity duration estimation. One of them is resource elasticity.

Resource elasticity

The ability to change activity duration by changing resource allocation.

Stephen recommends an additional CPM metric, Doubled Resource Estimated Duration (DRED), to measure resource elasticity. DRED is a second duration estimate for establishing a task’s resource elasticity.

It is established by asking the question:

“If Activity X is estimated to take (20?) days with current resource estimates, how long would it take if we doubled (or could double) the resources?”

Some reasonable answers are:

  • 1D (Rare)
  • 10D (Said to be “perfectly resource elastic” — same total resource use, double the daily resources for half the time)
  • 12D (Very common situation — some “bang for the buck”, but not perfect).
  • 17D (Some value, but not huge — requires rigorous analysis of the true cost to be confident in implementing such a change).
  • 20D (Not resource elastic at all).
  • 30D (Negatively resource elastic — the resources will get in each other’s way, like in working in a cockpit).
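
The scale above can be sketched as a small classifier; the thresholds mirror the sample answers, but the `elasticity` helper and its labels are illustrative, not from the book:

```python
def elasticity(original_days, dred_days):
    """Classify a task's resource elasticity from its DRED
    (the estimated duration if resources were doubled)."""
    ratio = dred_days / original_days
    if ratio <= 0.5:
        return "perfectly resource elastic (or better)"
    if ratio < 1.0:
        return "partially resource elastic"
    if ratio == 1.0:
        return "not resource elastic"
    return "negatively resource elastic"

# The sample answers for the 20-day activity above:
for dred in (1, 10, 12, 17, 20, 30):
    print(f"{dred}D -> {elasticity(20, dred)}")
```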

Stephen recommends capturing the DREDs for each activity in a secondary Duration field and performing true cost analysis on them. Cost analysis includes CP Drag, Drag Cost and true cost computation.


The DRED metric has advantages and downsides.

Simple question
The question 'If we could double the resource allocation, what gain would we get?' is straightforward, and the answer gives a good sense of resource elasticity.

Easy to implement
Creating an additional field to capture a second activity duration is not difficult and can be done in most planning tools.

Highest elasticity
It helps to identify the activities with the highest resource elasticity.



Lost effort
It requires identifying DREDs upfront for all activities, but most of them (>90%?) will never be used, so the effort is lost.

Resource quantity
Resource quantity is not the only option to reduce activity duration. Other options may include changing resource calendars, removing a material supply bottleneck, assigning resources with better productivity, reducing activity scope, changing the process, etc.

Optimistic duration
DRED may not be the optimistic activity duration, and that is what we actually want to know. Activity duration is usually a spectrum between optimistic and pessimistic durations. DRED lies somewhere on that spectrum, but we don't know where.

Monte Carlo Analysis
DRED can't be used for Monte Carlo Simulation Analysis. MCS requires optimistic, expected and pessimistic estimates.
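
To illustrate what MCS actually consumes, here is a minimal sketch using only Python's standard library and an invented 3-point estimate; a lone DRED figure cannot feed such a simulation:

```python
import random

def mean_duration(optimistic, most_likely, pessimistic, runs=10_000, seed=1):
    """Average activity duration sampled from a triangular distribution
    built from a 3-point estimate -- the input MCS needs and a single
    DRED value cannot provide."""
    rng = random.Random(seed)
    total = sum(rng.triangular(optimistic, pessimistic, most_likely)
                for _ in range(runs))
    return total / runs

# Illustrative 3-point estimate: 12 / 17 / 25 days
print(round(mean_duration(12, 17, 25), 1))  # close to (12 + 17 + 25) / 3 = 18.0
```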

Triple estimation
What if it is possible to triple allocation? Should we also have the TRED metric? What if a resource with half capacity is available? Could such a resource be useful?

Resource availability
DRED is CP Drag's complementary metric, but even together the two metrics are not sufficient for optimisation. Practical optimisation depends on resource availability.

Freed-up resources
For schedule optimisation, it is important to understand if an activity can be performed with fewer resources, as freed-up resources can be used to accelerate CP Drag activities. DRED doesn’t give us this vital information.

Alternative approach

An alternative to DRED is Optimistic Activity Duration, or Crash Duration from the original CPM method (read more in the post: Original critical path method and beyond).
While DRED is simpler, Optimistic Activity Duration is more accurate and more usable.

There are four options to identify Optimistic Activity Duration:

  • SME estimates: collect optimistic durations from SMEs.
  • Statistical data: use collected statistical data.
  • Primary uncertainties: calculate Optimistic Activity Durations based on primary uncertainties (volume of work, resource productivity, calendars) as 3-point estimates, together with technological constraints (max possible assignment quantity, min required workload, etc.).
  • Statistical primary uncertainties: the same as above, but using statistical primary uncertainties.

Resource Supply limits

Optimistic activity duration has a theoretical and a practical variant.

It is important to know the theoretical duration, which is based on technological limitations only, and the feasible duration, which also considers resource supply constraints.


The main idea of DRED is to develop a proxy to complement CP Drag by analysing activity durations with double resource assignment. DRED is easy to understand and use but it needs to be applied carefully. An optimistic activity duration may be different from DRED.

An alternative approach is to identify a 3-point activity estimate. It requires more effort but gives more accurate data for analysis and optimisation.


Overlap activities with a positive lag challenge

I've heard from some planning consultants that negative lags (leads) must be prohibited and replaced with positive lags from a predecessor activity. While this workaround has certain benefits, it is not a universal way of managing activity overlaps. It is essential to consider the downsides as well.

In general, activity overlaps are beneficial as they accelerate project delivery. However, creating and managing overlaps increases the delivery model's complexity. Projects can be delivered more quickly and with less funding if that complexity can be effectively managed.

There are three possible ways to simulate activity overlaps in a project delivery model:

  • FSNL: Use Finish-to-Start dependencies with negative lags (leads)
  • SSPL: Use Start-to-Start dependencies with a positive lag
  • Apply an artificial split of the predecessor activity

Each option carries benefits and downsides.

In this post Artificial activity split problem. Resource levelling challenge, we reviewed the downsides of an artificial split. Now, let’s discuss SS dependencies with a positive lag option.

In this example ‘Activity D’ could/should/must start 1 day before the milestone.

Currently, the milestone date is driven by the completion of Activity C, so replacing the FS-1d dependency with an SS+3d lag is possible. Every activity's and the milestone's start and finish dates will remain the same.
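
A quick day-number sketch (day 0 = the start of Activity C; all figures illustrative) shows the equivalence, and also why it is fragile:

```python
# Illustrative day arithmetic for the example above (day 0 = C's start).
c_start, c_duration = 0, 4
c_finish = c_start + c_duration               # the milestone is driven by C's finish

d_start_fs_minus_1 = c_finish - 1             # FS-1d: one day before C finishes
d_start_ss_plus_3 = c_start + 3               # SS+3d: three days after C starts
print(d_start_fs_minus_1, d_start_ss_plus_3)  # 3 3 -- identical, so the swap looks safe

# But if C's duration grows to 6 days, only the FS-1d date follows:
c_finish = c_start + 6
print(c_finish - 1, c_start + 3)              # 5 3 -- the lag now needs manual updating
```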

The SSPL workaround addresses some problems but also creates other planning simulation issues.


There are reasons why planners may prefer SSPL over FSNL.

Benefit 1
For some people, it is easier to imagine a delay than overlap.

Benefit 2
Negative lags can be used to hide project delays, and it is easier to prohibit them completely than to identify such cases.

Benefit 3
Situations where a successor activity starts before its predecessor activity look like anomalies.

The start date of Activity D is before the start date of the Milestone. Practically, there is no anomaly in this example as the milestone is not an activity (just a logical connector), and the start date of Activity D is later than the start dates of all predecessor activities (A, B, and C).

However, the logical anomaly is possible in other scenarios and must be addressed.

Benefit 4
An overlap may be linked to achieving a volume of work in the predecessor activity.
For example, when 80% of Activity C's volume is achieved, the successor can start. In this case, the SS + lag dependency is the correct representation of the logic. However, to simulate it correctly, the activity and the lag need to be volume-based, not duration-based: 80% of the duration may not be the same as 80% of the volume.

Unfortunately, many popular planning tools don’t support volume lag. Read more in the post: Volume lags


The SSPL approach has downsides.

Problem N1
The duration of the positive lag (SS+3d) has to stay aligned with the predecessor activity duration (4 days). There are MANY reasons why the duration of Activity C may change: a change in activity or resource calendars, a change in resource assignment, a clarified volume of work, etc. Whatever the change, the lag duration must be revised and updated, and it may not be easy to identify which schedule changes impact it.

Problem N2
The Milestone has non-driving predecessor activities (A and B). If any of them is delayed and pushes the milestone, it should also push the start of Activity D. However, as Activity D is linked to Activity C only, the schedule incorrectly shows that it can start as planned. As a workaround, Activity D may be given SS + lag logic to each predecessor activity, but this increases complexity even further because of Problem N1.
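
Problem N2 can be sketched with a toy forward pass (dates illustrative): the milestone tracks the latest of A, B and C, while the SS+3d link ties Activity D to C alone:

```python
def forward_pass(a_finish, b_finish, c_start, c_duration):
    """Illustrative forward pass for the example network."""
    milestone = max(a_finish, b_finish, c_start + c_duration)
    d_start = c_start + 3        # SS+3d: D ignores A and B entirely
    return milestone, d_start

print(forward_pass(a_finish=3, b_finish=3, c_start=0, c_duration=4))  # (4, 3): as intended
print(forward_pass(a_finish=8, b_finish=3, c_start=0, c_duration=4))  # (8, 3): A slips, D doesn't move
```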

Problem N3
If Activity C has a delay after it commences, it will push the milestone but not Activity D.

Problem N4
The SS + lag logic doesn't actually deliver Benefit 2. Projects can still hide delays with SS + lag by not updating the lag duration correctly.

Problem N5
This workaround only works if the lag calendar is the same as the predecessor's calendar. Popular planning tools have limited capability to configure lag calendars: Microsoft Project applies the successor's calendar, and Primavera allows configuring the lag calendar only for the whole project, while there are scenarios where the lag calendar needs to follow the successor, not the predecessor.

Problem N6
The above problems may impact the results of Monte Carlo Simulation Analysis. Read more in the post: Artificial activity split problem. Resource levelling challenge.


Activity overlaps are beneficial but must be correctly simulated in the project delivery model.

Application of the 'SS + lag' logic is appropriate for some scenarios, but it is not a universal way to represent activity overlaps. The approach has downsides that diminish its potential benefits.


Resource levelling challenges. Time optimisation.

A recent LinkedIn poll and discussion revealed that planners/schedulers rarely use automatic resource levelling. They prefer to level resources manually or leave the resource constraint challenge to someone else. As some of you have expressed curiosity regarding the poll results, I have decided to expand on this topic in more detail and explain resource levelling challenges.

The resource-constrained project scheduling problem

The project resource levelling challenge is known as the resource-constrained project scheduling problem (RCPSP).

RCPSP is a classic optimization problem in project management. In this problem, the objective is to schedule a set of tasks within a specified time frame while considering the limited availability of resources.

Optimised project delivery is achieved when optimum use is made of the critical resources and the flow of work is harmonised. This was the primary focus of Henry Gantt's work when he developed his famous Gantt chart, and it was also the mission of Kelley and Walker when they developed the original CPM system. Not long after the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM) were invented, RCPSP surfaced, capturing the attention of researchers worldwide due to its broad practical relevance and intriguing combinatorial complexity. RCPSP takes into account both task (activity) dependencies and the limited availability of resources to execute a project.

In 1975, Professor Leonid Kantorovich, USSR, and Professor Tjalling C. Koopmans, USA, received the Nobel Prize in Economics for their contributions to the theory of optimum allocation of resources.

A search of the European Journal of Operational Research yields 741 articles dedicated to RCPSP, 257 of which were published in the last 5 years. This high volume is not related to breakthroughs in Generative AI. Unfortunately, the recent exponential progress in Generative AI does not help with the resource-constrained project scheduling problem, as planning is a different domain of human and artificial intelligence.

At the same time, the first AI-based systems that help optimise resource-constrained schedules were developed in the 1980s. Some of today's planning tools can also be used for RCPSP, but as we can see below, not all of them have sufficiently good algorithms.

The history of levelling algorithms is a fascinating story that remains ongoing and warrants a dedicated post.

RCPSP demonstration

To demonstrate the challenge, I have developed a simple scheduling fragment. The schedule has nine activities grouped into three streams, and one full-time skilled resource is required for each activity.

Three types (skills) of resources (Team A, Team B, and Team C) are needed to deliver this work.

Let’s analyse resource demand:

The project needs three ‘Team A’, two ‘Team B’, and two ‘Team C’. If they are available, the final milestone will be delivered in 120 days. The critical path is Activity 1, Activity 2, and Activity 3.

However, the developed schedule is feasible only if the required demand is available.

Let’s assume that only one ‘Team A’, one ‘Team B’, and one ‘Team C’ are available.

To develop a feasible schedule, additional resource dependencies (resource logic) need to be identified. We aim to find the combination of resource dependencies that gives us the shortest overall project duration. We will discuss RCPSP with other optimisation criteria in future posts.

There are three ways to perform levelling:

  • Automatic: use a built-in scheduling algorithm
  • Manual: level resources manually
  • Semi-automatic: use an automatic scheduling algorithm, but manually configure activity priorities to point the algorithm in the right direction.

There are multiple ways to sequence the activities, and to guarantee the optimal solution we would have to try them all. As this example is relatively simple, that is possible, though time-consuming. Real-life schedules have many more activities and resources. The overall number of combinations grows exponentially with the number of activities and resources, so even with very powerful computers it is impossible to evaluate every combination, even for relatively small scheduling fragments.
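
The impact of activity ordering can be illustrated with a greedy serial scheduler; the durations, teams, and dependencies below are invented for illustration, not the figures from this example:

```python
def level(order, dur, team, preds):
    """Serial schedule generation: take activities in priority order and
    start each as early as precedence and its (single) team allow."""
    free = {}   # when each team next becomes available
    fin = {}    # finish day of each scheduled activity
    for a in order:
        earliest = max((fin[p] for p in preds.get(a, ())), default=0)
        start = max(earliest, free.get(team[a], 0))
        fin[a] = start + dur[a]
        free[team[a]] = fin[a]
    return max(fin.values())  # project makespan

# Three short streams sharing two teams (hypothetical data):
dur  = {"A1": 2, "B1": 6, "C1": 2, "A2": 4, "B2": 2, "C2": 4}
team = {"A1": "X", "B1": "X", "C1": "X",
        "A2": "Y", "B2": "Y", "C2": "Y"}
preds = {"A2": ("A1",), "B2": ("B1",), "C2": ("C1",)}

print(level(["B1", "A1", "C1", "A2", "B2", "C2"], dur, team, preds))  # 18
print(level(["A1", "C1", "B1", "A2", "B2", "C2"], dur, team, preds))  # 16
```

Even in this toy case, deferring the long B1 shortens the makespan; a brute-force search would have to evaluate every feasible ordering to prove which priority list is best.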

Automatic resource levelling

Microsoft Project, Primavera P6 and Spider Project can perform resource levelling. Microsoft Project and Primavera each have one optimisation method; Spider Project has five.

Option 1

Microsoft Project, Primavera and Standard method in Spider Project gave us the same result: 168 days.

Microsoft Project


Spider Project

The solutions calculated by Spider Project and Primavera are identical. Let's analyse the result:

  • Almost all activities are on the resource-critical path;
  • Activities from the original critical path (A1, A2, A3) are still critical;
  • Team A and Team C don't have idle time;
  • Four resource dependencies were added.

Option 2:

Alternative levelling methods (‘Optimisation’ or ‘Optimisation Plus’ in Spider Project) found an alternative way to order activities:

  • This solution has fewer activities on the resource-critical path. Three of them have positive total float;
  • Activities from the original critical path (A1, A2, A3) are still critical;
  • 'Team A' and 'Team B' have idle time;
  • Three resource dependencies were added.

This option requires only 144 days for completion: 24 days (14%) shorter than Option 1.

Does this mean that Option 2 also requires less funding? It is logical to assume that quicker should mean cheaper, but until we calculate it, we can't be sure. We will test this hypothesis in future posts.

Manual levelling

I asked my LinkedIn community to perform the resource levelling manually, share their results, and advise how long the optimisation took. Almost all participants found the 144-day solution and reported that it took about 15 minutes.

This scheduling fragment is actually VERY easy to level. As the first activity in the second stream (Activity B1) has a much longer duration than Activities A1 and C1, it looks logical that starting with Activity B1 is not the best idea. That leaves only two options: start with A1->C1 or with C1->A1, reducing the number of starting options from six to two.

One of the options (C1->A1->B1) gives a better result. Additionally, it levels Team B.

Now, all we need to do is delay Activity B3. It gives us the 144-day result.

However, this task proves challenging for standard built-in levelling algorithms. Microsoft Project started with A1 -> C1 -> B1, while Primavera and Spider Project started with A1 -> B1 -> C1. Only the Optimisation method in Spider Project recognises that starting with Activity C1 yields the optimal solution.

Automatic vs Manual levelling

Through this experiment, I have tried to demonstrate that even a simple scenario could be too complex for a planning tool. Participants in the experiment discovered a better solution compared to those automatically calculated by Microsoft Project or Primavera. Experienced planners who use these systems know that automatic resource levelling is a ‘never touch’ function, as the duration of the schedule is likely to be far from optimised.

At the same time, manual resource levelling is a practical option only as an exception, not the rule. The exceptions are linear projects with one obvious critical (but limited) skill, or small scheduling fragments. Still, for cost optimisation all resources need to be optimised, not only those that impact the critical path.

In all other circumstances, manual resource levelling is not a viable option. As previously mentioned, the challenge lies in the exponential growth of combinations, rendering manual levelling impractical for broader scenarios.

If we add only three new activities to the reviewed example, Activity A4, Activity B4, and Activity C4 (Team D), it takes much longer to find an optimised solution manually.

Resource volatility

Another reason why manual levelling is not a practical option is resource volatility. Conducting a one-time optimisation is not sufficient, as resource demand and supply are subject to constant fluctuations.

Demand is volatile due to variance in performance and accuracy in planning: even if the required resource quantity is stable, the times when resources are needed are likely to differ from the original plan. Supply is volatile due to demand from other activities (or even projects) and unplanned changes (broken equipment, sick leave, etc.).

Project Delivery optimisation must be performed after each reporting cycle to ensure the project’s plan is still optimised and feasible.

In our example, altering the calendar of one resource may trigger a change in resource logic.


When project conditions change, three steps are needed to perform new levelling:

  • Remove all resource dependencies;
  • Find a new optimisation solution;
  • Add new resource dependencies.

Each step is time-consuming, especially when a planning tool doesn’t provide an opportunity to classify dependencies. This obvious feature is missing in Microsoft Project and Primavera P6.

It is a big help when a planning tool can perform these steps automatically. 

Optimised vs near-optimised

Once a result is calculated, we cannot be certain that the proposed option is the best possible solution; to confirm this, we would need to examine all possible combinations. Even advanced AI-powered RCPSP systems can only identify near-optimised solutions, meaning there is no guarantee that a better solution doesn't exist. However, if a proposed solution is better than what the project team can find manually, it is beneficial to use it.


  • The resource-constrained project scheduling problem (RCPSP) presents a complex mathematical challenge. While scientists continuously generate ideas to tackle it, many of these ideas have not yet found practical implementation.
  • Different software packages produce different results for the same projects when resources are limited. Popular scheduling tools often fall short in effectively optimizing resource-constrained schedules. 
  • Resource volatility adds another layer of complexity, requiring ongoing optimisation to adapt to fluctuating demand and supply.
  • A recent LinkedIn poll highlighted the infrequent use of automatic resource levelling by planners/schedulers, who often opt for manual methods or delegate the task altogether.
  • The software that calculates the shortest resource-constrained schedules may save a fortune for its users. 


Quantitative Risk Analysis vs Risk Matrix

The current Global Project Risk Management maturity level could be described as a stage in which the risk management community has already learned the Risk Matrix (RM) downsides but still hasn’t learned quantitative risk analysis (QRA) pitfalls.

Again and again, project risk consultants identify the downsides of the project RM tool and recommend QRA as the better alternative. Unconsciously, they compare the negative sides of RM with the positive sides of QRA.

Monte Carlo Simulation, the most recognised QRA method, is becoming more popular, but most risk simulation tools, and the ways they are applied, are missing some important functionality. This makes the simulation results unreliable.

The next stage will be the discovery of QRA downsides.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma