## Monte Carlo Simulation Challenges: Picture-Based Simulation

Timelines

This is a picture:

And this is a picture:

And this is a picture:

The only difference between these pictures is the tool used to draw them: highlighters, PowerPoint, or a Gantt chart generator. The value of these pictures is exactly the same.

Pictures with uncertainties
If this project has duration and cost uncertainties, it is possible to draw other pictures.

Probability Distribution:

Scatter Diagram:

These pictures are NOT the result of the Monte Carlo Simulation method applied to forecast project delivery, taking project uncertainties into account.

If the base for the risk simulation is a picture, the result is just another picture.

Monte Carlo Simulation process

In project management, Monte Carlo Simulation has to be applied in the following way:

• An integrated, logically driven, dynamic Project Delivery Plan is developed and assessed;
• People enter three estimates (optimistic, most probable and pessimistic) for each uncertain input and define the probability distribution of each uncertain parameter;
• Risk events are included in the project risk model with their probabilities and impacts;
• Corresponding corrective actions are added if target achievements are endangered;
• The software calculates the model and the accompanying parameters over and over, each time using a different set of initial data drawn in accordance with their probability distributions. The number of iterations is usually defined by the user of the risk management software and is typically measured in thousands.

As a result, we get distributions of the possible outcome values.
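The steps above can be sketched in a few lines. This is a minimal illustration, not a real risk model: the three-point estimates for two hypothetical sequential activities, the 20% risk event and its 5-day impact are all invented numbers.

```python
import random

def simulate_p80(n_iterations=10_000, seed=42):
    """Toy Monte Carlo run: two sequential activities plus one risk event."""
    random.seed(seed)
    totals = []
    for _ in range(n_iterations):
        # Three-point estimates: triangular(optimistic, pessimistic, most likely)
        activity_a = random.triangular(8, 15, 10)
        activity_b = random.triangular(4, 9, 5)
        total = activity_a + activity_b
        # One risk event: 20% probability of a 5-day impact
        if random.random() < 0.20:
            total += 5
        totals.append(total)
    totals.sort()
    return totals[int(0.8 * n_iterations)]  # P80 of the outcome distribution
```

Each iteration draws a fresh set of inputs from the distributions; the sorted results form the outcome distribution from which percentiles such as P80 are read.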

Top-down and bottom-up estimations

There are two main project estimation methods: top-down and bottom-up. The top-down method is quick but extremely inaccurate; the bottom-up method is slower but considerably more accurate.

The above pictures were drawn based on top-down estimations and show how long each project phase is likely to take.

The top-down approach doesn’t allow us to understand how inputs (activity uncertainties and project risks) correlate with outputs (duration and cost probability distributions), and such correlations are critical for reliable analysis. Monte Carlo Simulation applied to top-down estimations doesn’t simulate POSSIBLE outputs; it just narrows the range of uncertainties.

Drawing risk distribution pictures in a complex, scientific-looking way may create the impression of accuracy, but the reliability of such predictions is extremely low. Usually it is so low that the P80 prediction has practically zero chance of being achieved.

If a project Risk Simulation tool only works with top-down estimations, it can be used to draw nice pictures to justify the desired results but can’t calculate reliable probabilities.

#### Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

## Critical Path Method was not created for time optimisation

Some time ago, I posted a poll on LinkedIn asking about the origins of the CPM method. Here’s what the respondents said:

• 56% believe the method is all about making schedules efficient, focusing on time.
• 4% think it was originally created to save costs.
• 39% believe it was meant to optimise both time and costs.

I want to thank everyone who commented and shared links to websites and papers. Those resources are definitely worth checking out!

Certain planners have argued that optimising time inherently leads to cost optimisation as well. This assumption, however, may be inaccurate and could contribute to higher project costs. It might be worthwhile to have a separate discussion to determine whether the common saying “Time is money” holds true in the context of projects.

The initial objective of the original CPM model was to identify the lowest project cost across various potential project durations. Achieving this goal necessitated a means of balancing both project time and cost. In my view, both responses, “Cost optimisation” and “Time and Cost optimisation,” are valid.

Research Article: “Sixty years of project planning: history and future” (M. Hajdu and S. Isaac).

“The development of the CPM technique started in 1956, when the management of DuPont decided to utilize their UNIVAC 1 computer to support the maintenance work of their production plants. The management of the company wanted to prove that IT is the future, and that the money they had spent on the computer was not in vain. DuPont’s management thought that using the computer for planning and cost optimization was an excellent way to prove its utility. Morgan Walker, an engineer at DuPont, got the assignment of figuring out whether UNIVAC could be used for solving such problems.”

Since the 1960s, numerous papers on the Critical Path Method (CPM) have been published. While some proposals address specific situations, we still lack a comprehensive method to calculate schedules that are optimised for cost. When the Critical Path Method was first developed, the challenge of balancing time and cost was too complex for the existing computers and algorithms of that time to handle effectively.

Today, however, computers have become significantly more powerful, and AI solutions are reshaping different aspects of our lives. Despite these advancements, there has been limited progress in solving the challenge of cost-optimised scheduling.

In upcoming posts, we will review why the challenge is so complex and discuss whether it could be solved in the near future.

#### Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

## Artificial Activity Split Problem. Resource Levelling Challenge

In previous discussions, we explored the benefits of employing Volume Lag and Point-to-Point dependencies for emulating activities that shift in parallel. However, due to the limited support for these features in many planning tools, some planners have suggested using artificial activity splits as an alternative. This enables the application of Finish-to-Start dependencies without introducing lags between parallel activities. While this workaround resolves one issue, it potentially creates others.

Artificial Activity Split may cause two planning problems:

1. Splitting activities that must be performed without interruption into separate segments can lead to undesired results after resource levelling.
2. If parallel activities have duration uncertainties, Artificial Activity Split makes the result of the Monte Carlo Simulation unreliable.

Resource Levelling Challenge

Let’s review a scheduling fragment with six activities. All dependencies are Finish-to-Start.

Activities A and B have a volume of 80 units (10 units per day). Activity B can run in parallel with activity A after 40 units are achieved. There are different ways to simulate such a scenario. Ideally, if a planning tool supports ‘volume scheduling’ and ‘point-to-point dependencies’, multiple point-to-point dependencies could be applied:

Start-Start (40v, 0v) and Start-Start (80v, 40v).

If a planning tool only supports duration lags, it could be simulated with two dependencies:

• Start-to-Start + 4-day lag and Finish-to-Finish + 4-day lag.

Technically, it takes 15 days to deliver the work package.

This fragment requires the same ‘Resource 1’ for activities A and D on the same days (days 3-5). So, if there is only one ‘Resource 1’, resource levelling is required.

With limited resources, it takes 20 days to deliver the work package.

Now let’s review the same example but with Artificial Activity Split applied.

Activities A and B are each split into two 4-day activities. This allows the application of Finish-to-Start dependencies without lags. Technologically, activities A1 and A2, as well as B1 and B2, must be performed without interruption. Due to the resource constraint (the same as above), resource levelling is required.

A good levelling algorithm can find an opportunity to perform activity D ahead of activity A2 and deliver the fragment in 18 days instead of 20! This schedule meets all the specified planning conditions, but in practice it is not feasible, since activities A and B must be carried out without interruptions.

Splitting activities that must be performed without interruptions into separate segments is not a good idea as it can lead to undesirable results when resources are limited and resource levelling is required.
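The failure mode can be reproduced with a toy serial scheduler. This is a hypothetical three-task fragment, not the article’s six-activity example: A1 and A2 are the halves of a split activity A, task D competes for the same single ‘Resource 1’, and the scheduler picks tasks in list-priority order with no notion that A1 and A2 must be contiguous.

```python
def level_serially(tasks):
    """Schedule tasks one after another on a single shared resource.

    tasks: list of (name, duration, predecessors) in priority order.
    Returns {name: (start, finish)} in days.
    """
    schedule = {}
    resource_free = 0
    for name, duration, predecessors in tasks:
        # A task starts when the resource is free and all predecessors are done.
        start = max([resource_free] + [schedule[p][1] for p in predecessors])
        schedule[name] = (start, start + duration)
        resource_free = start + duration
    return schedule

# D is prioritised between the two halves of the split activity A.
result = level_serially([
    ("A1", 4, []),
    ("D", 3, []),
    ("A2", 4, ["A1"]),
])
# A1 runs on days 0-4, D on days 4-7, A2 on days 7-11:
# the levelled schedule interrupts activity A for three days.
```

Nothing in the model tells the scheduler that A1 and A2 belong to one continuous activity, so the levelled result is formally valid but technologically infeasible.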

We will review the second challenge, associated with Monte Carlo Simulation, in a separate post.

#### Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

## Indirect Resource Dependency

In the previous posts, we reviewed Volume Lags and Point-to-Point dependencies to demonstrate that assigned resources may impact activity and lag durations. This is also an excellent example to demonstrate indirect resource dependencies.

We reviewed a scheduling fragment with two activities and one dependency. It is easy to imagine real examples where activity A performs preparation for activity B and must stay continuously ahead by a defined volume: 20m, 10m3, 10 items, etc.

Technologically, without considering resource requirements, this example looks SIMPLE. Resource assignments add dimensions to the complexity of planning and execution of this fragment.

Practically, to be able to complete this work, it is essential to know:

Demand
• Volume of Work (We know that already: both activities are 100m)
• Technical dependency constraints (We know that already: SS + lag & FF + lag)
• Volume lag (We know that already: 40m)
• Required skills (equipment, people)
• Required materials (if any)
• Technological resource constraints: minimum & maximum quantity, minimum % workload, teams, shifts, etc.
• Space constraints (if any)
• Activity Calendar constraints (if any)

Supply
• Resource quantity limits
• Resource Productivity Rates (For each skill)
• Resource Calendar Constraints
• Cost components (If cost has a high priority in planning this work)

Resource Dependencies
• Direct resource dependencies
• Indirect resource dependencies

These parameters, and the correlations between them, are relatively easy to understand, document and add to the project delivery model. Of course, only if the project delivery tool allows that. All except Resource Dependencies: direct dependencies add a new layer of complexity, and indirect ones add another. Let’s review how.

Scenario 1: (Without Resource Dependency)
• Activities A & B require the same skill S1
• Resources (R1) with skill S1 have productivity of 2.5 m/h
• Two identical resources are available
• Each activity requires a minimum of one resource
• Maximum one resource could be assigned to each activity
• Activities & Resources have 5d*8h calendar

Work takes 7 days, and there is no opportunity to accelerate delivery.

Scenario 2 (Direct Resource Dependency)
The same as above, except:
• Only one resource is available

Work takes 10 days, and there is no opportunity to accelerate delivery.

Scenario 3 (Indirect Resource Dependency)
Activity A
• Activity A requires skill S1
• Activity A requires a minimum of one resource
• A maximum of one resource could be assigned
• Two resources with skill S1 have productivities of 2.5 m/h (R1) and 5 m/h (R2)

Activity B
• Activity B requires skill S2
• Activity B requires a minimum of one resource
• A maximum of two resources could be assigned
• All resources with skill S2 have the same productivity of 2.5 m/h (R3)

• Activities & Resources have 5d*8h calendar

Scenario 3a
Resource R1 and one R3 resource are assigned to activities A and B, respectively.

Work takes 7 days, the same as in scenario 1. Is there an opportunity to accelerate delivery now?

Scenario 3b
Resource R2 is twice as productive as resource R1. Let’s replace R1 with R2!

While the duration of activity ‘A’ is reduced to 2.5 days, the overall duration is reduced by only 1 day. This is due to the reduced duration of the SS lag; refer to the Volume Lag post for details.

Scenario 3c
The previous scenario doesn’t give us the expected result. Let’s add an additional resource to activity ‘B’ instead.

While the duration of activity ‘B’ is reduced from 5 to 2.5 days, the overall duration of work hasn’t changed. It is still 7 days. This is due to the required FF lag between activities.

Scenario 3d
The additional resource assigned to activity B also doesn’t give us the expected result. What if we combine scenarios 3b and 3c: replace R1 with R2 and add an additional R3 resource?

Applied together, the two changes give a bigger acceleration than the same scenarios applied separately (one day in scenario 3b and zero days in scenario 3c): the duration of the work is reduced to 3.5 days!

Yes! 1+0=3.5

Project planning is more complex than simple math.
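The arithmetic behind these scenarios can still be checked with a small model. This is my reconstruction, not the author’s tool: it assumes linear production, an 8-hour working day, and one consistent reading of the two volume lags that reproduces the post’s figures (the SS lag converts 40 m to time at A’s rate; the FF lag keeps B’s finish at least 40 m of A’s production time behind A’s finish).

```python
HOURS_PER_DAY = 8
VOLUME = 100.0   # metres of work in each activity
LAG = 40.0       # volume lag in metres

def work_duration(rate_a_mh, rate_b_mh, crews_b=1):
    """Total duration (days) of the A -> B fragment.

    rate_a_mh, rate_b_mh: productivity in m/h; crews_b: resources on B.
    SS + volume lag: B starts once A has produced LAG metres.
    FF + volume lag: B finishes no earlier than LAG metres of A's
    production time after A finishes.
    """
    a_per_day = rate_a_mh * HOURS_PER_DAY
    b_per_day = rate_b_mh * crews_b * HOURS_PER_DAY
    dur_a = VOLUME / a_per_day
    dur_b = VOLUME / b_per_day
    b_start = LAG / a_per_day                  # SS volume lag
    b_finish = max(b_start + dur_b,
                   dur_a + LAG / a_per_day)    # FF volume lag
    return b_finish

# Scenario 1 / 3a: R1 (2.5 m/h) on A, one R3 on B  -> 7.0 days
# Scenario 3b: R2 (5 m/h) on A                     -> 6.0 days
# Scenario 3c: two R3 crews on B                   -> 7.0 days
# Scenario 3d: both changes combined               -> 3.5 days
```

Under this reading, either change alone is absorbed by one of the two lags, while the combined change relaxes both constraints at once, which is exactly the “1+0=3.5” effect.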

Visible Resource Dependencies

These scenarios demonstrate that technological resource requirements and supply constraints impact the schedule in several ways:
• Resources R1 & R2 have the same skill but different productivity rates, which impacts the duration of activity A (scenario 3b).
• All R3 resources have the same productivity rate, but activity B can be performed with a different quantity of R3 resources, which impacts the duration of activity B (scenario 3c).
• The resource constraint (scenario 2) doesn’t impact the activity durations but creates an additional resource dependency.

These are all visible dependencies that are relatively easy to identify and analyse. Relatively, as we only analysed the interaction of two activities without the context of the overall project. If the analysed resources are required to perform other activities at the same time as this work is planned, it may trigger further changes.

Hidden Resource Dependencies
In scenario 3d we identified an indirect dependency. There is no visible dependency between resources R1/R2 and R3: they have different skills and are assigned to different activities.
However, the work was significantly accelerated only once the correlation between these resources was identified. These resources are dependent indirectly.

Project acceleration
Resources may have direct and indirect dependencies with other resources in the project in many other ways. A small change in one assignment may trigger a ‘domino’ effect and cause significant delays. This is one of the main reasons why projects are late.

However, the effect works both ways! The project may have acceleration opportunities through direct and indirect resource dependencies, but they are not easy to identify manually. Applying a project delivery tool with advanced scheduling methods and good resource levelling algorithms could significantly accelerate project delivery and reduce project cost.

#### Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

## Point-to-Point Dependency

In the previous post, we discussed the advantages of the volume lag.

For example, activity A performs preparation for activity B and must stay a defined volume ahead: 20m, 10m3, 10 items, etc.

The proposed solution was to use: Start-to-Start + Volume lag.

The ‘Volume lag’ eliminates the problem of activity B being unable to commence on time due to lower-than-expected progress of activity A. However, there is one more challenge: activity B not only has to start when the required volume is achieved; this volume must also be continuously maintained ahead of activity B.

As a quick fix, both activities may have an additional Finish-To-Finish + Volume lag dependency:

It guarantees that activity A completes ahead of activity B, but it doesn’t fix the whole problem. If the progress of activity A may not be stable, the project delivery model must reflect it. The proper solution is to apply multiple Point-to-Point dependencies to implement additional checks.

A ‘Lag’ is a shift from the Start (or Finish) date of the Predecessor activity.

Point-to-Point dependency represents shifts from a predecessor point AND a successor point simultaneously.

Each P2P dependency has two lags: Lag Out (traditional) and Lag In:

Where:
• Lag 1 – predecessor lag
• Lag 2 – successor lag
• Both lags can be duration- or volume-based
• Volume can be defined in units or as a volume %
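With linear production rates, the constraint imposed by each P2P dependency can be written down directly: the successor may not reach its successor-lag point before the predecessor reaches its predecessor-lag point. Below is a small sketch under that assumption; the function name and the 80-unit, 10-units-per-day numbers (echoing the Artificial Activity Split post) are mine, not from any planning tool.

```python
def earliest_successor_start(deps, pred_rate, succ_rate):
    """Earliest start of the successor under point-to-point dependencies.

    deps: list of (pred_volume, succ_volume) pairs -- the successor must
    not reach succ_volume before the predecessor has produced pred_volume.
    Rates are in volume units per day; production is assumed linear.
    """
    start = 0.0
    for pred_volume, succ_volume in deps:
        t_pred = pred_volume / pred_rate           # predecessor reaches Lag 1
        # Shift the successor so its Lag 2 point is no earlier than t_pred.
        start = max(start, t_pred - succ_volume / succ_rate)
    return start

# SS (40v, 0v) and SS (80v, 40v), both activities producing 10 units/day:
# the successor can start on day 4 and still satisfy both checkpoints.
print(earliest_successor_start([(40, 0), (80, 40)], 10, 10))   # 4.0
```

Note that a faster successor must start later (with a successor rate of 20 units/day the second checkpoint pushes the start to day 6), which is exactly the behaviour a single traditional lag cannot express.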

#### Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma