Doubled Resource Estimated Duration optimisation metric.

Stephen Devaux proposed several control and optimisation metrics in his book Total Project Control.

One of them, Critical Path Drag, was explained previously (read more in this post: Critical path drag). CP Drag helps to identify activities that directly contribute to project duration and measures the potential to reduce it. However, activities with positive or negative CP Drag are only good candidates for optimisation; the metric does not guarantee that the desired reduction is achievable.
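For readers who want the mechanics, here is a minimal sketch of the drag computation as it is usually stated: a critical activity’s drag is capped by its own duration and by the smallest total float among activities running in parallel. The function and numbers are illustrative, not taken from Devaux’s book.

```python
# Illustrative only: Devaux-style drag for an activity on the critical path.
def critical_path_drag(duration, parallel_total_floats):
    """Drag = duration if nothing runs in parallel, otherwise capped by the
    smallest total float among parallel activities."""
    if not parallel_total_floats:
        return duration
    return min(duration, min(parallel_total_floats))

# A 10-day critical activity with parallel paths holding 3 and 7 days of float:
print(critical_path_drag(10, [3, 7]))   # -> 3 (shortening it saves at most 3 days)
print(critical_path_drag(10, []))       # -> 10 (nothing parallel)
```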

Resource elasticity

There are many factors that impact activity duration (see the post: Critical path method challenges. Activity duration estimation). One of them is resource elasticity.

Resource elasticity

The ability to change activity duration by changing resource allocation.

Stephen recommends an additional CPM metric, Doubled Resource Estimated Duration (DRED), to measure resource elasticity. The DRED is a second duration estimate that establishes a task’s resource elasticity.

It is established by asking the question:

“If Activity X is estimated to take (20?) days with current resource estimates, how long would it take if we doubled (or could double) the resources?”

Some reasonable answers are:

  • 1D (Rare)
  • 10D (Said to be “perfectly resource elastic” — same total resource use, double the daily resources for half the time)
  • 12D (Very common situation — some “bang for the buck”, but not perfect).
  • 17D (Some value, but not huge — requires rigorous analysis of the true cost to be confident in implementing such a change).
  • 20D (Not resource elastic at all).
  • 30D (Negatively resource elastic — the resources will get in each other’s way, like too many people working in a cockpit).

Stephen recommends capturing the DRED for each activity in a secondary duration field and performing true cost analysis on them. That analysis includes CP Drag, Drag Cost and true cost computation.
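As an illustration only (the thresholds simply restate the example answers above, scaled to a 20-day estimate), a small sketch that compares the DRED captured in a secondary duration field with the current estimate and classifies resource elasticity:

```python
# Hypothetical sketch: classify resource elasticity from a DRED estimate.
# 'duration' and 'dred' are in days; categories mirror the example answers.

def classify_elasticity(duration, dred):
    if dred > duration:
        return "negatively resource elastic"    # e.g. 30D vs 20D
    if dred == duration:
        return "not resource elastic"           # e.g. 20D vs 20D
    if dred == duration / 2:
        return "perfectly resource elastic"     # e.g. 10D vs 20D
    if dred < duration / 2:
        return "better than perfect (rare)"     # e.g. 1D vs 20D
    return "partially resource elastic"         # e.g. 12D or 17D vs 20D

for dred in (1, 10, 12, 17, 20, 30):
    print(f"DRED {dred}D: {classify_elasticity(20, dred)}")
```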

PROS and CONS

The DRED metric has both advantages and downsides.

PROS:

Simplicity
The question ‘If we can double resource allocation, what gain can we get?’ is straightforward, and the answer gives a good sense of resource elasticity.

Easy to implement
Creating an extra field to capture the second duration estimate is not difficult and can be done in most planning tools.

Informative
It helps to identify activities with the highest resource elasticity.

 

CONS:

Lost effort
It requires identifying DREDs upfront for all activities, yet most of them (>90%?) will never be used. That effort is lost.

Resource quantity
Resource quantity is not the only lever for reducing activity duration. Other options may include changing activity or resource calendars, removing a material supply bottleneck, assigning resources with better productivity, reducing activity scope, changing the process, etc.

Optimistic duration
DRED is not necessarily the optimistic activity duration, which is what we actually want to know. Activity duration is usually a spectrum between optimistic and pessimistic values; DRED sits somewhere on that curve, but we don’t know where.

Monte Carlo Analysis
DRED can’t be used for Monte Carlo Simulation analysis. MCS requires optimistic, expected and pessimistic estimates; see the sketch after the CONS list.

Triple estimation
What if it is possible to triple the allocation? Should we also have a TRED metric? And if a resource with half capacity is available, could such a resource be useful?

Resource availability
DRED is a complementary metric to CP Drag, but even together the two metrics are not sufficient for optimisation. Practical optimisation depends on resource availability.

Freed-up resources
For schedule optimisation, it is important to understand if an activity can be performed with fewer resources, as freed-up resources can be used to accelerate CP Drag activities. DRED doesn’t give us this vital information.
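To make the Monte Carlo point concrete, here is a minimal sketch of what MCS consumes instead of a single DRED value: a 3-point estimate sampled from a triangular distribution. The numbers are illustrative.

```python
# Minimal Monte Carlo sketch: a 3-point (optimistic, most likely, pessimistic)
# estimate feeds a triangular distribution; a single DRED cannot play this role.
import random

def sample_duration(optimistic, most_likely, pessimistic):
    return random.triangular(optimistic, pessimistic, most_likely)

random.seed(42)
samples = sorted(sample_duration(12, 20, 35) for _ in range(10_000))
print(f"mean: {sum(samples) / len(samples):.1f} days")
print(f"P80:  {samples[int(0.8 * len(samples))]:.1f} days")
```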

Alternative approach

An alternative to DRED is Optimistic Activity Duration, known as Crash Duration in the original CPM method (read more in the post: Original critical path method and beyond).
While the DRED is simpler, Optimistic Activity Duration is more accurate and more usable.

There are four options to identify Optimistic Activity Duration:

Subjective
Collect optimistic durations from SMEs.

Statistical
Use collected statistical data.

Primary uncertainties
Calculate optimistic activity durations from primary uncertainties (volume of work, resource productivity, calendars) expressed as 3-point estimates, together with technological constraints (maximum possible assignment quantity, minimum required workload, etc.); see the sketch after this list.

Statistical primary uncertainties
The same as above but use statistical primary uncertainties.
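A minimal sketch of the third option, with assumed numbers: the duration is derived from 3-point estimates of the primary uncertainties (volume of work and productivity) under a technological cap on assignment quantity, and the optimistic, expected and pessimistic durations are read off the simulated distribution.

```python
# Hypothetical sketch: derive a 3-point activity duration from primary
# uncertainties instead of estimating the duration directly.
import random

VOLUME = (900, 1000, 1200)   # units of work: low, likely, high
PRODUCTIVITY = (8, 10, 11)   # units per resource per day: low, likely, high
MAX_ASSIGNMENT = 3           # technological constraint: max resource units

def sample(low_likely_high):
    low, likely, high = low_likely_high
    return random.triangular(low, high, likely)

random.seed(1)
durations = sorted(
    sample(VOLUME) / (sample(PRODUCTIVITY) * MAX_ASSIGNMENT)
    for _ in range(10_000)
)
print(f"optimistic (P10):  {durations[1_000]:.1f} days")
print(f"expected (mean):   {sum(durations) / len(durations):.1f} days")
print(f"pessimistic (P90): {durations[9_000]:.1f} days")
```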

Resource Supply limits

Optimistic activity duration has both a theoretical and a practical value.

It is important to know the theoretical duration, which is based on technological limitations only, and the feasible duration, which also considers resource supply constraints.

Summary

The main idea of DRED is to provide a proxy that complements CP Drag by analysing activity durations under doubled resource assignment. DRED is easy to understand and use, but it needs to be applied carefully: an optimistic activity duration may differ from the DRED.

An alternative approach is to identify 3-point activity estimates. It requires more effort but gives more accurate data for analysis and optimisation.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

Overlap activities with a positive lag challenge

I’ve heard from some planning consultants that negative lags (leads) must be prohibited and replaced with positive lags from a predecessor activity. While this workaround has certain benefits, it is not a universal way to manage activity overlaps. It is essential to consider the downsides as well.

In general, activity overlaps are beneficial as they accelerate project delivery. However, creating and managing overlaps increases the delivery model’s complexity. Projects can be delivered faster and with less funding if that complexity is effectively managed.

There are three possible ways to simulate activity overlaps in a project delivery model:

  • FSNL: Use Finish-to-Start dependencies with negative lags (leads)
  • SSPL: Use Start-to-Start dependencies with a positive lag
  • Apply artificial split of the predecessor activity

Each option carries benefits and downsides.

In the post Artificial activity split problem. Resource levelling challenge, we reviewed the downsides of an artificial split. Now let’s discuss the SS dependency with a positive lag.

In this example, ‘Activity D’ could/should/must start 1 day before the milestone.

Currently, the milestone date is driven by the completion of Activity C, so the FS-1d dependency can be replaced with SS+3d (Activity C’s 4-day duration minus the 1-day overlap). Every activity’s and the milestone’s start and finish dates stay the same.
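A minimal sketch of this equivalence, using plain day numbers and no calendars (Activity C’s 4-day duration comes from the example; everything else is illustrative): with FS-1d the successor’s start tracks the predecessor’s finish, while with SS+3d it tracks the predecessor’s start — which is exactly what Problem N1 below is about.

```python
# Toy CPM arithmetic (no calendars): compare FS-1d with SS+3d.
c_start, c_duration = 0, 4
c_finish = c_start + c_duration

d_start_fs = c_finish - 1        # FS-1d: start 1 day before C finishes -> day 3
d_start_ss = c_start + 3         # SS+3d: start 3 days after C starts   -> day 3
print(d_start_fs == d_start_ss)  # True, but only while C stays at 4 days

c_duration = 6                   # C slips to 6 days (see Problem N1 below)
c_finish = c_start + c_duration
print(c_finish - 1)              # FS-1d tracks the change: day 5
print(c_start + 3)               # SS+3d is now wrong: still day 3
```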

The SSPL workaround addresses some problems but also creates other planning simulation issues.

Advantages

There are reasons why planners may want to use SSPL instead of FSNL.

Benefit 1
For some people, it is easier to imagine a delay than an overlap.

Benefit 2
Negative lags can be used to hide project delays, and it is easier to prohibit them completely than to identify such cases.

Benefit 3
Situations where a successor activity starts before its predecessor activity look like anomalies.

In our example, the start date of Activity D is before the start date of the milestone. Practically, there is no anomaly here, as the milestone is not an activity (just a logical connector), and the start date of Activity D is later than the start dates of all predecessor activities (A, B, and C).

However, a logical anomaly is possible in other scenarios and must be addressed.

Benefit 4
An overlap may be linked to achieving a volume of work on the predecessor activity.
For example, when 80% of Activity C’s volume is achieved, the successor can start. In this case, the SS + lag dependency is the correct representation of the logic. However, to simulate it correctly, both the activity and the lag need to be volume-based, not duration-based: 80% of duration may not be the same as 80% of volume.

Unfortunately, many popular planning tools don’t support volume lags. Read more in the post: Volume lags.

Disadvantages

The SSPL approach has downsides.

Problem N1
The duration of the positive lag (SS+3d) has to stay aligned with the predecessor activity duration (4 days). There are MANY reasons why the duration of Activity C may change: a change in activity or resource calendars, a change in resource assignment, a clarified volume of work, etc. Whatever the change, the lag duration must be revised and updated; for example, if Activity C grows from 4 to 6 days, the lag must be changed from SS+3d to SS+5d to preserve the 1-day overlap. It may not be easy to identify which schedule changes impact lag durations.

Problem N2
The milestone has non-driving predecessor activities (A and B). If any of them is delayed and pushes the milestone, the start of Activity D should also be pushed. However, as Activity D is linked to Activity C only, the schedule incorrectly shows that it can start as planned. As a workaround, Activity D could have SS + lag logic to each predecessor activity, but that increases complexity even further because of Problem N1.

Problem N3
If Activity C is delayed after it commences, the delay will push the milestone but not Activity D.

Problem N4
The SS + lag logic doesn’t actually address Benefit 2: projects can still hide delays with SS + lag by not updating the lag duration correctly.

Problem N5
This workaround only works if the lag calendar is the same as the predecessor’s calendar. Popular planning tools have limited capability to configure lag calendars: Microsoft Project applies the successor’s calendar, while Primavera allows the lag calendar to be configured, but only for the whole project, and there are scenarios where the lag calendar needs to be associated with the successor rather than the predecessor.

Problem N6
The above problems may impact the results of Monte Carlo Simulation analysis. Read more in the post: Artificial activity split problem. Resource levelling challenge.

Summary

Activity overlaps are beneficial but must be correctly simulated in the project delivery model.

Application of the ‘SS + lag’ logic is relevant for some scenarios, but it is not a universal way to represent activity overlaps. The approach has downsides that erode its potential benefits.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

Resource levelling challenges. Time optimisation.

A recent LinkedIn poll and discussion revealed that planners/schedulers rarely use automatic resource levelling. They prefer to level resources manually or leave the resource constraint challenge to someone else. As some of you have expressed curiosity regarding the poll results, I have decided to expand on this topic in more detail and explain resource levelling challenges.

The resource-constrained project scheduling problem

The project resource levelling challenge is known as the resource-constrained project scheduling problem (RCPSP).

RCPSP is a classic optimization problem in project management. In this problem, the objective is to schedule a set of tasks within a specified time frame while considering the limited availability of resources.

Optimised project delivery is achieved when optimum use is made of the critical resources and the flow of work is harmonised. This was the primary focus of Henry Gantt’s work when he developed his famous chart, and it was also the mission of Kelley and Walker when they developed the original CPM system. Not long after the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM) were invented, RCPSP surfaced, capturing worldwide attention from researchers due to its broad practical relevance and intriguing combinatorial complexity. RCPSP takes into account both task (activity) dependencies and the limited availability of resources to execute a project.

In 1975, Professor Leonid Kantorovich, USSR, and Professor Tjalling C. Koopmans, USA, received the Nobel Prize in Economics for their contributions to the theory of optimum allocation of resources.

A search of the European Journal of Operational Research yields 741 articles dedicated to RCPSP, 257 of which were published in the last 5 years. This high volume is unrelated to Generative AI breakthroughs: unfortunately, the recent exponential progress in Generative AI does not help with the resource-constrained project scheduling problem, as planning is a different domain of human and artificial intelligence.

At the same time, the first AI-based systems that help optimise resource-constrained schedules were developed in the 1980s. Some of today’s planning tools can also be used for RCPSP, but as we will see below, not all of them have algorithms of sufficient quality.

The history of levelling algorithms is a fascinating story that remains ongoing and warrants a dedicated post.

RCPSP demonstration

To demonstrate the challenge, I have developed a simple scheduling fragment. The schedule has nine activities grouped into three streams, and one full-time skilled resource is required for each activity.

Three types (skills) of resources (Team A, Team B, and Team C) are needed to deliver this work.

Let’s analyse resource demand:

The project needs three ‘Team A’, two ‘Team B’, and two ‘Team C’. If they are available, the final milestone will be delivered in 120 days. The critical path is Activity A1, Activity A2, and Activity A3.

However, the developed schedule is feasible only if the required resources are available.

Let’s assume that only one ‘Team A’, one ‘Team B’, and one ‘Team C’ are available.

To develop a feasible schedule, additional resource dependencies (resource logic) need to be identified. We aim to find the combination of resource dependencies that gives the shortest overall project duration. We will discuss RCPSP with other optimisation criteria in future posts.

There are three ways to perform levelling:

  • Automatic: use a built-in scheduling algorithm
  • Manual: level resources manually
  • Semi-automatic: use an automatic levelling algorithm but manually configure activity priorities to point the algorithm in the right direction.

There are multiple ways to sequence the activities, and to be certain of finding the optimal solution we would have to try them all. As the example is relatively simple, it is possible to do so, though it is a time-consuming exercise. Real-life schedules have many more activities and resources. The overall number of combinations grows exponentially with the number of activities and resources, so even with very powerful computers it is impossible to evaluate every combination for anything but small scheduling fragments. The toy sketch below illustrates the idea.
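Here is a toy serial schedule generation scheme in Python: activities are taken in a priority order, and each one starts as soon as its predecessors finish and its team is free. The durations, teams and links are hypothetical, not the figures from my example; the point is that each priority order yields a different project length, and the number of orders explodes factorially.

```python
# Toy serial schedule generation scheme (SGS) for a small RCPSP fragment.
# Durations, teams and links are hypothetical; one unit of each team exists.
from itertools import permutations

ACTS = {  # name: (duration_days, team, predecessors)
    "A1": (10, "A", []), "A2": (30, "B", ["A1"]), "A3": (20, "A", ["A2"]),
    "B1": (40, "A", []), "B2": (10, "C", ["B1"]), "B3": (20, "B", ["B2"]),
    "C1": (10, "C", []), "C2": (30, "A", ["C1"]), "C3": (20, "C", ["C2"]),
}

def makespan(priority_order):
    """Schedule in the given order; each activity starts as soon as its
    predecessors are finished and its (single-unit) team is free."""
    finish, team_free = {}, {}
    for name in priority_order:
        dur, team, preds = ACTS[name]
        start = max([finish[p] for p in preds] + [team_free.get(team, 0)])
        finish[name] = start + dur
        team_free[team] = finish[name]
    return max(finish.values())

# Brute force over all priority orders -- feasible only for toy sizes:
# 9 activities already mean 9! = 362,880 orders to try.
best = None
for order in permutations(ACTS):
    try:
        ms = makespan(order)
    except KeyError:          # order violates precedence; skip it
        continue
    if best is None or ms < best[0]:
        best = (ms, order)
print(best)                   # shortest makespan and the order that achieves it
```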

Automatic resource levelling

Microsoft Project, Primavera P6 and Spider Project can perform resource levelling. Microsoft Project and Primavera each have one levelling method, while Spider Project has five.

Option 1

Microsoft Project, Primavera and the Standard method in Spider Project gave the same result: 168 days.

Microsoft Project

Primavera

Spider Project

The solution calculated in Spider Project and Primavera is identical. Let’s analyse it:

  • Almost all activities are on the resource-critical path;
  • Activities from the original critical path (A1, A2, A3) are still critical;
  • Team A and Team C don’t have idle time;
  • Four resource dependencies were added.

Option 2

The alternative levelling methods (‘Optimisation’ and ‘Optimisation Plus’ in Spider Project) found a different way to order the activities:

  • This solution has fewer activities on the resource-critical path; three of them have positive total float;
  • Activities from the original critical path (A1, A2, A3) are still critical;
  • ‘Team A’ and ‘Team B’ have idle time;
  • Three resource dependencies were added.

This option requires only 144 days for completion. It is 24 days (14%) shorter than Option 1.

Does this mean that Option 2 also requires less funding? It is logical to assume that quicker means cheaper, but until we calculate it, we can’t be sure. We will test this hypothesis in future posts.

Manual levelling

I asked my LinkedIn community to try to perform the resource levelling manually, provide results, and advise how much time they spent on the optimisation. Almost all participants found the 144-day solution and reported that it took about 15 minutes.

This scheduling fragment is actually VERY easy to level. As the first activity in the second stream (Activity B1) has a much longer duration than activities A1 and C1, it looks logical that starting from Activity B1 is not the best idea. That leaves only two options: start with A1->C1 or C1->A1. It reduces the number of starting options from six to two.

One of the options (C1->A1->B1) gives a better result. Additionally, it levels Team B.

Now, all we need to do is delay Activity B3. That gives us the 144-day result.

However, accomplishing this proves challenging for standard built-in levelling algorithms. Microsoft Project started with A1 -> C1 -> B1, while Primavera and Spider Project started with A1 -> B1 -> C1. Only the Optimisation method in Spider Project recognises that starting from Activity C1 yields the optimal solution.

Automatic vs Manual levelling

Through this experiment, I have tried to demonstrate that even a simple scenario could be too complex for a planning tool. Participants in the experiment discovered a better solution compared to those automatically calculated by Microsoft Project or Primavera. Experienced planners who use these systems know that automatic resource levelling is a ‘never touch’ function, as the duration of the schedule is likely to be far from optimised.

At the same time, manual resource levelling can be a practical option, applied as an exception rather than the rule. The exceptions are linear projects with one obvious critical (but limited) skill, or small scheduling fragments. Still, for cost optimisation all resources need to be optimised, not only those that impact the critical path.

In all other circumstances, manual resource levelling is not a viable option. As previously mentioned, the challenge lies in the exponential growth of combinations, rendering manual levelling impractical for broader scenarios.

If we add only three new activities to the reviewed example, Activity A4, Activity B4, and Activity C4 (Team D), it takes much longer to find an optimised solution manually.

Resource volatility

Another reason why manual levelling is not a practical option is resource volatility. Conducting a one-time optimisation is not sufficient, as resource demand and supply are subject to constant fluctuations.

Demand is volatile due to variance in performance and accuracy in planning: even if the required resource quantity is stable, the times when resources are needed are likely to differ from the original plan. Supply is volatile due to demand from other activities (or even projects) and unplanned changes (broken equipment, sick leave, etc.).

Project Delivery optimisation must be performed after each reporting cycle to ensure the project’s plan is still optimised and feasible.

In our example, altering the calendar of one resource may trigger a change in resource logic.

Re-levelling

When project conditions change and re-levelling is needed, three steps have to be performed:

  • Remove all resource dependencies;
  • Find a new optimisation solution;
  • Add new resource dependencies.

Each step is time-consuming, especially when a planning tool doesn’t provide a way to classify dependencies. This obvious feature is missing in Microsoft Project and Primavera P6.
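A sketch of what such classification could look like, under an assumed data model (the Link class and the ‘kind’ tag are hypothetical, not a feature of any named tool): tagging each dependency lets resource logic be stripped in bulk before re-levelling, while technological logic stays intact.

```python
# Hypothetical data model: tag each dependency so resource logic can be
# removed in bulk before re-levelling, leaving hard (technological) logic intact.
from dataclasses import dataclass

@dataclass
class Link:
    pred: str
    succ: str
    kind: str          # "hard" (technological) or "resource" (added by levelling)

links = [
    Link("A1", "A2", "hard"),
    Link("C1", "A1", "resource"),   # added by a previous levelling run
    Link("B2", "B3", "hard"),
]

# Step 1 of re-levelling: drop all resource dependencies in one pass.
hard_logic = [l for l in links if l.kind == "hard"]
print([(l.pred, l.succ) for l in hard_logic])
```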

It is a big help when a planning tool can perform these steps automatically. 

Optimised vs near-optimised

Once the result is calculated, we cannot be certain that the proposed option is the best possible solution; to confirm this, we would need to examine all possible combinations. Even advanced AI-powered RCPSP systems can only identify near-optimal solutions, meaning there is no guarantee that a better solution doesn’t exist. However, if a proposed solution is better than what the project team can find manually, it is beneficial to use it.

Summary

  • The resource-constrained project scheduling problem (RCPSP) presents a complex mathematical challenge. While scientists continuously generate ideas to tackle it, many of these ideas have not yet found practical implementation.
  • Different software packages produce different results for the same projects when resources are limited. Popular scheduling tools often fall short in effectively optimizing resource-constrained schedules. 
  • Resource volatility adds another layer of complexity, requiring ongoing optimisation to adapt to fluctuating demand and supply.
  • A recent LinkedIn poll highlighted the infrequent use of automatic resource levelling by planners/schedulers, who often opt for manual methods or delegate the task altogether.
  • The software that calculates the shortest resource-constrained schedules may save a fortune for its users. 

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

Quantitative Risk Analysis vs Risk Matrix

The current global project risk management maturity level could be described as a stage in which the risk management community has already learned the downsides of the Risk Matrix (RM) but has not yet learned the pitfalls of quantitative risk analysis (QRA).

Again and again, project risk consultants identify the downsides of the project RM tool and recommend QRA as the better alternative. Unconsciously, they compare the negative sides of RM with the positive sides of QRA.

Monte Carlo Simulation, the most recognised QRA method, is becoming more popular, but most risk simulation tools, and the ways they are applied, are missing some important functionality. This makes the results of such simulations unreliable.

The next stage will be the discovery of QRA downsides.

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma

FLEX Metrics, original story

When I was working on a post about the FLEX metric, I dug into the Planning Planet forum for examples and, by chance, found the original thread where the idea of the new metric was discussed. I found this story very interesting and decided to save and share it.

If you are too busy to read the full story, you can read highlighted text only. 

This story demonstrates the importance of a ‘collaborative space’ where practitioners can share ideas, challenge each other and develop new methods, techniques and tools.  

Enjoy.

Planning Planet Forum. Topic: ‘To split or not to split FF Activities’ (23 March – 16 April 2013)

Participants:

  • Rafael: Rafael Davila, Project Management guru
  • Vladimir: Vladimir Liberzon, Spider Project product owner
  • Steve: Stephen Devaux, Project Management guru

Grammar and spelling remain unchanged.


Sat, 2013-03-23 02:20 (Rafael):

For purpose of the discussion I am to Call FF Activities all duration type activities that have a FF predecessor link.

A FF link can create reverse logic computations that are relevant to scheduling strategy. If the FF link is driving the duration of the activity might be increased without this causing a delay on the project duration and at times it can even cause a reduction.

By adding a zero duration activity whose predecessors are equal to the FF Activity and whose successor is the FF Activity we can determine a duration amount by which the activity can be increased before the FF link stop being the driving link, will be equal to the free float of this zero duration split activity.

I am looking for some functionality that will disclose this duration without the need for the user to create the split. It will be computed only for activities with FF predecessors and will be used for the user to determine if the activity will be split, if the duration will be increased or left as is; a management decision, not a computer decision, on a one to one basis, one of my reasons to be an avid advocate for continuous activity scheduling.

Of course being an advocate of continuous activity models means this issue must be identified and handled in an efficient way by our continuous activity models.

Perhaps some right click functionality can create the split for the user, this split will be a new duration type activity of zero duration with same start links as the original and a second activity linked FS(0) and remainder definition similar to the original. Then the user will determine how to spread duration and resources among the activities. If there is need for more splits the functionality can always be applied to the remaining activities.

Any comments, corrections and alternate strategies will be welcomed.
Best Regards,
Rafael


Sat, 2013-03-23 23:16 (Steve):

Hi, Rafael.

We have of course discussed this previously.  You know my feeling — that the continuous activity is an assumption that:

  1. Can delay a project schedule (thus perhaps making the project less valuable, more costly, and sometimes costing human lives) for no reason other than the way the CPM algorithm was programmed in the 1960s.
  2. Is an incorrect assumption the vast majority of time — more than 99% of the time, the activity CAN be interrupted.

That said, we don’t have to debate the issue and can agree to disagree as long as the software does precisely what you are proposing! I would be supremely comfortable with such a solution! I absolutely agree that a software package that uses the continuous activity assumption should:

  1. Make the user aware of the issue and its potential impact (most users don’t have a clue about it!); and
  2. Point the user correctly to the FF (or SF, by the way!) relationship that is causing the constraint/delay.

It seems to me that the simplest way to accomplish this if the activity is on the critical path is through drag computation: a task which is causing the reverse critical path anomaly will have negative drag.

This of course makes no sense, as drag is a measure of how much an activity is pushing out the end of the project, and an activity cannot be pushing out a project by a negative amount! It is actually NOT the task that is delaying the project, but the FF or SF relationship! And that is where the software should point the user in diagnosing the cause of the project delay. Relationships and constraints, including lags, dependency relationships, calendars and calendar constraints, and resource constraints can ALL have critical path drag, and the software should diagnose it, identify the cause of the drag and measure it.

None of this, of course, addresses the case of the FF or SF relationship delaying an activity that is NOT on the critical path and therefore does not have critical path drag. It’s less important (less critical?) of course if it is off the CP, but still has an impact. That impact is to decrease the activity’s or path’s float. I believe I am correct in saying that the issue only occurs when an activity has BOTH an FF or SF predecessor AND an SS or SF successor. Perhaps the software could simply look through the network, identify any such activities, and list them for the user as areas where attention might be warranted?

So glad to see you looking at this, especially in the Spider forum, as Spider is seems to be the only s/w package that both computes drag and seems aware of this as an issue.

Fraternally in project management,

Steve the Bajan


Sun, 2013-03-24 00:36 (Rafael):

Deveaux,

What is wrong is to assume splitting is always in order, that can be anything the computer decides.

Continuous models assume nothing nor it does prevents splitting, if you want to split one such activity or all will be a management decision, once you split the activity it is better than equivalent as you still got to decide where to split 50-50, 90-10, or perhaps 60-60 (>100) as very frequently split comes at a cost and overall duration will have to be increased as well as resources.

Users of the continuous model are aware but some extra functionality might help to identify such suspects for splitting.

You said the majority of activities can be split, I would add it is not only that can be split but that it is better if you split them on the majority of cases. Of course this does not solve the issue on how to split. I would tend to split my activities on my own terms and keep the free float between the last two splits as a buffer. If the activity is neither critical or near critical I might be tempted not to spilt the activity, the benefits of continuous work will prevail.

Because criticality changes as the job progress the desirability to split or not to split might change, therefore the need for continuous monitoring of metrics that will warn you, good management is a continuous operation.

Maybe a “Merge” functionality will also be of help. Perhaps double links shall be taken into account on the split and merge functionalities.

Best Regards,
Rafael


Sun, 2013-03-24 00:49 (Steve):

Hi, Rafael.

First, I was trying to avoid getting back into our debate about the software’s assumption of continuous or non-continuous activities. We agree on almost everything, including all that you have said about making management decisions about where to split the activity.  We seem to agree that:

  1. An activity can almost always be split if it is valuable to do so.
  2. Where we split an activity should always be a management decision.
  3. The software should point the user to where the constraints are that might result in a reverse CP anomaly.

All that is agreed. The only place we disagree is:

What should be the default that the SOFTWARE, not the user/scheduler, applies in creating the schedule. Again it’s NOT a disagreement about what the user should do, only about what the algorithm, which has been programmed “generically” by someone who has not a clue about the specifics of the project, should do.

  1. You are used to the continuous activity assumption that is programmed into CPM algorithms so that you seem not to see the continuous activty default as an optional assumption. The software HAS to make an assumption and it chooses to make the assumption that the activity is continuous, which often makes the project longer in a way that is not clearly visible to most users. We seem to completely agree that this should be made more visible, and I am quite comfortable with using the continuous assumption provided that this is made highly visible and the software identifies the drag with the delaying constraint/assumption.

The other possible default that the software could use is to generate the shortest possible schedule (which most users actually think that the software is doing!). This is certainly a legitimate alternative, and is both practical and better (i.e., shorter schedule) in the vast majority of instances. But once again, the software must show the user what it is doing by: 

  1. Emphasizing the early start date of the SF/FF successor as being the prime driver;
  2. Stating that it is splitting out a milestone as the activity’s SF/FF finish; and
  3. Telling the user the amount of time being saved on the project duration through the input of the milestone. 

By the way, the Sumatra.com Project Optimizer add-on to MS Project that computes drag does all of this and, additionally, does it ONLY if the user specifically elects to implement “optimization” of the schedule by splitting out milestones on activities with SF/FF predecessors. Once the user does this, they can see the impact and then go in and change where the activity is split, if they so choose.

Again, we are in complete agreement about what the user/manager should do, and about the fact that the software should act in a transparent and helpful manner.  Our disagreement is just about the CPM algorithm’s default, which I believe should be to emphasize producing the shortest possible schedule, not assuming a longer schedule in a not-very-transparent way due to making an assumption (i.e., that the activity MUST be continuous) that is almost never valid.

Fraternally in project management,

Steve the Bajan 


Sun, 2013-03-24 03:41 (Vladimir):

Rafael,

do you mean finding minimal free float of preceding activities (in case calendars are the same)?

Sometimes to split is necessary for optimizing project schedule like in this example:


Sun, 2013-03-24 06:06 (Rafael):

I am looking for a strategy to decide when to mimic P3 contiguous scheduling. On the second figure Task B is split into intermitent work, it reduces the duration of the overall job, although in other ocassions might not. P3 solution is not controllable and do not recall disclosing the distribution of the split(s). I prefer to decide when to split and how.

I am looking for a metric that will tell me how much the duration of Task B  can be increased without delaing end of (itself) task B.

Then if I decide to split some functionality to make the split easy.


Sun, 2013-03-24 06:35 (Vladimir):

Rafael,

it is an analogue of superfloat but to the start direction.


Sun, 2013-03-24 08:56 (Rafael):

I believe both are of much value.

I am looking for a value that is independent of total float – “I am looking for a metric that will tell me how much the duration of Task B  can be increased without delaying end of (itself) task B.” Something that might happen only under reverse logic path.

Best Regards,

Rafael


Sun, 2013-03-24 18:42 (Vladimir):

Rafael,

it works in both directions. Existing Superfloat – Total Float shows duration flexibility in forward direction, you want to add the same to the backwad duration. Together they will show activity duration flexibility in both directions.

Best Regards,

Vladimir


Mon, 2013-03-25 11:27 (Rafael):

Exactly I want to see flexibility in both directions, about the math you know better than I do, most probablhy you will not have to use the dummy split to figure it out by going to the core CPM calculations.

My guess is that it shall be = Activity Early Start – Max {Early Finish + lag of all Activity Start Predecessors}


Mon, 2013-03-25 15:49 (Vladimir):

Rafael,

my intital estimate also went this way (remember I wrote about free floats of preceding activities).

Just imagine two activities linked by both SS and FF. Succeeding activity duration is smaller, preceding activity is critical (no floats, no lags).


Mon, 2013-03-25 19:54 (Rafael):

Vladimir,

But how would you name this value? Maybe on ADM literature there is a name for similar value that makes sense under ADM jargon but we are on PDM and although today PDM is dominant I have not being able to find a reference to it.

Best regards,

Rafael


Mon, 2013-03-25 21:09 (Steve):

Rafael and Vladimir,

“Together they will show activity duration flexibility in both directions.”

This will be very nice.  It would dissolve most of my concerns about computing drag with the continuous activity assumption.

“My guess is that it shall be = Activity Early Start – Max {Early Finish + lag of all Activity Start Predecessors}”

After some thinking, that seems like the right formula to me, Rafael.

And if I might be so bold as to suggest, “flex” (or “critical path flex”) would not be a bad name…

Fraternally in project management,

Steve the Bajan


Mon, 2013-03-25 23:25 (Rafael):

Steve the Bajan

Agree, I like the idea about using the term Flex/Flexibility so much because is a descriptive term that say much about the value, as usual Vladimir is on the bulls eye with his choice of words.

Perhaps Start Flex and Finish Flex (Superfloat) can be considered. The instant response by Vladimir to relate it to Superfloat makes me believe that Superfloat can be re-named to Finish Flex as to show this relationship.  In this way when a new user looks at the terms he/she will be confronted with the relationship between both values and activity duration.

Best Regards,

Rafael


Tue, 2013-03-26 04:40 (Vladimir):

Ok Guys,

Thank you for proposal!

Flex is accepted.

Regards,

Vladimir


Tue, 2013-03-26 15:57 (Rafael):

Vladimir,

Many thanks in advance, you know how tedious these computations can be if done outside the CPM but now I will have it at a click of the mouse. I will compare values after unleveled schedule run versus after leveled schedule run.

I have no doubt you have investigated this and many other similar computations.

Not sure if it makes sense to create some special functions that can make these values available for formula usage. I know of no software that allows the user to explore these computations in an easy way. 

Something like:

  1. CPM(min/max, ES/EF/LS/LF, pred/succ)
  2. others you as a developer might think

Best Regards,

Rafael


Sat, 2013-04-13 23:24 (Rafael):

Vladimir,

Thanks, it came out perfect,  AWESOME, all float types are visible at the columns selected for display as well as at the activity dialog box. Not only in hours but also in days, very convenient for when some workdays have less hour than the others.

As you can see with the Start Flex I will be able to filter active reverse logic and will be able to know available slack if I split the activity. This particular value to me is extremely relevant when using FF/SF links that can cause reverse logic. A topic I rarely see people debating, as if float values do not matter except only Total and Free Float for delay claims. My focus is not on claims but on using the model as a practical planning tool to get better plans.

[Broken image]

Under resource leveling it becomes even more interesting, the activity splits can be driven by resource dependencies, you might have activities with Start Flex and no Total Float. It is important to know how these splits will impact the schedule as in some cases starting ahead work on some split can impact other unsuspecting activities via the resource dependencies. Just another reason why intermittent work shall be transparent, another reason why interruptible schedule durations on a single activity is such a bad model.

[Broken image]

I recall long ago advanced software would show several types of float in addition to the very basic Total and Free Float, I recall Artemis by reference when it was a mainframe only software. With these values there is much understanding of the logic, with these you do not have to go and search all predecessors/successors links to find out about what is driving and near [insane and impractical for everyday procedures]. I remember the literature would tell you about how to use them. I guess I will have to do my homework and re-learn lost knowledge.

I was frustrated before finding about Spider, I was starting to believe we were moving backwards and nobody cared, or perhaps that I was wrong. But I know you would not add functionality if it does not add real value, so maybe I was not that wrong. At times I think about following the masses and move to low quality and full of bugs but best selling software, but find it impossible to lower the standard and miss such functionalities.

Guess Spider team will have to continue competing against themselves, the gap is big and continues widening.

Best Regards,
Rafael


Tue, 2013-04-16 01:53 (Rafael):

Vladimir,

Some developers insist on negative float to be embeded into the total float field in order to display criticality of unattainable date constraints. This is not necessary to show criticality, it is similar to making open ends critical without use of functionality that creates negative float.

The application of impossible date constraints to Late Dates that are prior to Early Dates distort not only Total Float values but also other float values. This makes other float values to loose meaning, make them erroneous.

I believe this might be one/main reason why some developers do not show other types of float.

Best Regards,

Rafael


Tue, 2013-04-16 03:51 (Vladimir):

Rafael,

let’s imagine that activity has total float –5. What happens if it will be delayed for 10 days?

I don’t know the answer to this question without detailed schedule analysis.

One of the options – nothing, because next activity has TF=-15.

Float values shall be useful for decision making and show what may happen with the scheduled dates in the current schedule.


Tue, 2013-04-16 08:48 (Rafael):

Vladimir,

Trying to reason the logic on negative total float and the manipulation on late dates makes no sense, in any case makes a joke. 

Take a look at the following schedule, if you run it on software that do such manipulation will give you negative float for the first two activities, will give positive free float for the first and zero for the second activity. Being zero and positive values greater than negative values it will display free float values greater than total float, it does not makes any sense.

[Broken image]

There is a logical relationship between different float definitions, calculations are based on Early and Late Dates. If you tamper Late Dates with artificial values the relationship will be broken.

If you add resource leveling it becomes beyond any comprehension.

Best Regards,

Rafael

Alex Lyaschenko

PMO | Portfolio Planning & Delivery | PMP | P3O Practitioner | AgilePM Practitioner | Six Sigma