The article discusses the long-standing reliance on the Critical Path Method for assessing delays and disruptions in project management.
For the last 50 or more years, the internationally recognised approaches to assessing delay and disruption have been based on the forensic assessment of a CPM schedule1. The premise is that there is a well-developed critical path schedule that defines the way the work of the project will be, or has been, accomplished. However, there was no concept of a critical path before 19572, and in the 21st century, there are many projects where the critical path method (CPM) is simply not used or does not represent the way the work is accomplished.
The legal concepts of delay, disruption, extensions of time, and liquidated damages (the legal framework) were defined many decades before CPM was developed. More recently, agile, lean, and other team-driven approaches to managing projects have been shown to be incompatible with the fundamental concepts of CPM. Earlier papers in this series have also shown that distributed projects, such as erecting wind farms or repairing potholes after a flood, are another type of project that has no particular requirement for the work to be undertaken in any pre-defined order, which again makes CPM suboptimal3.
The key management objective in both agile and distributed projects is optimising resource utilisation and consequently the effect of any intervening event must be considered in terms of the delay and disruption caused by the loss of resource efficiency, rather than its effect on a predetermined, arbitrary, sequence of activities.
The focus of this paper is to offer a practical solution to the challenge of assessing delay and disruption in agile and distributed projects where the traditional concept of a critical path that must be followed simply does not exist.
The need to assess delays and disruptions affecting the delivery of a project is linked to the existence of a contract requiring the predefined scope of a project to be completed to the required standards, within a defined period. If there are no contractual obligations (typical with internal projects), normal project control functions that meet the requirements of the organisation’s management are appropriate4. This requirement changes as soon as there is an external client and a contract.
When a contract is in place, the obligations defined in the contract documents are legally enforceable. The premise contained in the common law5 is that the contract defines the agreement between the two parties, and both are bound to comply with all the contract terms and conditions. This means that if one party fails to perform its obligations under the contract, the other party is entitled to be compensated for the resulting breach of the contract terms.
In law, the damages caused by a breach of contract are usually reduced to a financial payment that will, as far as practical, leave the disadvantaged party in the same position it would have been in had the breach of contract not occurred6.
The common law framework also includes a presupposition that the only way a contract can be altered is by mutual agreement. This works perfectly well for simple transactions, and in situations where the parties to the contract are willing to work together. In more complex situations, the contract document needs to be drafted to deal with a range of foreseeable issues such as the need to make changes to the scope of the contract, and how other unavoidable delays will be dealt with.
Consequently, most well drafted contracts will incorporate clauses defining the processes for:
These processes should be fair to both parties but may not be. However, Common Law expects both parties to honour the contract they have chosen to sign.
Mercantile transactions tend to be straightforward, and they shaped the Common Law for centuries. The law of contract was primarily focused on trade prior to the 1800s, and this branch of contract law continues to be important in modern times. The position started to change towards the end of the 18th century with the need for judicial oversight of contractual disputes on the increasing number of engineering and construction projects required to build the infrastructure needed for the industrial revolution.
Unlike mercantile transactions, engineering projects are far more complicated, typically take longer to complete, and the contract needs to incorporate processes to assess the effect of a change and calculate the time and money needed to offset its effects. Consequently, a new branch of contract law focused on building and engineering contracts emerged and was refined through the 19th century. Construction and engineering law continues to evolve into the 21st century, and in many jurisdictions today, has its own specialist courts, judges, legal practitioners, and allied experts. This trend is now starting to extend into the realm of IT contracts7.
As the sophistication of this branch of the law developed, drafting one-off contracts to manage engineering and construction projects became increasingly complex and expensive. This led to the development of standard forms of contract designed for use in a specific industry. Standard forms of contract tend to reduce cost, improve quality, and help develop a culture of conformance. But they also set expectations and define norms of behaviour.
Some of the earliest standard forms of building contract were created in the UK in the second half of the 1830s. Both the Builders Society (now the CIOB) and the Royal Institute of British Architects (RIBA) were formed in 1834 and were concerned with contractual matters. Together, these organisations developed a set of standard forms of contract for building projects.
One of the issues the early contracts covered was accommodating the need for the client to make changes to the building design after the contract was signed. However, for clauses enabling this type of change to be legally acceptable, there also had to be a provision for the contractor to be properly compensated for the consequences of the change. Normally this involves a financial payment together with an extension to the time allowed to complete the project8. Typically, both the cost and time are determined by agreement, or in the absence of an agreement by the decision of a named third party (typically the Architect), or, if this is not accepted, by an impartial third party (often an Arbitrator).
The inclusion of extension of time (EOT) provisions in a contract interacts with another contract component, liquidated damages (LDs). Most contracts include a clause setting a pre-defined estimate of the amount of damages the client would be entitled to receive per day if the project finishes late (pre-estimated and liquidated damages)9. LDs make recovering the cost of a late completion much simpler than proving actual damages, but the client can only recover damages for late completion if there is an enforceable contract completion date. As soon as the completion date is at large10, there is no practical way for the client to recover damages for a late completion by the contractor. Granting an EOT to compensate for any changes to the contract preserves the completion date, and therefore the right to LDs if the contract finishes later than the approved extended date for completion.
The wording of the clauses allowing the granting of an extension of time varies; a typical example from the 1930s11 is: “it shall be lawful for the engineer … to grant from time to time … such extension of time for completion … as to him may appear reasonable.” Implementing this type of contractual obligation required the architect or engineer to assess the effect of any change and then award an extension of time for the completion of the project. From the 1830s on, Court records show this function was performed regularly. While there were many disputes, when an assessment of the delay was made properly the amount of time granted by the architect does not seem to have been contested, and even when it was, the court had the power to correct the architect’s determination.
In summary, the ability of the parties and the Courts to assess delay and disruption was developed in the 100-year period before critical path scheduling was invented, and many of the cases referenced above are still the foundation of contract law in the 21st century. It therefore seems logical that extensions of time can be legally determined today without relying on a CPM schedule in situations where the critical path technique is inappropriate. The challenge is developing a way of assessing the consequences of a delay event without a CPM schedule.
CPM theory and calculations have been in widespread use for more than 65 years. CPM was developed in 1957, and by the early 1960s CPM and PERT had merged into a general approach to network scheduling12. CPM has survived essentially unchanged through to the present because CPM schedules are useful in many situations.
Critical path theory assumes there is one best sequence of discrete activities that must be completed in a pre-defined order to deliver a project successfully. This assumption creates a perception of certainty that the schedule accurately defines how the work will be accomplished, allowing the critical path and float to be calculated13. Based on these calculations, the effect of progress and delays can be assessed and apportioned between the parties to the contract. Over the last 50 or so years, the theory of CPM scheduling has underpinned:
These developments have led to two complementary frameworks for assessing delay and disruption: the AACEi Recommended Practice 29R-03 Forensic Schedule Analysis and the Society of Construction Law Delay and Disruption Protocol (2nd edition)14.
The foundation of both frameworks is a well-constructed and maintained, contemporaneous, CPM schedule, preferably developed before the work commences, although ‘as-built’ schedules created after the project finishes are used in some of the analytical processes15.
There are recognised problems with these approaches to schedule delay analysis, most notably that different experts can produce vastly different answers to the same question, either by using different analysis techniques or by using the same approach but making different assumptions. The various courts and tribunals hearing contractual disputes are well practised in resolving these differences. The approach is not perfect, but it is the best we have for projects where the basic CPM assumption of one-best-way of accomplishing the project’s work holds true.
Fundamental problems with this established approach start to arise when the intrinsic nature of the project allows different sequences of working to be adopted (and changed) without detriment to the overall progress of work. This means a CPM schedule cannot define the one best way of undertaking the work, because there are many equally viable alternatives. In some situations, there may be a CPM schedule showing the intended sequence of working at the start (but this can be easily changed); in others there is no predetermined sequencing of the work. Approaches such as lean and agile are based on encouraging flexible decision making on what to do next as the work progresses. A different paradigm for assessing delay and disruption is needed for this type of project.
In an earlier paper, Scheduling Challenges in Agile & Distributed Projects16, we developed four classifications for projects based on the applicability of the CPM approach. Under this classification system:
The problems with Class 4 projects are:
As outlined above, CPM works well in Class 1 & 2 projects, but is suboptimal in Class 3 & 4 projects. The options for managing Class 3 projects include the various forms of agile, lean, and other similar approaches. The common factor is the people doing the work decide on the next set of activities to undertake at regular, relatively short, intervals. The decisions are made based on the current situation, any overall project requirements or road map, and any identified constraints or specific sequencing issues. The planning process is iterative and is repeated until the project work is complete.
There are many different methodologies and tools designed for use in this type of project, most focus on optimising resource efficiency in the short term. Some of the more common tools in use are considered in Calculating Completion17.
Calculating Completion identifies two tools that will work effectively for assessing the status and predicting the completion of Class 3 projects (as well as Class 1 & 2):
However, neither of these tools can be used for the day-to-day control of the work (they are predictive tools). Detailed forward planning needs other techniques such as bar charts, CPM schedules, or agile planning methods.
Consequently, while ES and WPM can predict the likely project completion date accurately, and their predictions are more accurate than CPM in Class 1 & 2 projects, they cannot be used for assessing the delay caused by a specific intervening event20. Both tools look at the overall performance to date and use this information to calculate the project status and predict its completion. Delay analysis needs the segregation of both the cause and effect for each individual delay event to assign responsibility between the parties to a contract.
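As a minimal sketch of how these predictive calculations work, the following illustrates an Earned Schedule style forecast; the planned-value curve, progress figures, and function names are illustrative assumptions, not data or tooling from any actual project.

```python
# Minimal Earned Schedule (ES) sketch: all figures are illustrative assumptions.

def earned_schedule(planned_value, earned_value_now):
    """Time (in periods) at which the baseline planned value equalled the
    value earned to date, interpolating between reporting periods."""
    for t in range(1, len(planned_value)):
        if planned_value[t] >= earned_value_now:
            prev = planned_value[t - 1]
            fraction = (earned_value_now - prev) / (planned_value[t] - prev)
            return (t - 1) + fraction
    return float(len(planned_value) - 1)

# Baseline: cumulative planned value at the end of each month (months 0-10).
pv = [0, 10, 25, 45, 70, 100, 130, 155, 175, 190, 200]
ev_now = 60   # value earned after 5 months of work
at = 5        # actual time elapsed (months)
pd = 10       # planned duration (months)

es = earned_schedule(pv, ev_now)   # ~3.6 months of schedule 'earned'
spi_t = es / at                    # time-based schedule performance index
ieac_t = pd / spi_t                # forecast duration at completion

print(f"ES = {es:.2f} months, SPI(t) = {spi_t:.2f}, "
      f"forecast duration = {ieac_t:.1f} months")
```

The forecast slippage (around 14 months against a 10-month plan in this illustration) is derived from overall performance to date; nothing in the calculation identifies which events, or which party, caused the slippage, which is why a different approach is needed for delay analysis.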
The basic principles of assessing delay and disruption involve:
These basic steps do not change, but in the real world there are many complications and difficulties, including various delays occurring in parallel, work occurring out of the planned sequence, and the flow-on effect of one delay event to another. Most contracts require these issues to be resolved as the work progresses; in disputed contracts the final decisions are usually made by an Arbitrator or court long after the project has been completed21.
Courts are increasingly rejecting CPM evidence for Class 4 ‘distributed’ projects; two examples are:
This dispute involved the construction of a water treatment works comprising multiple structures. A CPM schedule was used as the basis of a time claim, and delay to two foundations was proved using a ‘windows analysis’, which is a recognised way of assessing EOTs and is included in both the SCL Delay and Disruption Protocol and the AACEi Recommended Practice No. 29R-03 Forensic Schedule Analysis. The two experts engaged by the parties disagreed only on the extent of the delay.
However, the basis of the forensic CPM analysis used by the experts was rejected by the Judge:
[Clause 185] … no evidence has been called to establish that the delaying events in question in fact caused delay to any activities on site apart from the RGF and IW buildings.
[Clause 233] … experts have agreed that the delays to the RGF and IW [foundations] were critical delays since those buildings were on the critical path of the project at the relevant time. Ordinarily therefore one would expect, other things being equal, that the project completion date would be pushed out at the end of the job by the same or a similar period to the period of delay to those buildings. However, as experience shows on construction sites, many supervening events can take place which will falsify such an assumed result. For example, the Contractor may rearrange his programme so that other activities are accelerated or carried out in a different sequence thereby reducing the initial delays.
This judgement suggests that on distributed projects where the work sequence can be changed with relative ease, the use of a predetermined CPM schedule that shows only one way of accomplishing the work will not support an overall delay claim – more information is needed.
This dispute related to the construction of a 100-lot subdivision on the NSW South Coast including the construction of roads and underground services. There were delays in approving the sewage system, and a large part of the dispute centred on the consequences of this holdup.
Delay experts were engaged by both parties to construct an as-built CPM schedule, but they disagreed on methods and their evidence was mutually contradictory. To resolve this impasse, the Court appointed its own expert, Mr McIntyre, and based many of its findings on his report.
The judgement includes:
[Clause 195] Mr McIntyre’s opinion, upon which I propose to act, is that neither method [used by the parties’ experts] is appropriate to be adopted in this case.
[Clause 196] Mr McIntyre’s opinion, upon which I propose to act, is that close consideration and examination of the actual evidence of what was happening on the ground will reveal if the delay in approving the sewerage design actually played a role in delaying the project and, if so, how and by how much. In effect, he advised that the Court should apply the common law common sense approach to causation…
Whilst there was evidence that approval of the sewer designs was delayed for a period during construction, there were no details concerning how the delayed work affected the progress of other aspects of construction. The contractor’s resources were kept busy working for the full period; therefore, there was no proof of an actual delay.
The first thing to note in both cases is that the event causing a delay to a significant element of the works was proved – work was delayed. What was not shown was whether the delay to one element of the work on a distributed project flowed through to cause a delay in the overall completion. The judges refused to accept traditional CPM analysis on the basis that the CPM activity sequence was not shown to accurately reflect the reality of what occurred. These findings do not mean there was no overall delay to the project; rather, the approach used by the claimants’ experts to demonstrate the delay based on CPM was not valid in a situation where there were many different, equally effective, ways of completing the work.
This problem affects:
In these types of projects, the traditional concept of a critical path does not exist. There may be a high-level road map outlining the desired route to completion and/or specific constraints on parts of the work but overall, there is a lot of flexibility in the way the work may be accomplished. In many cases, particularly projects using various agile methodologies, there is a deliberate management intent not to follow a predetermined sequence of activities defined in a CPM schedule.
But, without a CPM schedule, there are no generally recognised techniques for assessing delay and disruption. The SCL Delay and Disruption Protocol makes a point of separating the cost of disruption from the right to an extension of time (EOT), but its approach is still dependent on a valid CPM schedule. Without the schedule, there is no recommended approach to determining the cost of the imposed inefficiency or determining the consequential delay (if any). A new paradigm is needed.
The challenge is developing effective protocols for assessing delay and disruption in the absence of CPM. The good news is that both the SCL Delay and Disruption Protocol and the AACEi Recommended Practice 29R-03 Forensic Schedule Analysis recognise other approaches to assessing delay may be valid, and the courts are also recognising this.
As discussed above, the traditional concept of a critical path does not exist in agile and distributed (Class 3 and 4) projects. There may be a high-level road map outlining the desired route to completion and/or specific constraints on parts of the work, but there remains a significant degree of flexibility in the way most of the work is accomplished:
The courts have identified the failings in CPM when applied to distributed projects (Class 4), and the industry has identified the failings in CPM when applied to agile projects (Class 3). But without a CPM schedule, there are major challenges:
An effective solution to these problems is also likely to work on Class 1 & 2 projects, allowing the CPM schedule to be used proactively rather than contractually.
The techniques discussed in Managing Class 3 (agile and distributed) projects above achieve the first two points. Both ES and WPM24 will calculate the status of the project, and can be applied to sections of the work to assess progress by individual teams, trades, or areas of work. Then, based on the progress to date and current performance, they also calculate the completion date. However, these techniques are unlikely to provide much assistance in assessing delay and disruption; they are based on a holistic assessment of progress and cannot be used to segregate the effect of one specific event from the other deviations from the baseline.
Both the SCL Delay and Disruption Protocol and the AACEi Recommended Practice 29R-03 Forensic Schedule Analysis state other common-sense approaches to assessing delay are valid but fail to document any such approach. However, the basis of any delay and disruption25 assessment remains the same. To prove a delay, you need to show:
While the basics do not change, this paper recommends shifting the assessment of delays and disruption away from its effect on an arbitrary sequence of activities in a CPM schedule, to understanding the effect of the intervening event on the productivity of the resources working on the project. The best way to make this assessment will depend on the nature of the intervening event. Four adaptations of this basic concept are outlined below.
Delays affecting all the work of a project are the easiest to assess. Events such as project-wide industrial action, major weather events, and other similar occurrences that stop the work simply need the duration of the delay to be determined. This requires a record of the time the event started and the time it finished. Sometimes this is clear-cut; in other cases agreement may be needed, particularly around the end of the delay period if the return to work is staged over a period. Good record-keeping helps in this situation.
Many Class 3 projects are worked on by an integrated team of people, where one person can cover the work of another. This is common in many soft projects26 (particularly IT) but can occur in other situations. A team of people are assigned to deliver the project, and they work as a homogenous, cross-functional team. In this situation, a delay occurs when an intervening event reduces the productivity of the team. The event may cause a 100% loss of productivity or a partial loss. Where a partial loss of productivity occurs, this needs to be adjusted to an equivalent period of total loss.
For example, a major storm causes an evacuation of a city where an IT project is running. The starting point is usually obvious – the evacuation order. The end point, duration, and overall effect may be less clear. If the IT company has disaster management and remote working capabilities, some people may be able to resume work before the storm damage is cleaned up and the office reopened. But these people are likely to be less efficient than when the full team is working together in a fully equipped office. A proper assessment of the full delay requires the inefficiency caused by loss of productivity to be granted as a justifiable delay in addition to the period of 100% shutdown.
This means any assessment of the total delay needs to consider the percentage of resource effort lost on each day. In this example assume 8 out of the 10 people on the team can work remotely, and their productivity while working remotely is assessed at 75% of normal. This means when working remotely the team is achieving 80% x 75% = 60% productivity per day (a 40% loss of productivity).
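As a minimal sketch of this conversion, the following uses the figures from the storm example above (80% of the team working remotely at 75% efficiency); the three days of full shutdown and ten days of remote working are assumed durations added for illustration only.

```python
# Converting a partial loss of productivity into an equivalent period of total
# delay, using the storm example. The shutdown and remote-working durations are
# assumed values; the 80% / 75% figures come from the text above.

team_size = 10
remote_workers = 8
remote_efficiency = 0.75          # productivity of each remote worker vs normal

shutdown_days = 3                 # assumed: full stoppage (100% loss per day)
remote_days = 10                  # assumed: remote working before the office reopens

daily_productivity = (remote_workers / team_size) * remote_efficiency   # 0.60
daily_loss = 1.0 - daily_productivity                                   # 0.40

equivalent_delay = shutdown_days * 1.0 + remote_days * daily_loss
print(f"Productivity while remote: {daily_productivity:.0%} "
      f"(loss of {daily_loss:.0%} per day)")
print(f"Equivalent delay: {equivalent_delay:.1f} days of total stoppage")
```

On these assumptions, each remote-working day contributes 0.4 of a day of equivalent delay, so the total justifiable delay is 3 + (10 × 0.4) = 7 days.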
In other Class 3 projects, typically hard projects27, the work is delivered by a series of discrete resource teams working in coordination. The members of each team are not interchangeable, having different qualifications and skillsets. In this situation assessing the disruption to the key resource workflow may be a more appropriate way to measure delay.
In a typical hard project, there will be a few trade contractors, each responsible for a discrete element of the overall product. Normally, one of these trades is the driving resource that controls the rate of work of all the other trades working concurrently. For example:
On many projects, the driving resource that controls the overall rate of progress is likely to change throughout the project. In a windfarm, the driving resources are usually:
Peripheral activities such as the building of the switchyard, commissioning the control systems, and installing HV transmission lines, are unlikely to cause project delay. In CPM language, these activities have float. Delays or disruption affecting these sub-critical workflows are expensive but are unlikely to delay overall completion. If the work of a subcritical crew is delayed to the point where the overall project is delayed, the delay manifests in a delay to the driving resource.
The difference between a CPM project and a Class 3 project is that if the controlling resource crew cannot work on one element (e.g., a particular turbine tower), it can often simply relocate to another; work is continuous but may be less efficient. Therefore, the extent of a delay is not measured by the time work on a specific activity is stopped, but rather by its effect on the driving resource.
For example, the main crane is currently planned to work on tower erection in the sequence 14, 15, 16, 17, and 18. The planned time to relocate the crane and erect each tower is 2 days, which includes the 500 metres of travel between towers.
In this scenario, the crane has just finished Tower 14 and is derigging ready to relocate to Tower 15 when a major defect is identified in a component for Tower 15 preventing the tower being completed. Defect rectification is expected to take 4 days.
In a CPM schedule, the delay of 4 days to Tower 15 would show as a 4-day delay to the project.
In a Class 3 project there will be delays, but the sequence of work now becomes 14, 16, 17, 15, 18. There is no downtime. The delays are caused by inefficiencies:
The number of crane moves is the same; the delay is the time needed to travel the additional distance – the rate of travel of big cranes is measured in metres per hour. Assuming the towers are evenly spaced 500 metres apart, the distance involved is:
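A minimal sketch of this calculation is shown below, assuming the five towers lie along a single line at 500-metre spacing; the crane travel rate of 250 metres per hour is an assumed figure for illustration only.

```python
# Additional crane travel caused by re-sequencing the tower erection.
# Assumes the towers lie along a single line at 500 m spacing; the travel
# rate is an illustrative assumption.

SPACING = 500  # metres between adjacent towers

# Position of each tower along the line (Tower 14 at 0 m).
position = {t: (t - 14) * SPACING for t in range(14, 19)}

def travel_distance(sequence):
    """Total crane travel for a given erection sequence (metres)."""
    return sum(abs(position[b] - position[a])
               for a, b in zip(sequence, sequence[1:]))

planned = [14, 15, 16, 17, 18]
revised = [14, 16, 17, 15, 18]

extra_metres = travel_distance(revised) - travel_distance(planned)
travel_rate = 250                       # assumed crane travel rate, metres/hour
extra_hours = extra_metres / travel_rate

print(f"Planned travel: {travel_distance(planned)} m")   # 2000 m
print(f"Revised travel: {travel_distance(revised)} m")   # 4000 m
print(f"Additional travel: {extra_metres} m = {extra_hours:.0f} hours")
```

On these assumed figures, the revised sequence adds 2,000 metres of travel, or roughly 8 hours of additional crane time, rather than the 4-day delay a CPM analysis would report.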
In Class 3 projects, the assessment of any delay must be separated from the cost of the disruption caused by the same intervening event. Some delay events (e.g., wind of 15 m/sec on a wind farm) may delay the primary crane but the wind will only affect the crane if it is engaged in a heavy / high lift, not if it is in the process of relocating to the next turbine. And, where a delay affects the primary crane, it will flow onto fit out, commissioning, and completion; but will have no effect on civil works, deliveries, and the tower base erection. Whereas other events, (e.g., a severe thunderstorm) will shut down all external works.28
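The way different events map onto different resource streams can be recorded in a simple event-impact register; the sketch below is illustrative only. The resource stream names are assumptions, while the wind and thunderstorm examples follow the text above.

```python
# Illustrative event-impact register for a Class 3 windfarm project. The
# resource stream names are assumptions; the wind and thunderstorm behaviour
# follows the example in the text.

ALL_EXTERNAL_WORKS = {"primary crane", "civil works", "deliveries",
                      "tower base erection", "fit out"}

def affected_streams(event, crane_is_lifting):
    """Return the resource streams stopped by an intervening event."""
    if event == "wind_over_15ms":
        # High wind only stops the primary crane during a heavy/high lift,
        # not while it is relocating to the next turbine.
        return {"primary crane"} if crane_is_lifting else set()
    if event == "severe_thunderstorm":
        return ALL_EXTERNAL_WORKS           # shuts down all external works
    return set()

print(affected_streams("wind_over_15ms", crane_is_lifting=False))   # set()
print(affected_streams("severe_thunderstorm", crane_is_lifting=True))
```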
The effect of the delay must be real. Consider an IT project where the next sprint is intending to complete a module from the backlog involving credit card validation. There are many other work-items, the card validation just seems to be a sensible thing to do next. Then the client asks for this module to be put on hold, they are looking at decreasing the number of card types. A couple of weeks later the hold is lifted. In this situation, the only disruption is rethinking what is best to include in the next sprint, the reselection occurs, and work goes on. Agile focuses on flexibility and in this situation being flexible removes the delay. It is only after the hold has been kept in place long enough to cause an effect on the work of the resources that a delay may occur.
Managing driving resource delays. The keys to applying this type of assessment to determine project delays are:
The only significant difference between this approach and standard contract EOT clauses, is the effect of the delay is measured against resource productivity rather than CPM activities.
Some changes do not have an immediate effect on productivity or require changes to the short-term work sequence but will delay the project. These are usually changes in scope: an IT project has additional features added, or a windfarm project has a couple of additional towers added. Provided the change is made sufficiently early in the project, the only consequence is that the team has more work to do.
Delays caused by changes in volume of work can be assessed based on the planned rate of production:
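A minimal sketch of this type of assessment is shown below; the windfarm and IT figures are assumed values for illustration only, not data from any actual project.

```python
# Assessing delay from a change in the volume of work using the planned
# rate of production. All numbers are assumed figures for illustration.

def delay_from_added_scope(added_units, units_per_period, period_days):
    """Extra time needed to absorb additional scope at the planned rate."""
    return (added_units / units_per_period) * period_days

# Windfarm example: 2 extra towers, crew erects 1 tower every 2 days.
print(delay_from_added_scope(added_units=2, units_per_period=1, period_days=2))    # 4.0 days

# IT example: 60 extra story points, team delivers 40 points per 10-day sprint.
print(delay_from_added_scope(added_units=60, units_per_period=40, period_days=10)) # 15.0 days
```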
Note 1: Generally, contract law will not require a project to increase its resources to compensate for additional work. This arrangement can always be negotiated, but the legal requirement is to minimise the effect of the disruption using the available resources. Where additional or new resources are required, there are cost considerations.
Note 2: In all the above examples, the entitlement to reimbursement of costs for the delay will depend on the risk allocation in the contract.
Recognising the need for processes other than CPM to effectively assess delay and disruption is becoming increasingly important. While it is possible to develop an ‘as-built’ CPM schedule for almost any project after completion, the White Constructions v PBS Holdings judgement demonstrates this can be a highly subjective process. More importantly, effective contract management requires the effect of any intervening event to be assessed contemporaneously.
This paper has demonstrated CPM cannot provide a valid basis for assessing delay and disruption in a wide range of projects including:
The biggest challenge moving forward will be to overcome 65 years of practice welded to the view that CPM is the only way to develop and manage schedules, control projects, and assess delay and disruption. Gaining recognition that Class 3 and 4 projects need a different approach to Class 1 and 2 projects will be difficult; the approach adopted must complement the way work on the project is being managed. This paper offers one solution; there may be others.
There is more work needed in this area:
References