ng-Whatever

We’ve all done it: sat around a table dissing the previous generation of our product.  The previous set of engineers had no idea, made some stupid fundamental mistakes that we obviously wouldn’t have made.  They suck, we’re awesome.  You know what? In three or five years’ time, the next generation of stewards of the system you are creating or replacing now will be saying the same thing – of you and the awesome system you are slaving over now.

So what changes?  Is the previous generation always wrong?  Are they always buffoons who had no idea how to write software?  Unlikely.  They were just like you, at a different time, with a different set of contexts and a different set of immediate requirements and priorities.

Understanding Context

The context in which a system is created is the first critical ingredient to understand.  Look to understand the priorities, the tradeoffs and the decisions that had to be made when the system was first created.  Were there constraints that are no longer in place – were they restricted by infrastructure, memory, performance?  Were there other criteria driving success at that stage: shipping the product, managing technical debt, or covering gaps in the organization?  What was the preferred type of system back then?

Understanding these items allows you to empathize with the system’s creators and understand some of the shortcuts they may have taken.  Most engineers will attempt to do their best based on their understanding of the requirements, their competing priorities and their understanding of the best system that can be implemented in the time given.  Almost every one of these constraints forces some level of shortcut to be taken in delivering a system.

Seek first to understand the context before deciding that the previous team made mistakes.  When you hear yourself making comments about a previous team, a peer team or another group not doing things the way you would like to see them done, look for the possible reasons.  I’ve seen junior teams making rookie mistakes, teams focused on backend architectures making front-end mistakes, and device teams making simple mistakes in back-end systems.  In each of these contexts, it is fairly obvious why the mistakes would be made.  Usually, it will be within your power to identify the shortcoming, determine a possible root cause by understanding the context, and shore up the effort or the team to help smooth things over and produce a better outcome.

Constraining Your ng-Whatever

When faced with frustration over a previous system, weigh carefully whether to do a full re-write into an ng-whatever system, or to make incremental changes with some fundamental breakpoints that evolve, refactor and replace parts of the system.

It is almost guaranteed that the moment a system gets an “ng-Whatever” moniker attached to it, it becomes a panacea for all things wrong with the old system and begins to accrete not only the glorious fixes for the old system but also a persona of its own.   This persona will appear as “When we get the ng-whatever done, we won’t have this problem…”.

These oversized expectations begin to add more and more implicit requirements to the system.  Very few of these expectations will actually be fulfilled, leaving a perception of a less valuable ng-Whatever.

Common Defect Density

I’m going to come out and say that most engineering teams, no matter how much of an “Illusory Superiority” bias they may have, are going to be at best incrementally better than the previous team.  With that said, their likelihood of introducing defects in their requirements, design or implementation will be more or less the same (depending on how the software is being written this time around).

The impact will typically be that the business trades a piece of potentially battle-hardened software with known intractable deficiencies for a new piece of software whose bugs will only be ironed out in the face of production.  Even worse, there will always be a set of intractable deficiencies that are as yet unknown – only to be discovered once the new software is in production.

When the original system was created, it is highly unlikely that the engineering team knowingly baked in a set of annoying deficiencies.  Likewise, the new system will, to the best of your team’s understanding, not have any deficiencies baked into it either.  You need to make a conscious decision to take the risk that the new issues will be less painful than the old issues.  If you can’t make that call, then refactoring and re-working parts of the system might sometimes be a better solution.

 

What have your experiences been with ng-Whatevers?  Have you found that your team can reliably replace an older system with a new one, and that in a few years’ time the new system is held in higher esteem than the original?  Follow this blog for more posts, or leave comments on this topic below.

 


Code and the Written Word

A commit history is a narrated history of the code.  The ability of git rebase to reorder, rework and polish commits allows a developer (and code reviewers) to curate the history so that it tells a well-structured story.  This post will wander through how strongly the analogy can work.

TL;DR version in the slides.  Read on for the long form.

Continue reading “Code and the Written Word”

Ambiguous Requirements in the Simplest Places (and how to fix it)

Below is a photo from New Mongolian BBQ, a favorite dinner place for the family.  It is a really interesting example of an ambiguous requirement, as demonstrated by an ambiguous API.  As part of the instructions at the start of the line, patrons are encouraged to use two bowls – one for meat, and one for vegetables.

When patrons get to the end of the line for their Mongolian to be cooked, they are presented with this spot for two sets of waiting customers.  The first question that comes to mind is: I have two bowls – where do they go?

The two immediate options I see for what this means are:

  • Customers front and back: bowl 1 and bowl 2.
  • Customer 1 and customer 2.

Judging by the customers, people choose randomly between the two options above.  I generally opt for bowl 1/bowl 2 if there aren’t any bowls already up when I arrive.

So how do we take this ambiguous requirement and make it mostly obvious to most patrons?  My suggestion would be to place a thick line separating the two customer spots.  This relies on human nature: people want their bundled things bundled together.  If you look carefully at the picture, this might already be the intent, since there is a slightly larger gap between the front and back positions.

Any other suggestions on how to resolve this ambiguous requirement?  Any similar simple but confounding ambiguous requirements issues that you have found?  Post a comment below.

High Confidence/Low Information vs High Accuracy/Low Information Estimates

Quite often an estimate is needed in a low-information situation, yet a high-confidence answer is required.  For a lot of engineers, this presents a paradox.

How can I present a high confidence estimate, when I don’t have all the information?

Ironically, this issue is solved fairly easily by noting the difference between a high confidence estimate and a high accuracy estimate.  A high confidence estimate is defined by the likelihood that a task will be completed within a given timeframe, while a high accuracy estimate provides a tightly prescribed level of effort to complete the task.  This article presents a method of producing a high confidence estimate while balancing analysis effort against accuracy.

This is a refinement on the “Getting Good Estimates” posting from 2011.

The Estimation Model

The basis for this method is captured in the diagram below. The key measures on the diagram are

  • Confidence, the likelihood that the task will be completed by a given date (Task will be complete in 15 days at 90% confidence)
  • Accuracy, the range of effort for an estimate (Task will be complete in 10-12 days)
  • No Earlier Than, absolute minimum effort for a task.

Estimate Confidence Accuracy

In general, I never accept a naked estimate of a number of days. An estimate given as a range will usually imply a confidence level. An estimate given with a confidence level may or may not need an indication of accuracy, depending on the context of the estimate.
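
To make these measures concrete, here is a minimal sketch (my own illustration, not a tool from this post) of an estimate captured at its no-earlier-than, 50% and 90% points, rendered in both of the forms above. The class and field names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """A non-naked estimate: effort in days at three points on the confidence curve."""
    no_earlier_than: float   # absolute minimum effort (roughly 0% confidence)
    p50: float               # effort with ~50% confidence of completion
    p90: float               # effort with ~90% confidence of completion

    @property
    def accuracy(self) -> float:
        """Accuracy expressed as the width of the 50%-90% window, in days."""
        return self.p90 - self.p50

    def as_range(self) -> str:
        """Range form, e.g. '10-12 days' (the range implies the confidence level)."""
        return f"{self.p50:g}-{self.p90:g} days"

    def as_confidence(self) -> str:
        """Confidence form, e.g. '12 days at 90% confidence'."""
        return f"{self.p90:g} days at 90% confidence (no earlier than {self.no_earlier_than:g})"

# Example: the task from the diagram above
task = Estimate(no_earlier_than=8, p50=10, p90=12)
print(task.as_range())        # 10-12 days
print(task.as_confidence())   # 12 days at 90% confidence (no earlier than 8)
```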

Gaming out the Estimate

As a refinement to the method outlined in Getting Good Estimates, the same technique of calling out numbers can be used to pull an estimation curve out of an engineer.  The method follows the same iterative approach outlined in Getting Good Estimates.  By asking the question, “What is the confidence that the task would be complete by date xxx?”, you will end up with results similar to:

Question  Answer
What’s the lowest effort for this task?  2 weeks
What’s the likelihood it will take 20 weeks?  100% (Usually said very quickly and confidently)
What’s the likelihood it will take 10 weeks?  95% (Usually accompanied by a small pause for contemplation)
What’s the likelihood it will take 5 weeks?  70% (Usually with a rocking head indicating reasonable confidence)
What’s the likelihood it will take 4 weeks?  60%
What’s the likelihood it will take 3 weeks?  30%
What’s the likelihood it will take 2 weeks?  5%

That line of questions would yield the following graph.

Worked Estimate

I could then make the following statements based on that graph.

  • The task is unlikely to take less than 2 weeks (no earlier than).
  • The task will likely take between 4 and 8 weeks (50-90% confidence).
  • We can be confident that the task will be complete within 8 weeks (90% confidence).
  • Within a project plan, you could apply PERT (O=2, M=4 [50%], P=8 [90%]) and put in 4.3 weeks.

Based on the estimate, I would probably dive into the delta between the 4 and 8 weeks. More succinctly, I would ask the engineer, “What could go wrong that would cause the 4 weeks to blow out to 8 weeks?”.   Most engineers will have a small list of items that they are concerned about, from code/design quality and familiarity with the subsystem to potentially violating performance or memory constraints.  This information is critically important because it kick-starts your Risks and Issues list (see a previous post on RAID) for the project.  A quick and simple analysis of the likelihood and impact of the risks may highlight an explicit risk mitigation or issue corrective action task that should be added to the project.

I usually do this sort of process on the whiteboard rather than formalizing it in a spreadsheet.
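
For anyone who wants to go one step beyond the whiteboard, here is a small sketch (mine, not something referenced in the post) that takes the elicited answers from the table above and linearly interpolates the no-earlier-than, 50% and 90% points. Straight-line interpolation is coarser than reading the smooth curve, so it lands near, rather than exactly on, the 4 and 8 week figures quoted above.

```python
# Elicited answers from the table above: (weeks, confidence that the task
# completes within that many weeks).
points = [(2, 0.05), (3, 0.30), (4, 0.60), (5, 0.70), (10, 0.95), (20, 1.00)]

def weeks_at_confidence(points, target):
    """Linearly interpolate the number of weeks at a target cumulative confidence."""
    pts = sorted(points, key=lambda p: p[1])
    for (w0, c0), (w1, c1) in zip(pts, pts[1:]):
        if c0 <= target <= c1:
            frac = (target - c0) / (c1 - c0)
            return w0 + frac * (w1 - w0)
    raise ValueError("target confidence outside the elicited range")

no_earlier_than = min(w for w, _ in points)   # 2 weeks
p50 = weeks_at_confidence(points, 0.50)       # ~3.7 weeks with straight-line interpolation
p90 = weeks_at_confidence(points, 0.90)       # ~9.0 weeks with straight-line interpolation

print(f"No earlier than {no_earlier_than} weeks")
print(f"50% confidence: {p50:.1f} weeks, 90% confidence: {p90:.1f} weeks")
print(f"Accuracy window (50%-90%): {p90 - p50:.1f} weeks")
```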

Shaping the Estimate

Within the context of a single estimate, I will usually ask some probing questions in an effort to get more items into the RAID information. After asking the questions, I’ll typically re-shape the curve by walking the estimate confidences again.  The typical questions are:

  • What could happen that could shift the entire curve to the right (effectively moving the No Earlier Than point)?
  • What could we do to make the curve more vertical (effectively mitigating risks, challenging assumptions or correcting issues)?

RAID and High Accuracy Estimates

The number of days on the curve from 50% to 90% is what I use as my measure of accuracy.  So how can we improve accuracy?  In general, by working the RAID information to Mitigate Risks, Challenge Assumptions, Correct Issues, and Manage Dependencies. Engineers may use terms like “proof of concept”, “research the issue”, or “look at the code” to help drive the RAID.  I find it is more enlightening for the engineer to actually call out their unknowns, thereby making them a shared problem that other experts can help resolve.

Now, the return on investment for working the RAID information needs to be carefully managed.  After a certain point the return on deeper analysis begins to diminish and you just need to call the estimate complete.  An analogy I use is getting an electrician to quote on adding a couple of outlets and then having the electrician check the breaker box and trace each circuit through the house.   Sure, it may make the estimate much more accurate, but you quickly find that the estimate refinement is eating seriously into the time the task will take anyway.

The level of accuracy needed for most tasks is a range of 50-100% of the base value.  In real terms, I am comfortable with estimates with an accuracy of 4-6 weeks, 5-10 days and so on.  Throw PERT over those and you have a realistic estimate that will usually be reasonably accurate.

RAID and High Confidence Estimates

The other side of the estimate game deals with high confidence estimates.  This is a slightly different kind of estimate, used in roadmaps where there is insufficient time to determine an estimate with a high level of accuracy.  The RAID information is used heavily in this type of estimate, albeit in a different way.

In a high confidence estimate, you are looking for something closer to “No Later Than” rather than “Typical”.  A lot of engineers struggle with this sort of estimate since it goes against the natural urge to ‘pull rabbits out of a hat’ with optimistic estimates.  Instead you are playing a pessimistic game in which an unusually high number of risks become realized into issues that need to be dealt with.  By baking those realized risks into the estimate you can provide high confidence estimates without a deep level of analysis.

In the context of the Cone of Uncertainty, the high confidence estimate will always sit slightly on the pessimistic side.   This allows a sufficient hedge against something going wrong.

High Confidence Estimate

If there is a high likelihood that a risk will become realized or an assumption is incorrect, it is well worth investing a balanced amount of effort to remove those unknowns.  It tightens the cone of uncertainty earlier and allows you to converge faster.
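
As a rough illustration of baking realized risks into a no-later-than number, here is a sketch with made-up figures. The risk list, the impacts and the 50% realization threshold are my own assumptions, not rules from this post.

```python
# Hypothetical roadmap item: a base effort plus a quick-and-dirty risk list.
base_estimate_weeks = 6

# (description, probability of being realized, impact in weeks if realized)
risks = [
    ("Unfamiliar subsystem",        0.7, 2),
    ("Performance budget at risk",  0.5, 3),
    ("Upstream API still changing", 0.2, 4),
]

# Pessimistic game: treat any risk at or above the threshold as already
# realized and add its full impact to the estimate.
REALIZATION_THRESHOLD = 0.5

no_later_than = base_estimate_weeks + sum(
    impact for _, prob, impact in risks if prob >= REALIZATION_THRESHOLD
)

print(f"High confidence (no later than): {no_later_than} weeks")  # 6 + 2 + 3 = 11
```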

Timeboxing and Prototypical Estimates

I usually place a timebox around initial estimates.  This forces quick thinking on the engineer’s side.  I try to give them the opportunity to blurt out a series of RAID items to help balance the intrinsic urge to give a short estimate against the reality that there are unknowns that will make that short estimate wrong.  This timebox will typically be measured in minutes, not hours.  Even under the duress of a very small timebox, I find these estimates are usually reasonably accurate, particularly when they carry the caveats of risks and assumptions that are ultimately challenged.

There are a few prototypical estimates that I’ve seen engineers give out multiple times.  Below is my general interpretation of each, and the refinement steps I usually take.  These steps fit into the timebox described above.

Estimate Style – Interpretation – Refinement
  • The task will take between 2 days and 2 months – Low accuracy, low information – Start with the 2-day estimate and identify the RAID items that push it to 2 months.
  • The task will take up to 3 weeks – Unknown accuracy, no lower bound – Ask for a no-earlier-than estimate, and identify RAID items.
  • The task is about 2 weeks – Likely a lower bound, optimistic – Identify RAID items: what could go wrong.

Agree? Disagree? Have an alternative view or opinion?  Leave comments below.

If you are interested in articles on Management, Software Engineering or any other topic of interest, you can contact Matthew at tippettm_@_gmail.com via email,  @tippettm on twitter, Matthew Tippett on LinkedIn, +MatthewTippettGplus on Google+ or this blog at https://use-cases.org/.

Desk-Checks, Control Flow Graphs and Unit Testing

Recently, during a discussion on unit testing, I made an inadvertent comment about how unit testing is like desk-checking a function.  That comment was met with a set of blank stares from the room.    It looks like desk-checking is no longer something that is taught in comp-sci education these days.  After explaining what it was, I felt like the engineers in the room were having moments similar to the ones I had, just after I entered the field, when a senior engineer would talk about their early days with punch cards. I guess times have changed…

Anyway…

What followed was a very interesting discussion on what Unit Testing is, why it is important and how Mocking fills in one of the last gaps in function-oriented testing.  Through this discussion, I had my final Unit Testing light bulb moment: it all came together and went from an abstract best practice to an absolutely sane and necessary one.  This article presents a unified view of what Unit Testing is and is not, and how one can conceptualize unit tests.
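
As a small taste of the function-oriented view before the full post, here is a hedged sketch (the function and collaborator names are hypothetical, not from the post): the function is effectively desk-checked in isolation, with a mock pinning down its collaborator so that each control-flow path can be exercised directly.

```python
import unittest
from unittest.mock import Mock

def ship_order(order, inventory):
    """Ship an order if stock is available; the kind of function you would desk-check."""
    if inventory.stock_level(order["sku"]) >= order["qty"]:
        inventory.reserve(order["sku"], order["qty"])
        return "shipped"
    return "backordered"

class ShipOrderTest(unittest.TestCase):
    def test_ships_when_stock_available(self):
        inventory = Mock()
        inventory.stock_level.return_value = 10
        self.assertEqual(ship_order({"sku": "A1", "qty": 3}, inventory), "shipped")
        inventory.reserve.assert_called_once_with("A1", 3)

    def test_backorders_when_stock_short(self):
        inventory = Mock()
        inventory.stock_level.return_value = 1
        self.assertEqual(ship_order({"sku": "A1", "qty": 3}, inventory), "backordered")
        inventory.reserve.assert_not_called()

if __name__ == "__main__":
    unittest.main()
```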

Continue reading “Desk-Checks, Control Flow Graphs and Unit Testing”

Root Cause Analysis; Template and Discussion

A typical interpretation of a Root Cause Analysis (RCA) is that it exists to identify the parties responsible and apportion blame.  I prefer to treat a Root Cause Analysis as a tool to discover internal and external deficiencies and put in place changes to address them.  These deficiencies can span the entire spectrum of a system of people, processes, tools and techniques, all contributing to what is ultimately a regrettable problem.

Rarely is there a singular causal event or action that snowballs into the particular problem that necessitates a Root Cause Analysis.  Biases, assumptions, grudges and viewpoints are typically hidden baggage when investigating root causes.  Hence it is preferable to use a somewhat analytical technique when conducting a Root Cause Analysis.  An objective, analytical technique helps remove the personal biases that make many Root Cause Analysis efforts less effective than they should be.

I present below a rationale and template that I have used successfully for conducting Root Cause Analysis.   This template is light enough to be used within a couple of short facilitated meetings.  This contrasts with exhaustive Root Cause Analysis techniques that take days or weeks of applied effort to complete.  On most occasions, the regrettable event can be avoided in the future by making changes that become evident within a collective effort of a few hours to a few days.  With multiple people working on a Root Cause Analysis, this timebox allows the analysis to be completed within a day.

Continue reading “Root Cause Analysis; Template and Discussion”

Regression Isolation vs Code Diving

As developers we deal with regressions on a regular basis.  Regressions are changes introduced to a system that cause a potentially unwanted change in behaviour.  Engineers, being wired the way they are, have a tendency to want to fix first and understand later (or understand as part of the fix).  In a large number of cases, however, it is considerably more effective to isolate and understand the cause of the regression before even diving into the code to fix it.

This is a continuation of a series of blog postings I am making on regression isolation  and bisection, the first of which was  “A Visual Primer on Regression Isolation via Bisection”.  If bisection and regressions are terms that you don’t solidly understand, I strongly suggest you read the primer.

Continue reading “Regression Isolation vs Code Diving”

A Visual Primer on Regression Isolation via Bisection

Identifying regressions via bisection is one of those software debugging techniques that I find underutilized and underappreciated in the software industry.  Bisection can be used to isolate changes in anything from BIOS updates to software updates to source code changes.  This article provides a backgrounder on what bisection is, and how it is useful in identifying the point where a regression was introduced.

This is the first in a set of three posts covering regressions.
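
For a quick flavour of the technique before clicking through, here is a minimal sketch of the underlying idea (my own illustration; the builds and the test predicate are hypothetical): binary-search an ordered history for the first point where a known-good behaviour regresses.

```python
def first_bad(builds, is_bad):
    """Return the index of the first build for which is_bad(build) is True.

    Assumes the history flips from good to bad exactly once and stays bad,
    which is the same monotonicity assumption git bisect relies on.
    """
    lo, hi = 0, len(builds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(builds[mid]):
            hi = mid          # the regression is at mid or earlier
        else:
            lo = mid + 1      # the regression is after mid
    return lo

# Hypothetical usage: 1000 nightly builds, regression introduced at build 637.
builds = list(range(1000))
print(first_bad(builds, lambda build: build >= 637))  # 637, found in ~10 probes instead of hundreds
```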

Continue reading “A Visual Primer on Regression Isolation via Bisection”

Updates on “Getting Good Estimates”

This posting is an update to the Getting Good Estimates article, based on the comments received and further research from a number of sources. I include discussion on who should do the estimate and what’s included, references to other estimation techniques, and refinements on the probabilistic estimation curve with contrasts to PERT and other techniques.  There is new discussion on the “doubling of estimates”, effort vs calendar time, the funnel of uncertainty and, finally, thoughts on shaping estimates against the engineer’s experience level.

UPDATE: Improved approach to estimation in this posting.

Who Does the Estimate?

The first comment I saw was from logjam over at Hacker News, who posted the following (emphasis is mine):

When managers request software estimates from engineers, engineers should frown, look them dead in the eyes, and tell them that making estimates is a managerial/administrative task.

Interestingly, this is the polar opposite of Joel on Software’s view:

Only the programmer doing the work can create the estimate. Any system where management writes a schedule and hands it off to programmers is doomed to fail. Only the programmer who is going to implement a feature can figure out what steps they will need to take to implement that feature.

I tend to agree with Joel on this one.  The person solving the problem is in the best position to determine how long a particular task will take.  Moving away from software, people get very frustrated with cookie-cutter estimates from tradespeople that are independent of the actual effort associated with the problem.

The estimate is not a negotiated value between the engineer and the manager; it is instead a shared consensus on the effort of the task.  The engineer’s responsibility is to integrate the assumptions, the risks and their individual capability into an estimate.  The manager’s responsibility is to provide the bigger picture and the information the engineer needs to integrate, and the commitment to defend the estimate once it is agreed.

Another point, brought up by my friend and colleague Piranavan, is that estimates should be tempered by the individual’s strengths and experience.  An architect who knows the system through and through will usually be able to deliver a task in considerably less time than an intern who is new to the system.  This underscores what Joel and I say above: the estimate should really come from the person doing the work.  It can be workable to have experienced engineers create estimates, but before the estimates become plans of record, the person making the estimate needs to temper it against the individual who will actually do the work.

What’s Included in an Estimate?

Immediately below the Joel on Software quote above was the following statement.

Fix bugs as you find them, and charge the time back to the original task. You can’t schedule a single bug fix in advance, because you don’t know what bugs you’re going to have. When bugs are found in new code, charge the time to the original task that you implemented incorrectly. This will help EBS predict the time it takes to get fully debugged code, not just working code.

This concurs with what I look for in estimates.  Completed work, done… Done, done…  Hands off keyboard, done…  Delivered with minimal bugs,  done…

I look the engineer dead in the eye and ask them to put their hand on their heart and confirm that their estimate includes all the work needed for the task to be complete.  Most engineers will pause and possibly realize that there is other work, or there are unconsidered risks, that might affect the estimate.

The intent isn’t to beat the engineer up; the intent is to dig down and expose any assumptions, concerns or other issues that may affect the estimate.  Remember that the captured form of the estimate is either an explicit range or a single effort value with a confidence interval applied.

What Other Estimation Techniques Are There?

There are obviously many different techniques that can be used for estimation.  A Google query for “Software Estimation” yields 31,400,000 results, and ten pages’ worth of results in, there are still many, many different methods.  Here are a couple of interesting and accessible ones.

Planning Poker is a group consensus system.  There is a group discussion on the details regarding the task, and then everybody creates their estimates.  These are then combined to determine a group estimate.

Evidence Based Scheduling, described by Joel Spolsky in 2007, breaks estimates down to tasks of less than 16 hours.  This forces a level of design as part of the estimate; the estimates are not trusted until they get down to that timeframe.  I’d imagine that the estimate and design are revised and improved over time.  Jump down to the uncertainty funnel below for a discussion of that.

The Program Evaluation and Review Technique (PERT) provides a full system for estimation.  The methodology takes the probabilistic estimation curve and boils it down to 3 points: the optimistic, the most likely and the pessimistic.  These are then calculated into a single estimate as shown below.

Probabilistic Estimation Curve

One of the key parts of the previous post presented the characteristic curve. As part of the research for this update, I saw the curve in multiple places, from papers on terminology to NASA handbooks on estimation. The research provided a lot more nuance to estimation.  The curve is also referenced heavily as the basis for the 3-point estimation technique used in PERT.

Although I haven’t confirmed it, I believe that the probability function that closely matches this shape is that of a particular beta distribution.

Refreshing with the graph.

Notice that I’ve marked the three critical parts.

Absolute Earliest – The absolute earliest that the task could be completed. This assumes perfect understanding of the task and no unrealized risks: basically the impossible estimate.  Way too many estimates are based on this value.
Highest Confidence (engineer’s estimate) – This represents the highest confidence, and the likely point at which the task will be completed.
Mean (planning estimate) – This represents the mean of the estimate.  I’ll dig deeper into this shortly.

PERT provides a basis for determining the estimate based on the formula

Estimate = Mean = (optimistic + 4*Likely + pessimistic)/6.

This of course assumes that the pessimistic estimate is captured as a number, which in a lot of cases is quite hard to do.
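
To see how a right-skewed, beta-like curve separates the engineer’s highest-confidence point (the mode) from the planning mean, and how PERT’s three-point formula approximates that mean, here is a sketch. The Beta(2, 5) shape and the 2-12 week range are assumptions chosen purely for illustration; the post only conjectures that the curve is beta-like.

```python
# Assumed shape for illustration: a right-skewed Beta(2, 5) distribution
# scaled onto an effort range of 2-12 weeks (optimistic to pessimistic).
a, b = 2, 5
optimistic, pessimistic = 2.0, 12.0
scale = pessimistic - optimistic

# Closed-form mode and mean of a Beta(a, b), scaled onto the effort range.
mode = optimistic + scale * (a - 1) / (a + b - 2)   # highest confidence: 4.0 weeks
mean = optimistic + scale * a / (a + b)             # planning estimate: ~4.86 weeks

# PERT's three-point formula approximates the same mean from three elicited numbers.
most_likely = mode
pert_mean = (optimistic + 4 * most_likely + pessimistic) / 6   # (2 + 16 + 12) / 6 = 5.0 weeks

print(f"Mode (engineer's highest-confidence estimate): {mode:.1f} weeks")
print(f"Mean (planning estimate):                      {mean:.2f} weeks")
print(f"PERT three-point mean:                         {pert_mean:.2f} weeks")
```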

Padding Estimates

If you’ve been in software engineering for a while, you have probably heard someone say “take the estimate and double it”.   The paper by Grimstad et al actually puts this in context.  They make a similar explicit observation that the estimates for any task have a probabilistic shape with two critical points: the highest confidence and the mean.

These two points carry particular value and should be used in two different scenarios.  The highest confidence should be used by the engineer in tempering and improving their estimation.  The mean should be used within the project management team to determine the likely cost or planned effort for the project.   Both are rooted in the same estimation but are derived differently.

The estimate doubling is triggered by a gross simplification of the estimation process.  Simplifying the estimate from a probabilistic range to a single scalar value makes it easier to aggregate numbers; however, the value aggregated should be the mean rather than the highest confidence.  If you conflate the two values you will end up with a poor overall planned effort.  Remember that engineers will optimistically provide something between the absolute earliest and their highest confidence estimate, so this is generally the number used as the scalar estimate and hence as the basis for estimate doubling.

Since there is a tendency to use the highest confidence estimate as the basis for planning, and these estimates will typically be lower than the planned effort, we end up with a shortfall.  To recover from this shortfall, the simplest model is to use an arbitrary multiple.  Falling back to our probabilistic model, we see that the mean is a non-linear distance from the highest confidence estimate.  The management of risks and unknowns shapes the confidence associated with a task.

A well understood task may have a small difference between the absolute minimum, the highest confidence and the mean, while a poorly understood task will have a greater spread.  The size of the task (or the absolute minimum) carries no direct relationship to the spread of the estimates.

This removes the need to “double the estimate” for the purpose of planning.  The more nuanced mean, or planning estimate, should be used instead.  Put differently, the factor by which the engineer’s estimate is transformed into the planning estimate is proportional to the level of risk and the number of unknowns.  The higher the level of understanding of the risks and issues for a task, the lower the multiple should be.

Of course, this means that doubling the estimate may still make sense in some environments, particularly when the estimate carries a lot of unknowns and is known to be optimistic, or has not been tempered by the sorts of discussions suggested in the original article.

The Uncertainty Funnel

An implication of the shaping and discovery process I described earlier is that over time the estimates become more accurate as more information is discovered and as the project continues.

A number of papers show this in different forms.  Page 7 of the NASA Handbook of Software Estimation shows a stylized funnel, and page 46 of the Applied Software Project Management book (physically page 15 in the Chapter 3 PDF referenced) shows an iterative convergence of estimates in its discussion of the Delphi estimation model.

Both these references reiterate that estimates are not static.  Estimates should be revisited and re-validated at multiple stages within a project.  New information, changes in assumptions and changes in risk profiles will shape the estimate over time.  I’d also suggest that engineers quickly sanity-check the estimates they are working against before starting work on a new task.  The estimate will generally improve over time as risk discovery, problem understanding and task detail awareness increase its accuracy.

Visionary Tools provides an interesting observation that if you don’t see estimate uncertainty reducing over time, it is likely that the task itself is not fully understood.

Interdependencies & Effort vs Wall Time.

Piranavan highlighted another area that I had left ambiguous.  The discussion on estimates focuses on the particular effort associated with a singular task; it does not expand into managing interdependencies and their effect on meeting an estimate.  For the purposes of these articles, the estimate is the time applied to delivering the work.  External factors such as interdependencies, reprioritization, etc. should not affect the value of the estimate. Here I see the prime value of the manager being in running defence for the engineer, ensuring that they have the capability and focus to successfully deliver the work with minimal interruption or distraction.  This may mean delaying the delivery of the work, or assigning other work elsewhere.  Always remember that sometimes you need to take the pills and accept late delivery.

A further subtlety on interdependencies is that if an interdependent task is not well-defined, or is not delivered cleanly or completely, that may force rework or rescoping of tasks; consequently, it is likely that these external factors will inject bumps into the uncertainty funnel.

On Feedback Cycles and Historical References

logjam made the following point:

Managers should collect, maintain, and have access to substantial historical data upon which they can make estimates and other administrative trivia. What else are managers for? Of course, what engineers need to understand is the game at work here: making an estimate is primarily about making you commit to a date, with which you will be flogged by those asking for such an estimate.

Piranavan wrote

I also think that estimating work is something that needs to be adopted in a weekly cycle. Capturing the changing estimates are important to understand that things are changing and need to be accounted for as well as a strong feedback tool for engineers to understand where previous estimates went awry (estimates vs actual). It also gives managers a chance to understand how close engineers were and whether or not that was an estimation error or an outlier (external priority change for example).

Whilst I don’t agree with logjam’s assertion that managers should make estimates in isolation from the engineers, both of the responding comments point out the need to capture, manage and maintain estimates throughout the life of a project, and if possible to educate the engineer on how to improve the accuracy of their estimates.  That’s a topic for a later article.

Comments, suggestions or pointers are welcome below.

Getting Good Estimates

Good estimates are hard to come by.  They are typically too optimistic, too pessimistic, or simply not grounded in reality.  Here is my approach to effort estimation.  I’ve used it successfully in a number of roles and have seen engineers go from poor to reasonable to good estimators.

UPDATE: I have gathered some thoughts and comments and included them in this update.

UPDATE 2: I have an update on the methodology and some further insights in this blog post.

What I look for in Estimates

Typically, when you ask for an estimate, you will get a single value with no qualification: “The work will take 3 weeks”.  Experience has shown me that a single value implies a lack of understanding of the nuances of the problems and issues that the task might have lurking just below the surface.

When asking for an estimate, I’m looking for two things: 1) a baseline effort, and 2) a confidence interval.  This comes in one of two forms:

  1. 4 weeks of effort with 60-70% confidence
  2. 3-6 weeks of effort

Both these values are effectively the same.  I let the engineers choose whichever one they are comfortable with.

Characteristic Curve

I can’t recall when I began to understand the characteristic curve within the methodology I use for engineering estimation. I’d say that a long-term colleague, Larry Bonfada, was a strong influence on the thought process, and I have since seen similar characteristic curves in Waltzing with Bears: Managing Risk on Software Projects by Tom DeMarco and Timothy Lister.  I don’t have a sufficient background in statistics to define the shape.  Feel free to leave a comment to educate me on the distribution type.

The critical sections of the curve are described in the table below.

Section – Description – Confidence
Absolute Earliest – The absolute earliest date that the task can be complete. – 0%
Highest Confidence – The date that represents the highest likelihood of being delivered on or around. – 60%
Long Tail – Worst case scenarios; if things go wrong, this date will be hit. – <10%

Typically, engineers will choose one of those sections for their estimates.  Optimists will communicate the absolute earliest date, pessimists will go for the long tail, and your more experienced realists will go for the point of highest confidence – somewhere in the middle.

Shaping the Curve

Quite possibly you are thinking that to get this curve you have to apply painful or difficult to use models; fortunately, it’s not rocket science.  Most engineers actually have a strong gut feel for the shape of the curve, so it’s a matter of teasing out a good estimate.

The way it works is through a set of questions to the person providing the estimate.

Question  Answer
What’s the lowest effort for this task?  2 weeks
What’s the likelihood it will take 20 weeks?  1%
What’s the likelihood it will take 10 weeks?  5%
What’s the likelihood it will take 5 weeks?  30%
What’s the likelihood it will take 4 weeks?  50%
What’s the likelihood it will take 3 weeks?  60%

I intentionally use a number of extreme points (10 and 20 weeks) to drive the shape of the curve.  When graphed, it comes up similar to the curve below.

I find that most engineers will naturally have a strong gut feel for the estimates, and in the majority of cases will give numbers that result in more or less the same shape.

Now of course, there is a class of engineers who are either so cautious that they always estimate in the long tail, or so optimistic (or naive about the real effort) that they will always resist this sort of analysis.  My advice is to push through with them (or at least work out a way to interpret their estimates).

From the answers to the questions in the example above, I’d walk away with either of the agreed estimates: 2½ – 4 weeks of effort, or 3 weeks with 60-70% confidence.   Each team or organization will have its own sweet spot of acceptable range.  Tightening the estimates and getting them to the right shape usually involves a mixture of analytics and soft management skills.

Tightening the Curve by Managing Unknowns

The uncertainty in the curve represents a number of different factors, be they experience, unknown complexity, inter-dependencies and so on.

A hallmark of a large number of unknowns in this sort of analysis is an overly broad range.  I’ve had engineers give a range of 2 weeks to 3 months.  Obviously that estimate isn’t workable by any stretch of the imagination.  The engineer in this case is either being obstructionist, or hasn’t looked at (or isn’t willing to look at) the unknowns that would drive such a broad estimate range.

The types of questions that I tend to ask the person giving the estimate are along the lines of:

  1. What could happen that will prevent the absolute earliest time from occurring?
  2. What could happen that would push you from the 60% confidence date to a later date?

As each of these questions is answered and the unknown factors become more visible, you can revisit the original estimating questions.  If you are lucky, some factors will be issues that can be dealt with, or risks that can be mitigated or removed.  In addition, it is worthwhile to capture these issues and risks and have them tracked formally as part of the greater project.

I generally find repeated cycles of this sort of analysis serve to improve the estimates to the point where I am comfortable accepting the estimate into the project.  With each iteration, the discovery process either moves the overall curve to the left or the right (smaller or larger effort) or tightens the shape of the curve (increasing the confidence).

Feel free to provide feedback below on how you deal with estimates.
