
8 Ways to Decrease Risk in Project Decisions

Written by Eric Graves | January 8, 2019

So how do you increase certainty in project decisions? In part, we can increase certainty by increasing the accuracy of our impact estimates.

Small differences in our Economic Variable (impact) estimates sometimes change our decision, so accuracy in these estimates is important. The more accurate we can be, the more profitable our decisions will be. We don’t need to be perfect every time to become more profitable overall. There are a number of ways to increase the average profitability of our decisions.

  1. Make the safe bets

    To achieve the best results, we combine economic analysis with our intuition. We will naturally hesitate more when there are bigger risks and costs involved, and we will naturally choose to execute decisions we have greater confidence in. Probability Analyses help us see these risky and costly situations clearly.

    To control risk and reward explicitly, we develop decision-making strategies that clearly limit our risk. For example, we can establish a safety factor by choosing to pursue only those decisions that have a >$500K Expected Value and/or >X% ROI. We can also establish spending limits, allowing low-expense decisions to move forward quickly while high-expense decisions are held up for more scrutiny.

    An additional strategy is to base decisions on worst-case expectations, or almost worst-case conditions such as midway between mean and worst-case (see previous post). Often, even in these pessimistic conditions, we find the analysis leads us to a different (and better) decision than our intuition would. Each of these safe decisions increases our overall profitability.
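    As a minimal sketch of what such a decision rule might look like in code (the thresholds, field names, and the midway-between-mean-and-worst-case screen below are illustrative assumptions, not a prescribed policy):

```python
from dataclasses import dataclass

# Illustrative thresholds -- an organization would set its own.
MIN_EXPECTED_VALUE = 500_000     # pursue only decisions with > $500K expected value
MIN_ROI = 0.25                   # and/or > 25% return on investment
FAST_TRACK_SPEND_LIMIT = 10_000  # low-expense decisions move forward without extra scrutiny

@dataclass
class Decision:
    name: str
    worst_case_value: float  # economic impact under pessimistic assumptions
    mean_value: float        # economic impact under expected assumptions
    expense: float           # cost to execute the decision

def screen(decision: Decision) -> str:
    """Classify a decision using conservative (near worst-case) assumptions."""
    # Screen midway between mean and worst case, per the strategy described above.
    conservative_value = (decision.mean_value + decision.worst_case_value) / 2
    roi = (conservative_value - decision.expense) / decision.expense

    if decision.expense <= FAST_TRACK_SPEND_LIMIT:
        return "fast-track"               # small bets proceed quickly
    if conservative_value > MIN_EXPECTED_VALUE and roi > MIN_ROI:
        return "pursue"                   # a safe bet even under pessimistic assumptions
    return "hold for more scrutiny"       # everything else gets a closer look

print(screen(Decision("Add feature X", worst_case_value=300_000,
                      mean_value=900_000, expense=150_000)))  # -> pursue
```

    The particular numbers matter less than the fact that the limits are explicit, so every decision is screened against the same conservative criteria.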

  2. Avoid guesstimates

    As shocking as it may seem, we have come to the realization that, as individuals, we are not very good at predicting the future. Even so-called experts are not very good at predictions based on gut-feel alone. For example, if I remember correctly, four out of the five Monday Night Football (MNF) announcers ended this season 8-and-9 in their predictions. An octopus could do as well. I’m not able to confirm this, as I cannot find the data. I suppose it isn’t surprising that their inability to pick winners isn’t featured on the MNF home page.

    The same is true in politics, baseball scouting, stock markets, and many, many other complicated and complex systems. Our minds are biased and even the experts aren’t able to process all of the available data in their heads and guts. Again, I’ll refer you to ‘The Signal and the Noise’ by Nate Silver and ‘The Wisdom of Crowds’ by James Surowiecki for more on the topic.

    Sometimes we have no choice but to use guesstimates, and certainly an economic analysis based on guesstimates is better than no economic analysis at all. It is fine to use them in a first pass, to get in the right ballpark or quickly pick out the safe bets. However, especially where an incorrect value would cost us much, guesstimates should not be considered our final answer unless we have no means to measure.

  3. Base estimates on real data

    One critical way to increase accuracy in our impact estimates is to be data-driven. There are many cases where our past can help us better predict the future. A good example came up just this past week when I needed to estimate a development task. I originally estimated one day, based on gut-feel and my memory of the effort required.

    Then it occurred to me that we have done this before and the data was at my fingertips. I pulled up Playbook and within a minute I saw that my estimate was off by 100%. That is, the task had actually averaged two days over the seven data points we had. In fact, the minimum was one day and the maximum was four days.

    My memory- and gut-based estimate (guesstimate) was at the minimum. Expecting the ideal case is human nature. Our memory of the past is clouded by our natural tendency to remember the good and not the bad, and we often think “we’ll get ‘em next time.” (Any fellow Cubs fans out there?)

    When we actually completed the task, it came out at two and a half days – much closer to the two-day average. Was it a coincidence that it was so close to the average? Yes. But it was absolutely not a coincidence that it was more than my one-day estimate. Almost all of the actual occurrences took longer than my estimate, which is so often the case.
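    A minimal sketch of that kind of check, with hypothetical durations standing in for the seven data points described above:

```python
from statistics import mean

# Hypothetical stand-ins for the seven actual durations (in days) pulled from past records.
past_durations = [1.0, 1.5, 2.0, 2.0, 2.5, 1.0, 4.0]

gut_estimate = 1.0  # the original memory-based estimate

print(f"min: {min(past_durations)} days, max: {max(past_durations)} days")
print(f"average: {mean(past_durations):.1f} days")
longer = sum(d > gut_estimate for d in past_durations)
print(f"{longer} of {len(past_durations)} past occurrences took longer than the gut estimate")
```

    Even a one-minute query like this replaces a single gut-feel number with the actual distribution of past outcomes.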

  4. Recognize, accept and counteract natural bias

    As mentioned above, we are naturally biased toward the ideal, the optimistic, “the happy path” (as it’s called by one director of development we work with). When we recognize this fact and accept that even our experts’ brains are unable to see and assimilate all of the information perfectly, then we can go seek some actual data without feeling discredited. I can freely admit my estimates might be way off, and, honestly, doing so is much easier and more profitable than believing they are accurate.

    This is a theme we see most recently in Lean Startup, though it has threads back to the beginning of Lean. Even the experts get things wrong often enough that we are better off basing our decisions on real data, generated as quickly as possible. We accept our imperfections, we embrace them, and then we go learn.

  5. Build better project models

    Other important tools for good input estimates are good, predictive models of our projects. The most common and important of these in hardware development companies is the project schedule. Just as a finite element model predicts the impact of a load on our product design, our project schedule attempts to predict the impact of a change on our launch date. However, project schedules are typically very poor models of our projects. The list of deficiencies is quite long, so I won’t bother listing them all right now.

    Project schedules can be good models if they accurately predict the work required based on actuals from the past and include accurate and adequate resource availability. They can be good models if they are up-to-date and correctly reflect the actual criticality of each resource and activity. And they can be good models if good Project Risk Management is integrated into the plan and into the overall process. (Not just any Project Risk Management -- good Project Risk Management.)
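    One way to make a schedule behave more like a predictive model is to drive it with actuals rather than single-point guesses. As an illustrative sketch (a simple Monte Carlo resampling, not a feature of any particular tool, with made-up activity names and durations):

```python
import random

# Illustrative history: actual durations (in days) observed for each activity
# on the critical path of similar past projects.
history = {
    "design": [8, 10, 12, 15, 9],
    "build":  [20, 25, 22, 30],
    "test":   [5, 7, 6, 9, 12],
}

def simulate_finish_dates(n_trials: int = 10_000) -> list[int]:
    """Resample each activity from its own actuals and sum along the critical path."""
    return [
        sum(random.choice(samples) for samples in history.values())
        for _ in range(n_trials)
    ]

totals = sorted(simulate_finish_dates())
print(f"median finish: {totals[len(totals) // 2]} days")
print(f"85th-percentile finish: {totals[int(len(totals) * 0.85)]} days")
```

    Because every trial draws from real past durations, the result reflects the spread we have actually experienced rather than a single optimistic date.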

  6. Incorporate an optimism factor in your work estimates

    It is difficult to predict perfectly all of the work we will need to do in a project or activity, especially when doing something new. Some work which could be predicted isn’t, and usually some work is simply unpredictable. Actual work and duration consistently exceed our predictions, even for the best-laid plans. Expecting perfect plans is a recipe for disappointment and lost profits. Expecting them to be imperfect is the better option.

    Basing decisions on predicted work without applying any optimism factor will result in a consistent underestimate of the work, which will sometimes result in decisions to work on things we should forgo. To determine our Optimism Factor, we first take a baseline of the predicted work in our project or activity and assess how risky the project or activity is (that is, how uncertain the work is). Then we measure actual work and compare it to the baseline.

    For example, we can determine that actual work averages 20% higher than predicted on medium-risk projects, and apply that conversion to future work estimates on similarly risky projects. Optimism Factors can also be determined for unit cost and expense predictions.
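    A minimal sketch of how an Optimism Factor might be derived and applied (the project data and the medium-risk bucket are hypothetical):

```python
# Hypothetical baseline (predicted) vs. actual work, in person-days,
# for past projects assessed as medium risk.
medium_risk_history = [
    {"predicted": 100, "actual": 118},
    {"predicted": 250, "actual": 305},
    {"predicted": 80,  "actual": 97},
]

# Optimism Factor = average ratio of actual to predicted work for this risk class.
optimism_factor = sum(
    p["actual"] / p["predicted"] for p in medium_risk_history
) / len(medium_risk_history)
print(f"Optimism Factor (medium risk): {optimism_factor:.2f}")  # ~1.20, i.e. ~20% higher

# Apply the factor to the raw estimate for the next similarly risky project.
raw_estimate = 120  # person-days, predicted bottom-up
print(f"Adjusted estimate: {raw_estimate * optimism_factor:.0f} person-days")
```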

  7. Use proxy variables to estimate sales impact

    Proxy Variables, such as Story Points and Points Burn Rate used in Agile Software Development, are another good way to estimate impact. For those not familiar, I’ll summarize here:

    When asked how long a feature will take, some Agile Software teams assign a point value. The point value is a subjective, gut-feel representation of how much work is expected and how risky it seems. Then, over a few weeks, we measure how many points’ worth of features the team produces to calculate the average Points Burn Rate. With this we can predict the impact of the next new feature. For example, if the new feature is rated at 40 points and the team produces about 20 points per week, the new feature would add about two weeks.
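    A minimal sketch of that arithmetic, with illustrative weekly point totals standing in for a real team’s history:

```python
from statistics import mean

# Illustrative history: points completed in each of the last several weeks.
weekly_points_completed = [18, 22, 19, 21]

burn_rate = mean(weekly_points_completed)  # average Points Burn Rate, in points per week
new_feature_points = 40                    # the team's gut-feel rating of the new feature

print(f"Burn rate: {burn_rate:.0f} points/week")
print(f"Estimated schedule impact: {new_feature_points / burn_rate:.1f} weeks")
```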

    With Proxy Variables, we are able to combine multiple factors into a single rating. For example, Story Points in Agile Software Development combine riskiness and expected work into a single rating, while the Burn Rate accounts for resource availability. There are many pros and cons to Story Points, and the same can be said of Proxy Variables in general.

    There is much debate in the world of Agile Software Development about whether Story Points and Points Burn Rate are the best option. I would like to throw my view into the ring, but this post is long enough already, so I will save that for another day. I will simply say that Proxy Variables are best used when we cannot actually measure the individual variables. Where we can measure the individual variables (like predicted work, unpredicted work, and availability), we get a better estimate. And for duration estimates, we can measure the individual factors.

    However, for sales volume and pricing impact estimates, where the individual factors are hard to measure directly, I believe proxy variables can produce more accurate estimates. For example, we could ask people within the organization and/or our customers to rate the value of a new feature on a scale from 1 to 5. Then we measure the actual sales impact we attain and establish a “% volume increase as a function of predicted value rating.” We must be careful to avoid Vanity Metrics (see Lean Startup) and drill down to the lowest level we can measure in order to get good numbers.

    This method works better when collective ratings are used, rather than the ratings of a few individuals, as I touched on earlier. The Wisdom of Crowds is powerful, but only if the crowd is large enough, with enough variety and independence in it. Better yet, use Lean Startup methods to gather objective customer feedback without them even knowing it.
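    As a sketch of the rating-to-impact conversion described above (the ratings, measured volume increases, and the simple linear conversion are all hypothetical):

```python
from statistics import mean

# Hypothetical history: the average value rating (1-5) collected for each past
# feature, paired with the % sales volume increase actually measured after release.
past_features = [
    {"rating": 2.0, "volume_increase_pct": 1.5},
    {"rating": 3.5, "volume_increase_pct": 4.0},
    {"rating": 4.5, "volume_increase_pct": 6.5},
]

# Calibrate "% volume increase per point of predicted value rating" from actuals.
increase_per_rating_point = mean(
    f["volume_increase_pct"] / f["rating"] for f in past_features
)

# Apply the calibrated conversion to the ratings gathered for a new feature.
new_feature_ratings = [4, 5, 3, 4, 4]
predicted_increase = mean(new_feature_ratings) * increase_per_rating_point
print(f"Predicted sales volume increase: {predicted_increase:.1f}%")
```

    The conversion here is deliberately simple; the important part is that the rating is calibrated against measured sales outcomes rather than left as an opinion.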

  8. Have a predictable, lean, smooth flowing system

    The last key ingredient for accurate impact estimates and confident economic decisions is a smooth-flowing, predictable product development system. We use economics and Cost of Delay (COD) first to make it smooth and fast, and then we use economics and COD to keep it that way. In turn, this makes our analyses more correct and more profitable. It is a self-reinforcing loop, and one of the closed loops we need to control our system and keep it flying in the right direction.

Easier said than done?

Yes, I suppose some of these are easier said than done. However, the tool you use can help (or hurt) a lot. If your tool is PLAYBOOK, you can easily capture and track actual work and tasks, in detail, and reuse this information to build better plans and make better decisions in the future. You can also easily measure true availability of each resource, and use that to better control and predict the future. The list goes on, but I'll just leave it at that.

Summary

The more data-driven we can be, and the better our predictive models, the more profitable our decisions will be. With a better understanding of resource throughput from Theory of Constraints, Queues from Flow, and our customers from Lean Startup, and the improved control we gain from Lean, Agile, and Next Generation product development, we are building better models and becoming more data-driven every day. Combined with a little practice, we can reap the enormous benefits of Economic Decision Making.

We build this capability one step at a time, and there is no time like the present to get started. Continue with us for the last post in this series where we will review some important steps and factors in implementing Economic Decision Making in your organization.

-----

Ready to create your project economic model to make profit-driven project decisions? Download the free spreadsheet.


Related Posts

Introduction to Cost of Delay

What is Cost of Delay?

Cost of Delay: How to Calculate It

Cost of Delay and Project Modeling

Cost of Delay Project Model Examples

Cost of Delay Project Modeling Risk

Cost of Delay and Strategic Advantage

Cost of Delay: Project decisions based on profit

8 Ways to Decrease Risk in Project Decisions

14 Tips for Calculating Cost of Delay

WSJF and How to Calculate It

Don Reinertsen on cost of delay

Wikipedia on cost of delay

Guide to Cost of Delay

Editor's note: This post was originally published in 2015 and has been updated for accuracy and comprehensiveness.