Eric Graves

Aerospace and mechanical engineer turned NPD systems engineer, Eric spends his time engineering better product development systems, using PLAYBOOK as his tool of choice!

Cost of Delay: 8 ways to increase certainty in new product development decisions (part 9)

Eric Graves - 03/01/17 06:30 PM

This is Part 9 in a 10-part series of posts discussing the Cost of Delay and making objective decisions in new product development, based on economic analysis. 

Understanding the cost of delaying your product launch is key to maximizing profit.

In this part we discuss how to increase certainty in our economic decisions, in part by increasing the accuracy of our impact estimates. In order to get the most out of this post, please read Calculating Economic Value to Increase Profit (Part 7) and Value and Risk in New Product Development Economic Decisions (Part 8).

Why do we care about accuracy in economic variable estimates?

In order to demonstrate the importance of certainty in our decisions and impact estimates, let’s review what happened when our inputs varied in the example we used previously. In this example, our decision is whether to add a new feature to our product, which we initially estimated would increase sales volumes by 5% and delay launch by one month.

We initially calculated the impact (Expected Value) of adding the new feature to be about $300K in additional profit.

5% * $150K per % - 4.3 weeks * $100K per week - $16K expenses
= $304K

Based on the significantly positive impact on profit, we would decide to move forward with the feature. However, once we refined our input estimates, we determined that we would see only about a 3% (3.1%) increase in sales volume, resulting in essentially no additional profit.

3.1% * $150K per % - 4.3 weeks * $100K per week - $16K expenses
= $19K

Additionally, if our one-month delay is closer to the worst case of two months, even a 5% sales increase would not get us to our break-even point.

5% * $150K per % - 8 weeks * $100K per week - $16K expenses
= -$66K

In fact, if we only achieve a 3% sales gain with a two-month delay, we lose about $366K in profit.

3% * $150K per % - 8 weeks * $100K per week - $16K expenses
= -$366K

Seeing the lack of profitability with this decision, we would decide against the new feature.
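The scenario arithmetic above is easy to capture in a few lines. A minimal Python sketch, with all amounts in $K and the coefficients taken from the example ($150K of profit per % of sales lift, $100K Cost of Delay per week, $16K of expenses):

```python
# Expected Value of the new-feature decision, in $K, using the
# coefficients from the example above.
def expected_value(sales_lift_pct, delay_weeks,
                   profit_per_pct=150, cost_per_week=100, expenses=16):
    return sales_lift_pct * profit_per_pct - delay_weeks * cost_per_week - expenses

print(round(expected_value(5, 4.3)))    # initial estimate: 304
print(round(expected_value(3.1, 4.3)))  # refined sales lift: 19
print(expected_value(5, 8))             # two-month delay: -66
print(expected_value(3, 8))             # both worse: -366
```

Running the four scenarios side by side makes the sensitivity obvious: a two-point swing in sales lift or a one-month slip in launch is enough to flip the decision.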

In short, small differences in our Economic Variable (impact) estimates sometimes change our decision, and accuracy in these estimates is important. The more accurate we can be, the more profitable our decisions will be, and we don’t need to be perfect every time to become more profitable overall. There are a number of ways to increase the average profitability of our decisions.

You can also view our complete guide to cost of delay here.

1. Make the safe bets

To achieve the best results, we combine economic analysis with our intuition. We will naturally hesitate more when there are bigger risks and costs involved, and we will naturally choose to execute decisions we have greater confidence in. Probability Analyses help us see these risky and costly situations clearly.

To control risk and reward explicitly, we develop decision-making strategies that clearly limit our risk. For example, we can establish a safety factor by choosing to pursue only those decisions which have a >$500K Expected Value and/or >X% ROI. We can also establish spending limits, allowing low-expense decisions to move forward quickly, where high-expense decisions are held up for more scrutiny.

An additional strategy is to base decisions on worst-case expectations, or almost worst-case conditions such as midway between mean and worst-case (see previous post). Often, even in these pessimistic conditions, we find the analysis leads us to a different (and better) decision than our intuition would. Each of these safe decisions increases our overall profitability.
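Such a strategy can be made explicit as a simple decision gate. In this sketch, every threshold (the $500K Expected Value floor, the 50% ROI floor, the $50K fast-track expense limit) is a hypothetical policy choice for illustration, not a recommendation:

```python
# Illustrative decision gate; all thresholds are assumed policy values.
def decision_gate(expected_value_k, expense_k,
                  ev_floor_k=500, roi_floor=0.5, fast_track_expense_k=50):
    if expense_k <= fast_track_expense_k:
        return "fast-track"   # low-expense decisions move forward quickly
    roi = expected_value_k / expense_k
    if expected_value_k >= ev_floor_k and roi >= roi_floor:
        return "approve"
    return "escalate"         # held up for more scrutiny

print(decision_gate(19, 16))    # fast-track
print(decision_gate(800, 200))  # approve
print(decision_gate(304, 200))  # escalate
```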

2. Avoid guesstimates

As shocking as it may seem, we have come to the realization that, as individuals, we are not very good at predicting the future. Even so-called experts are not very good at predictions based on gut feel alone. For example, if I remember correctly, 4 of the 5 Monday Night Football (MNF) announcers ended this season 8–9 in their predictions. An octopus could do as well. I'm not able to confirm, as I cannot find the data. I suppose it isn't surprising that their inability to pick winners is not featured on the MNF home page.

The same is true in politics, baseball scouting, stock markets, and many, many other complicated and complex systems. Our minds are biased and even the experts aren’t able to process all of the available data in their heads and guts. Again, I’ll refer you to ‘The Signal and the Noise’ by Nate Silver and ‘The Wisdom of Crowds’ by James Surowiecki for more on the topic.

Sometimes we have no choice but to use guesstimates, and certainly an economic analysis based on guesstimates is better than no economic analysis at all. It is fine to use them in a first pass, to get in the right ballpark or quickly pick out the safe bets. However, especially where an incorrect value would cost us much, guesstimates should not be considered our final answer unless we have no means to measure.

3. Base estimates on real data

One critical way to increase accuracy in our impact estimates is to be data driven. There are many cases where our past can help us better predict the future. I had a good example come up just this past week when I needed to estimate a development task. I originally estimated 1 day, based on gut-feel and my memory of effort required.

Then it occurred to me that we have done this before and the data was at my fingertips. I pulled up PLAYBOOK and within a minute I saw that my estimate was off by 100%. That is, the task actually averaged two days over the seven data points we had. In fact, the minimum was one day and the maximum was four days.

My memory- and gut-based estimate (guesstimate) was at the minimum. Expecting the ideal case is human nature. Our memory of the past is clouded by our natural tendency to remember the good and not the bad, and we often think "we'll get 'em next time." (Any fellow Cubs fans out there?)

When we actually completed the task, it came out at two and a half days, much closer to the two-day average. Was it a coincidence that it was so close to the average? Yes. But it was absolutely not a coincidence that it was more than my one-day estimate. Almost all of the actual occurrences took longer than my estimate, which is so often the case.
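The lookup itself is trivial to automate. A minimal sketch, with a hypothetical task history invented to match the numbers in the story (seven data points, minimum one day, maximum four, average two):

```python
from statistics import mean

# Hypothetical task history, in days, chosen to match the story above.
past_durations = [1, 1, 1.5, 2, 2, 2.5, 4]

gut_estimate = 1                      # the original gut-feel estimate
data_estimate = mean(past_durations)  # what the record actually says

print(data_estimate)                             # 2.0
print(min(past_durations), max(past_durations))  # 1 4
print(data_estimate / gut_estimate)              # estimate was off by 100%
```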

4. Recognize, accept and counteract natural bias

As mentioned above, we are naturally biased toward the ideal, the optimistic, “the happy path,” (as it’s called by one director of development we work with). When we recognize this fact and accept that even our experts’ brains are unable to see and assimilate all of the information perfectly, then we can go seek some actual data without feeling discredited. I can freely admit my estimates might be way off, and, honestly, doing so is much easier and more profitable than believing they are accurate.

This is a theme we see most recently in Lean Startup though it has threads back to the beginning of Lean. Even the experts get things wrong often enough that we are better off basing our decisions on real data, generated as quickly as possible. We accept our imperfections, we embrace them, and then we go learn.

5. Build better project models

Other important tools for good input estimates are good, predictive models of our projects. The most common and important form of these in hardware development companies is a project schedule. Like a finite element model predicts the impact of a load on our product design, our project schedule attempts to predict the impact of a change on our launch date. However, project schedules are typically very poor models of our project. The list of deficiencies is quite long so I won’t bother listing them all right now.

Project schedules can be good models if they accurately predict the work required based on actuals from the past and include accurate and adequate resource availability. They can be good models if they are up-to-date and correctly reflect the actual criticality of each resource and activity. And they can be good models if good Project Risk Management is integrated into the plan and into the overall process. (Not just any Project Risk Management -- good Project Risk Management.)

6. Incorporate an optimism factor in your work estimates

It is difficult to predict perfectly all of the work we will need to do in a project or activity, especially when doing something new. Some work which could be predicted isn’t, and usually some work is simply unpredictable. Actual work and duration consistently exceed our predictions, even for the best laid plans. Expecting perfect plans is a recipe for disappointment and lost profits. Expecting them to be imperfect is the better option.

Basing decisions on predicted work without applying any optimism factor will result in a consistent underestimate for work, which will sometimes result in decisions to work on things we should forgo. To determine our Optimism Factor, we first take a baseline of the predicted work in our project or activity and assess how risky the project/activity is (how uncertain the work is). Then we measure actual work and compare it to the baseline.

For example, we can determine that actual work averages 20% higher than predicted on medium-risk projects, and apply that conversion to future work estimates on similarly risky projects. Optimism Factors can also be determined for unit cost and expense predictions.
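The calculation itself is a simple ratio. A sketch, with hypothetical predicted-vs-actual figures invented so the ratios average out to the 20% example above:

```python
from statistics import mean

# Hypothetical predicted vs. actual work (hours) from past
# medium-risk projects; the figures are invented for illustration.
predicted = [100, 200, 150]
actual    = [115, 250, 180]

# Optimism Factor: average ratio of actual work to predicted work.
optimism_factor = mean(a / p for a, p in zip(actual, predicted))  # ~1.2

# Apply it to the next raw estimate on a similarly risky project.
next_raw_estimate = 80
adjusted = next_raw_estimate * optimism_factor  # ~96 hours
```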

7. Use proxy variables to estimate sales impact

Proxy Variables, such as Story Points and Points Burn Rate used in Agile Software Development, are another good way to estimate impact. For those not familiar, I’ll summarize here:

When asked how long a feature will take, some Agile Software teams assign a point value. The point value is a subjective, gut-feel representation of how much work is expected and how risky it seems. Then, over a few weeks, we measure how many points' worth of features the team produces to calculate the average Points Burn Rate. With this we can predict the impact of the next new feature. For example, if the new feature is rated at 40 points and the team produces about 20 points a week, the new feature would add about two weeks.
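The burn-rate forecast above can be sketched in a couple of lines; the three-week measurement window (60 points completed) is an assumed detail:

```python
# Burn-rate forecast: divide the feature's point rating by the
# team's measured points-per-week.
def predicted_duration_weeks(feature_points, completed_points, weeks_measured):
    burn_rate = completed_points / weeks_measured  # points per week
    return feature_points / burn_rate

# A 40-point feature for a team burning about 20 points a week:
print(predicted_duration_weeks(40, 60, 3))  # 2.0 weeks
```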

With Proxy Variables, we are able to combine multiple factors into a single rating. For example, the points rating in Agile Software Development combines riskiness and expected work, and the Burn Rate accounts for resource availability. There are many pros and cons to Story Points, and similar can be said for Proxy Variables in general.

There is much debate in the world of Agile Software Development about whether Story Points and Points Burn Rate are the best option. I would like to throw my view into the ring but this post is long enough already, so I will save that for another day. I will simply say that Proxy Variables are best used when we cannot actually measure the individual variables. But where we can measure the individual variables (like predicted work, unpredicted work, and availability), we get a better estimate. And for duration estimates, we can measure the individual factors.

However, for sales volume and pricing impact estimates, I believe proxy variables can produce more accurate estimates. For example, we could ask people within the organization and/or our customers to rate the value of a new feature on a scale from 1 to 5. Then, we measure the actual sales impact we attain and establish a "% volume increase as a function of predicted value rating". We must be careful to avoid Vanity Metrics (see Lean Startup) and drill down to the lowest level we can measure in order to get good numbers.

This method works better when collective ratings are used, rather than the ratings of a few individuals, as I touched on earlier. The Wisdom of Crowds is powerful, but only if the crowd is large enough, with enough variety and independence in it. Better yet, use Lean Startup methods to gather objective customer feedback without them even knowing it.
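One way to sketch such a calibration, with entirely hypothetical ratings and measured lifts (a real calibration would need far more data points and a sufficiently large, independent crowd):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical history: (crowd value rating 1-5,
# measured % sales volume increase after launch).
history = [(2, 1.0), (2, 1.4), (4, 3.0), (4, 3.4), (5, 5.2)]

# Average measured lift at each rating level gives the
# "% volume increase as a function of predicted value rating".
lifts_by_rating = defaultdict(list)
for rating, lift in history:
    lifts_by_rating[rating].append(lift)
calibration = {r: mean(lifts) for r, lifts in lifts_by_rating.items()}

# Predicted lift for a new feature the crowd rates a 4:
print(calibration[4])  # ~3.2% volume increase
```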

8. Have a predictable, lean, smooth flowing system

The last, key ingredient to accurate impact estimates and confident economic decisions is a smooth flowing, predictable product development system. We use economics and COD to first make it smooth and fast, and then we use economics and COD to keep it that way. In turn, this makes our analyses more correct and more profitable. It is a self-reinforcing loop, and one of the closed-loops we need to control our system and keep it flying in the right direction.

Easier said than done?

Yes, I suppose some of these are easier said than done. However, the tool you use can help (or hurt) a lot. If your tool is PLAYBOOK, you can easily capture and track actual work and tasks, in detail, and reuse this information to build better plans and make better decisions in the future. You can also easily measure true availability of each resource, and use that to better control and predict the future. The list goes on, but I'll just leave it at that.

Summary

The more data-driven we can be, and the better our predictive models, the more profitable our decisions will be. With a better understanding of resource throughput from Theory of Constraints, Queues from Flow, and our customers from Lean Startup, and the improved control we gain from Lean, Agile, and Next Generation product development, we are building better models and becoming more data-driven every day. Combined with a little practice, we can reap the enormous benefits of Economic Decision Making.

We build this capability one step at a time, and there is no time like the present to get started. Continue with us for the last post in this series where we will review some important steps and factors in implementing Economic Decision Making in your organization.

Want to learn more about how to calculate cost of delay? Download the eBook, "Profit Driven Development and Cost of Delay: A Guide for Decision Makers."

  • Part 1: How can the Cost of Delay improve product development velocity and increase profits?

  • Part 2: What is the definition of Cost of Delay?

  • Part 3: How to quickly estimate the Cost of Delay.

  • Part 4: How to quickly estimate the other terms generally required to create a project economic model.

  • Part 5: Examples of how to calculate the Cost of Delay and the subsequent ROI of making some common project trade-offs.

  • Part 6: Important factors to consider when implementing a project economic model and how to manage uncertainty of model inputs.

  • Part 7: How to calculate the economic value of a decision quantitatively.

  • Part 8: How to communicate and account for uncertainty in each decision's impact on launch date, sales volume, and the other Economic Variables.

  • Part 9: How to increase certainty in our economic decisions, in part by increasing the accuracy of our impact estimates.

  • Part 10: Critical success factors for implementing Project Economic Modeling and Economic Decision Making.

Related Posts: 

WSJF and How to Calculate It

WSJF and Architectural Runway

Don Reinertsen on cost of delay

Wikipedia on cost of delay

Editor's note: This post was originally published in 2015 and has been updated for accuracy and comprehensiveness.