Proving Things Work in Business – It’s Not That Hard

Start out by …

  • Realizing most business managers didn’t get to where they are on “feelings and thinking.” They are numbers people. If you provide them with “ideas” without proof that your ideas will produce measurable results, they will not listen to you. Although they might say they are risk takers (actions speak louder than words), they want to take calculated risks. They need proof that what you propose has worked, or that something similar (each of the individual components) has worked in the past.
  • Knowing the difference between a simple correlation (good) and proving cause and effect (better)
  • Defining your terms in painful detail (using numbers) to ensure we are all talking about and comparing precisely the same thing
  • Knowing that no matter how much work you put into benchmarking, if what you propose isn’t much different from what the competitor does, it doesn’t provide a competitive advantage
  • Knowing that when you provide any number without other numbers to compare it to, you are not providing anything of value
  • Understanding that if you can’t show the process used is logical (in a process map)…you can’t prove anything

Dumb things commonly put forward as proof (don’t fall into the trap)

  • Common usage (or “everyone uses it”)
  • Anything that uses only “words” and presents no data
  • Articles that say it has worked elsewhere
  • Experts that say something works without data to show their accuracy or track record
  • Any statement that begins with “I believe, I feel, or I think”
  • Opinions are not facts. Opinions not gathered in a scientific manner from experts with a proven track record are worthless
  • Surveys of people who use it (and say it works) are generally useless because few will admit that what they do is dumb. Results (data) from a cross section of users may be proof, but responses to a survey are opinions from unproven amateurs
  • Vendors who say it works are suspect with or without data; even when they provide data, it is suspect if it comes from less than 10% of the users
  • Textbooks or professors who say it works are likely to be out of date
  • Anecdotal evidence (a story or single incident) is still only a single incident and it is only as good as the accuracy of the person’s memory
  • Asking people whether they “think” it worked, or asking why it worked, without looking at the data to back it up, is a questionable approach

A good start

  • Prove a correlation between the utilization of the program and a change in productivity, output, profit, etc. Show there is a high correlation (above .6) between the factor and success/performance/profit (a minimal calculation sketch follows this list)
  • Benchmark to see if research (with data) shows it works at other firms. Identify the program characteristics, the measures used, and the methodology used to collect the data
  • Compare this year’s performance to last year’s. Look at industry comparisons to show how much performance has changed as a result of the implementation…relative to the change in the rest of the industry
  • Conduct a split sample by implementing the solution in a fraction of the business, then collect metrics across the enterprise to show a performance differential between groups that did and did not receive the solution (also illustrated in the sketch after this list)
  • Implement the program on a trial basis, then see what percentage of managers would pay for it on a fee for service basis
  • Internal experts that have been accurate in similar recent predictions (accurate more than 75% of the time) say it will work
  • Outside experts (consultants or practitioners) can estimate the impact using a repeatable and proven model
  • Look for academic laboratory studies (or controlled environments) that use data to show it generally works
  • The very top performing firms in our industry use it and have proof (data) that it works
  • The program’s rank in a forced-ranking survey of managers who are asked what factors (among miscellaneous programs) contributed most to productivity/profitability improvement
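
For readers who want to see what the correlation and split-sample checks above look like in practice, here is a minimal Python sketch. All of the utilization, productivity, and group figures are invented for illustration; only the .6 threshold comes from the list above. Substitute your own metrics and cut-offs.

```python
# Minimal sketch: Pearson correlation between program utilization and a
# performance metric, plus a simple split-sample (pilot vs. control) comparison.
# All numbers below are made-up illustrations, not real benchmarks.
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical monthly data: program utilization (%) and units produced per employee.
utilization = [22, 35, 41, 48, 55, 63, 70, 78]
productivity = [101, 104, 103, 109, 112, 115, 118, 121]

r = pearson(utilization, productivity)
print(f"correlation = {r:.2f}")
print("passes the .6 threshold" if r > 0.6 else "too weak to claim a relationship")

# Split sample: the same metric collected from units that got the program vs. those that didn't.
pilot_group = [118, 121, 117, 123, 120]     # received the solution
control_group = [106, 104, 109, 103, 107]   # did not
print(f"pilot mean = {mean(pilot_group):.1f}, control mean = {mean(control_group):.1f}")
```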

Real proof

Start with multi-year baseline performance data (with credible numbers). Then show the other variables that may impact any cause-and-effect relationship and prove that they are being isolated or controlled. Show that the underlying “assumptions” are not changing and that they will not impact the past or current cause-and-effect relationship. Then use some of these “tools”:

  • Triangulation (using three different measurement methodologies, each with independent data, that produce the same outcome)
  • Use of a pilot (small trial) where a clearly defined subject group and control group exist
  • Out-In-Out. Establish metrics and collect them for a set period of time, then introduce the new program for the same period of time. Once the program is removed, continue to collect the metrics for the specified time period. The impact should be clearly discernible as a spike when the data is graphed (a rough sketch follows this list)
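
Here is a minimal sketch of the Out-In-Out check, assuming invented weekly figures for the baseline, program, and post-removal periods; in practice you would plot your own metrics and look for the spike described above.

```python
# Minimal sketch of the Out-In-Out check described above. The weekly figures
# are invented for illustration; in practice you would use your own metrics.
from statistics import mean

baseline = [100, 102, 99, 101, 100, 103]    # "out": before the program
program = [109, 112, 111, 114, 113, 115]    # "in": program running
after = [103, 101, 102, 100, 104, 102]      # "out": program removed

for label, series in (("before", baseline), ("during", program), ("after", after)):
    print(f"{label:>6}: mean = {mean(series):.1f}")

# A clear spike during the "in" period, followed by a drop back toward the
# baseline once the program is removed, is the pattern you are looking for.
spike = mean(program) - (mean(baseline) + mean(after)) / 2
print(f"estimated program effect: {spike:.1f} units")
```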

Note: Even if it does produce results, make sure it does not have unintended consequences (negative “side” effects that were not expected…for example, gaining weight after you stop smoking) that might outweigh the benefits of the original program. Continually check the environment to ensure that the basic conditions are not changing.

NOT EVERYTHING CORRELATES TO PROFIT – AND WHEN IT DOES, PROVING IT CAN BE DIFFICULT

When you are trying to prove that any individual tool or strategy actually "works" in business, there is a natural tendency to assume that all good things "cause" profit to occur. It’s a nice notion, but a flawed one!

Profit is a big thing

The number of factors that influence profit is immense. As a result, it is often hard to prove that any particular tool or strategy had a direct impact on it. This doesn't mean that these tools don't impact profits, just that the way most corporations collect data and attribute performance makes it difficult to directly link any particular tool or action to an increase in profit.

For example, research and development may do a phenomenal job at modeling a next-generation, category-killing product, only to have that lead eroded by a delay in the migration from design to production to sales and delivery. This delay could result in a competitor getting to market more quickly, thereby complicating the picture of what could have, would have, and should have been attributed as a correlation between R&D actions and profit. Another complication is that because so many other unrelated things can increase corporate costs, any impact of a new product may be overridden or mitigated by increased costs from totally unrelated business areas.

Relationships are hard to prove

Proving that something you do in business (like monitoring the environment effectively) "causes" profits to increase is a difficult task. That doesn't mean that monitoring the environment doesn't save money or increase revenue; it just means the relationship is complex and hard to prove. The fact that something correlates with profits does not mean that the activity caused profits to increase. There are many things that correlate with each other but that do not guarantee an effect on one another. A high positive correlation (above .6) certainly indicates a high possibility that something might drive profit, but there are too many other factors involved to say that any correlation directly proves something caused profit to increase.
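
To make that point concrete, here is a tiny sketch with two made-up series that both simply trend upward over time. Neither drives the other, yet their correlation comes out well above .6, which is exactly why a high correlation alone proves nothing about cause and effect.

```python
# Two invented series that both happen to trend upward over eight quarters.
# Neither drives the other, yet they correlate almost perfectly.
from math import sqrt
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (
        sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))

cafeteria_coffee_sales = [410, 425, 440, 470, 480, 500, 520, 545]   # cups per quarter
quarterly_profit = [2.1, 2.2, 2.4, 2.5, 2.7, 2.8, 3.0, 3.1]         # $M, invented

print(f"correlation = {pearson(cafeteria_coffee_sales, quarterly_profit):.2f}")
# A value near 1.0 here clearly does not mean coffee caused the profit.
```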

An example where correlations fail

One area where a strong correlation tends to exist is executive compensation and corporate performance. With a few exceptions (Disney for one), increases in executive compensation correlate with increases in corporate performance. The mere relationship does not prove that increases in executive compensation will cause an increase in profits, or that an increase in profits will cause an increase in executive compensation, although the latter is certainly feasible!

The solution

The secret to proving a profit impact starts with breaking down the different impacts or outputs of the tool you're using to increase profits.

For example, monitoring and forecasting the environment can only be proven to increase profits if:

  • The environmental monitoring and forecasting are accurate; and

  • Managers actually use the data from the forecasts to change the way they manage; and

  • Increases in profit correlate with increases in the frequency of changes attributed to environmental monitoring and forecasting (a rough sketch of these three checks follows this list).
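
As a rough illustration, the sketch below walks through those three checks in order. The forecast-accuracy and usage thresholds (80% and 50%) are arbitrary assumptions for the example; only the .6 correlation threshold comes from the discussion above, and every number is invented.

```python
# Rough sketch of the three checks above, using invented example data.
# The 80% accuracy and 50% usage thresholds are illustrative assumptions.
from math import sqrt
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (
        sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))

# 1. Were the forecasts accurate?
forecast_hits, forecast_total = 17, 20
accurate = forecast_hits / forecast_total >= 0.80

# 2. Did managers actually change how they manage because of the forecasts?
managers_who_acted, managers_surveyed = 14, 25
used = managers_who_acted / managers_surveyed >= 0.50

# 3. Do profits correlate with the frequency of forecast-driven changes?
changes_per_quarter = [2, 3, 5, 6, 8, 9]
profit_per_quarter = [1.0, 1.1, 1.3, 1.4, 1.7, 1.8]   # $M, invented
correlated = pearson(changes_per_quarter, profit_per_quarter) > 0.6

if accurate and used and correlated:
    print("All three conditions hold - a profit claim is at least worth arguing.")
else:
    print("At least one link in the chain is missing - don't claim a profit impact yet.")
```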

Conclusion

Be careful when you try to prove an impact on profit, because many managers are skeptical, and the reasons to be so are numerous. The use of the approaches outlined here can reduce suspicion, but only if all of the common errors are avoided. Remember that correlations don't prove anything by themselves. You must show (using the example of forecasting) first that the forecast was accurate and second that it was actually used by management. Only after proving the first two does demonstrating a correlation between forecasting and profitability take on any worthwhile meaning. Proving something impacts profit is a desirable thing to do; it's just a very difficult thing to do!

About Dr John Sullivan

Dr John Sullivan is an internationally known HR thought-leader from Silicon Valley who specializes in providing bold, high-business-impact, strategic Talent Management solutions to large corporations.
