Sensitivity analysis is cool.
Ok, maybe not “the Fonz” cool. But it is a great way to stress-test a financial model and add robustness and depth to any valuation or statistical analysis.
Bankers do this all the time, and the resulting sensitivity tables end up in many pitchbooks, where they serve as a senior banker’s best ally: a catch-all safety net for situations where a client’s disagreement with a base case assumption could easily derail a meeting.
But the way sensitivity tables are often presented annoys me. They contain so much potential to provide transparency to analytical recommendations and clarity about the underlying relationships driving them, but regularly fall flat in investment banker pitchbooks.
Here’s a typical example based on a DCF model I hacked together and sensitized to next year’s sales growth and operating margin assumptions.
It’s a grid of numbers. Major snoozefest. It is also difficult to present: the presenter has to invest significant time studying the table in search of a “story,” and even when one is found, it is hard to convey to a client with only a banal grid of numbers as a visual aid.
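The underlying model isn’t reproduced here, but a grid like this is easy to generate programmatically. Below is a minimal sketch in Python, with a toy one-stage DCF standing in for the real model; the base-case assumptions (4% growth, 20% margin, and every DCF parameter) are illustrative placeholders, not figures from the article:

```python
import numpy as np

def dcf_share_price(sales_growth, op_margin,
                    base_sales=1_000.0, tax_rate=0.25,
                    wacc=0.09, terminal_growth=0.02,
                    shares_out=100.0):
    # Toy one-stage DCF: capitalize next year's after-tax operating
    # profit as a growing perpetuity. Every parameter here is an
    # illustrative placeholder, not the article's actual model.
    nopat = base_sales * (1 + sales_growth) * op_margin * (1 - tax_rate)
    enterprise_value = nopat / (wacc - terminal_growth)
    return enterprise_value / shares_out

# Base case +/- 5% in 1% increments, matching the article's setup
# (base growth of 4% and base margin of 20% are assumed)
growth_range = np.round(np.arange(-0.05, 0.051, 0.01) + 0.04, 4)
margin_range = np.round(np.arange(-0.05, 0.051, 0.01) + 0.20, 4)

table = np.array([[dcf_share_price(g, m) for g in growth_range]
                  for m in margin_range])
```

Each cell holds the implied share price for one growth/margin pair, which is exactly the structure of the table above.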
Like most statistical models, weapons, and political offices, data sensitivity tables can be destructive when in the wrong hands. Ok, I’m being hyperbolic, but they can definitely be misleading. And that has serious consequences when your meeting is about a multi-billion dollar acquisition.
Take that sample above again. If it turned up in a pitchbook, it would probably have been created by a junior banker who ran the numbers. The senior banker presenting it would be several steps removed from the actual analysis. So it’s understandable for the senior banker to say things like, “If we go up one row on margin, the share price drops by $1.62, while if we move left one column on sales growth, the share price only drops by $0.55,” and to talk up margin because it seems to be the bigger driver. Seems totally reasonable...
But it’s usually the scaling of the two sensitivity ranges driving things, not the variables themselves. The statement above only makes sense if “rows” and “columns” are valid units of measurement (or at least correspond to comparable changes in the underlying variables).
The problem is perspective. There is none. This table, a canonical representation of the banker default, was constructed by simply adding and subtracting five percent in increments of one percent to the base case assumptions for both growth and margin. Why? Because they are “nice” numbers, I guess. But it’s not so nice when the client gets excited about a valuation that may have no realistic chance of being realized.
To add perspective, I like to add a few visual enhancements. Not only do they add the proverbial safety net to my tables when they are ultimately presented by a banker who tends to shoot from the hip, but they can also turn those dreary numerical matrices into visualizations you can be proud of.
Calculating and displaying the likelihood of the driver variables actually being at the levels represented by the row and column headers is a robust way to add perspective to a data sensitivity table. I prefer to do this visually by adding marginal histograms to each dimension. Of course, the first step is quantifying the probabilities.
To do so, consider the relative volatilities of sales growth and operating margin. For the sake of parameterizing this fictional company valuation analysis, I looked at the historical standard deviation of aggregate S&P 500 year-over-year sales growth and EBITDA margin, about 4% and 1% respectively. Simply assuming a normal distribution around the base case for growth and margin gets me the following graphic.
Clearly, a one-row move and a one-column move do not have the same likelihood of occurring. It’s not an apples-to-apples comparison; but that’s (sort of) ok because the picture now makes that obvious.
But it’s still silly to show sensitivity ranges that include values with almost no chance of being realized. Therefore, I steer my team toward using the estimated volatilities to derive likelihood-based ranges for the sensitivity variables, such that a one-column move across the matrix is just as likely as a one-row move.
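One way to sketch that: step each variable in whole standard deviations of its own distribution, so a one-cell move means the same thing on both axes. The base-case values below are assumed; the sigmas are the ones estimated above.

```python
# Step each variable in units of its own volatility, so a one-cell
# move in either dimension of the grid is equally likely. Base-case
# values are assumed for illustration; sigmas come from the article.
base_growth, sigma_growth = 0.04, 0.04
base_margin, sigma_margin = 0.20, 0.01

z_steps = range(-2, 3)  # -2sd to +2sd in one-sd increments
growth_range = [base_growth + z * sigma_growth for z in z_steps]
margin_range = [base_margin + z * sigma_margin for z in z_steps]
```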
Now the structure of the table matches how a reader naturally interprets a large grid of numbers, and the marginal histograms could even be removed (but they are cool, so let’s leave them).
Ok, deriving estimated probability distributions does require some analytical chops, and the simple-to-use normal distribution is not universally applicable. Probably the quickest and easiest way to add some visual perspective is to slap a heat map on top of the table. This can be done automatically in Excel using conditional formatting (though I’d be wary of relying on its defaults; at the very least, set the min, midpoint, and max values so they are symmetric around the expected case).
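The same symmetric treatment can be sketched outside Excel. The helper below maps each table value onto a diverging scale pinned at the base case, so upside and downside get equal color weight; the price values and the $22.29 base case are made up for illustration.

```python
def symmetric_scale(value, center, max_dev):
    # Map a value to [-1, 1] for a diverging color scale centered on
    # the base case, rather than Excel's default min/mid/max, which
    # track whatever asymmetric range the table happens to produce
    return max(-1.0, min(1.0, (value - center) / max_dev))

# Illustrative share prices around an assumed $22.29 base case
prices = [20.5, 21.4, 22.29, 23.2, 25.1]
center = 22.29
max_dev = max(abs(p - center) for p in prices)
scaled = [symmetric_scale(p, center, max_dev) for p in prices]
```

Feeding `scaled` into any diverging colormap then colors equal dollar moves above and below the base case with equal intensity.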
All of a sudden the unintelligible number grid is transformed into a graphic that clearly evinces the underlying relationships between the drivers and the response variables.
At just a glance, it is now obvious that valuation is much more sensitive to changing the margin assumption than the growth assumption.
Of course, that insight is only insightful if the respective ranges carry meaning. Therefore, let’s combine the two approaches I’ve discussed to produce this.
Not only does the image itself now sing and dance in the pitchbook, but it is also analytically robust and easy to present.
What are your thoughts on the application (and abuse!) of data sensitivity tables? Email me at firstname.lastname@example.org.
Built by bankers, for bankers, Pellucid enables you to create stunning pitchbook content in a fraction of the time. Visit www.pellucid.com.