For the last 15 years of my time on Wall Street, I ran what many banks call either the Financial Strategies Group or the Strategic Finance Group at three bulge bracket banks. Part of this involved providing quantitative corporate finance advisory services to clients. Frequently we worked with coverage bankers on specific projects, say, an optimal capital structure review. All my team would need from the coverage banker was a list of comparable companies, and from that, we could pull together an analysis. For our part (the development of the analysis) we could produce a 50-page pitchbook relatively fast; this was the early days of Pellucid-like productivity, so hours not minutes. But due to the siloed nature of the investment banking industry, it would take much longer to pull together the comps list; sometimes this process would stretch on for days.
That seems weird.
If you’ve never had to pull together a comps list, you may think it’s an easy thing to do. But it’s surprisingly difficult, and often requires beginning at square one for each analysis.
When creating a list, it makes sense to start with the comps provided by market data vendors. But these are often, and quite rightly, viewed with skepticism. Sometimes the client will have its own list, but often this will need some tweaking. Coming up with a comps list is much more challenging than industry outsiders would imagine, largely because each time a comps list is created, you're reinventing the wheel rather than tapping into the knowledge of your colleagues.
Unless you’ve been living under a rock, you’ll be familiar with the concept of crowdsourcing. Jeff Howe introduced the portmanteau in a 2006 Wired article, but the concept has been around for thousands of years. In a companion blog, Howe described crowdsourcing as:
“Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively) but is also undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.”
For investment banks, crowdsourcing is a much more relevant concept than you would imagine.
The diagram below depicts all of the employees at an investment bank. Each dot represents a department, and the circle represents the bank. It looks like everyone is working together, right?
But, in reality, banks are structured as silos.
Given this, a bank isn’t functioning as a single entity any longer. Instead, it’s operating as a series of smaller firms. This prevents knowledge from traveling across groups.
The rationale for siloing is solid: it protects clients and confidentiality. But there is value in reconfiguring how silos are established to allow the cross-pollination of ideas and better sharing of knowledge among peers. Your peers are smart, and their interests are aligned with yours. Unlocking the knowledge that exists in those silos could effectively magnify what is known by a factor of 10-100x at most banks.
With this, creating a comps list could be transformed from a multi-day task into a small errand. Unleashing the millions of data points that represent a bank's collective comparables knowledge could save thousands of hours.
What’s needed is a system that examines every set of comparables the bank has ever used and the interrelationships across different companies. This information could then be distilled into a ready-to-use list.
To generate a list of five comps, this crowdsourcing-based system would surface the five companies your bank has most frequently paired with your primary company. If you need more or fewer comps, you simply adjust the count. Add in the ability to filter by the time period in which the comps were provided, and you have a robust system that even accounts for large corporate actions. And if you personalize this, that knowledge can be stored and used to create future comps lists, creating a new proprietary asset for your firm: an ever-evolving comps database.
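The core of such a system is simple co-occurrence counting. As a minimal sketch (the data format, tickers, and function name below are all hypothetical; a production system would also need the "need to know" access controls discussed next), suppose each historical comps list is stored as a date plus a set of tickers:

```python
from collections import Counter

def suggest_comps(historical_lists, target, k=5, since=None):
    """Suggest the k companies most often paired with `target`
    across historical comps lists. Each entry in historical_lists
    is a (date, set_of_tickers) tuple; `since` optionally filters
    to lists created on or after that date."""
    counts = Counter()
    for date, comps in historical_lists:
        if since is not None and date < since:
            continue  # time-period filter, e.g. to skip pre-merger lists
        if target in comps:
            for company in comps:
                if company != target:
                    counts[company] += 1
    # most_common(k) returns the k highest-frequency co-occurrences
    return [company for company, _ in counts.most_common(k)]

# Illustrative data with made-up tickers
history = [
    ("2016-01", {"AAA", "BBB", "CCC", "DDD"}),
    ("2016-03", {"AAA", "BBB", "EEE"}),
    ("2016-06", {"AAA", "BBB", "CCC"}),
]
print(suggest_comps(history, "AAA", k=2))  # ['BBB', 'CCC']
```

Scaling the list up or down is just a change to `k`, and the `since` parameter is one way to handle the corporate-action problem: restrict the window to lists created after a merger or spin-off.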
There are opportunities to crowdsource knowledge beyond just comps, but an important thing to build into any crowdsourced model is a “need to know” rule. For instance, scrubbed numbers are ripe for crowdsourcing, but they can’t be broadly shared in case they contain inside information, so controlling access is vital (at Pellucid, we have an opt-in model for scrubbed numbers to address this). Knowing the intricacies of the data you're working with is critical, but once that's in place, the potential of crowdsourcing as a resource-saving tool is enormous.
Let me know your thoughts on what else could be crowdsourced at email@example.com.
Explore Pellucid’s content, created specifically for pitchbooks and financial analysis. Request a demo.