
Michael Mauboussin appeared Friday on Consuelo Mack’s WealthTrack to discuss several of the ideas in his excellent book, Think Twice. Particularly compelling is his story about Triple Crown prospect Big Brown and the advantage of the “outside view” – the statistical one – over the “inside view” – the specific, anecdotal one (excerpted from the book):

June 7, 2008 was a steamy day in New York, but that didn’t stop fans from stuffing the seats at Belmont Park to see Big Brown’s bid for horseracing’s pinnacle, the Triple Crown. The undefeated colt had been impressive. He won the first leg of the Triple Crown, the Kentucky Derby, by 4 ¾ lengths and cruised to a 5 ¼-length win in the second leg, the Preakness.

Oozing with confidence, Big Brown’s trainer, Rick Dutrow, suggested that it was a “foregone conclusion” that his horse would take the prize. Dutrow was emboldened by the horse’s performance, demeanor, and even the good “karma” in the barn. Despite the fact that no horse had won the Triple Crown in over 30 years, the handicappers shared Dutrow’s enthusiasm, putting 3-to-10 odds—almost a 77 percent probability—on his winning.

The fans came out to see Big Brown make history. And make history he did—it just wasn’t what everyone expected. Big Brown was the first Triple Crown contender to finish dead last.

The story of Big Brown is a good example of a common mistake in decision making: psychologists call it using the “inside” instead of the “outside” view.

The inside view considers a problem by focusing on the specific task and by using information that is close at hand. It’s the natural way our minds work. The outside view, by contrast, asks if there are similar situations that can provide a statistical basis for making a decision. The outside view wants to know if others have faced comparable problems, and if so, what happened. It’s an unnatural way to think because it forces people to set aside the information they have gathered.

Dutrow and others were bullish on Big Brown given what they had seen. But the outside view demands to know what happened to horses that had been in Big Brown’s position previously. It turns out that 11 of the 29 had succeeded in their Triple Crown bid in the prior 130 years, about a 40 percent success rate. But scratching the surface of the data revealed an important dichotomy. Before 1950, 8 of the 9 horses that had tried to win the Triple Crown did so. But since 1950, only 3 of 20 succeeded, a measly 15 percent success rate. Further, when compared to the other six recent Triple Crown aspirants, Big Brown was by far the slowest. A careful review of the outside view suggested that Big Brown’s odds were a lot longer than what the tote board suggested. A favorite to win the race? Yes. A better than three-in-four chance? Bad bet.
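The odds arithmetic in the excerpt is easy to check. Here is a minimal sketch (the function name is mine) converting the tote board's fractional odds into an implied probability and setting it against the post-1950 base rate:

```python
def implied_probability(win, stake):
    """Fractional odds of win-to-stake imply p = stake / (win + stake)."""
    return stake / (win + stake)

# The "inside view" priced into the tote board: 3-to-10 odds on Big Brown.
inside = implied_probability(3, 10)   # 10/13, "almost a 77 percent probability"

# The "outside view": 3 winners in 20 Triple Crown bids since 1950.
outside = 3 / 20

print(f"inside view:  {inside:.1%}")   # 76.9%
print(f"outside view: {outside:.1%}")  # 15.0%
```

The gap between those two numbers is the whole point of Mauboussin's story.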

Mauboussin on WealthTrack:

Hat tip Abnormal Returns.

The excellent Empirical Finance Blog has a superb series of posts on an investment strategy called “Profit and Value” (How “Magic” is the Magic Formula? and The Other Side of Value), which Wes describes as the “academic version” of Joel Greenblatt’s “Magic Formula.” (Incidentally, Greenblatt is speaking at the New York Value Investors Congress in October this year. I think seeing Greenblatt alone is worth the price of admission.) The Profit and Value approach is similar to the Magic Formula in that it ranks stocks independently on “value” and “quality,” and then reranks on the combined rankings. The stock with the lowest combined ranking is the most attractive, the stock with the next lowest combined ranking the next most attractive, and so on.
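A minimal sketch of the rank-and-combine step just described, using made-up tickers and metric values (the metrics follow the Profit and Value version: book-to-market for value, gross profitability for quality):

```python
# Hypothetical universe: ticker -> (book_to_market, gross_profitability).
stocks = {
    "AAA": (1.20, 0.45),
    "BBB": (0.80, 0.60),
    "CCC": (0.50, 0.30),
    "DDD": (1.50, 0.20),
}

def ranks(scores):
    """Rank 1 = most attractive (highest value of the metric)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {ticker: i + 1 for i, ticker in enumerate(ordered)}

value_rank = ranks({t: m[0] for t, m in stocks.items()})
quality_rank = ranks({t: m[1] for t, m in stocks.items()})
combined = {t: value_rank[t] + quality_rank[t] for t in stocks}

# The lowest combined ranking is the most attractive.
for ticker in sorted(combined, key=combined.get):
    print(ticker, combined[ticker])
```

Note that a stock need not be the cheapest or the highest quality to rank first; a good-but-not-best score on both metrics can beat the best score on either alone.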

The Profit and Value strategy differs from the Magic Formula in its methods of determining value and quality. Profit and Value uses straight book-to-market to determine value, where the Magic Formula uses EBIT / TEV. And where the Magic Formula uses EBIT / (NPPE + net working capital) to determine quality, Profit and Value uses "Gross Profitability," a metric described in a fascinating paper by Robert Novy-Marx called "The Other Side of Value" (more on this later).

My prima facie attraction to the Profit and Value strategy was twofold. First, Profit and Value uses book-to-market as the measure of value. I have a long-standing bias for asset-based metrics over income-based ones, and for good reasons. (After examining the performance analysis of Profit and Value, however, I've made a permanent switch to another metric that I'll discuss in more detail later.) Second, the back-tested returns to the strategy appear to be considerably higher than those for the Magic Formula. Here's a chart from Empirical Finance comparing the back-tested returns to each strategy with yearly rebalancing:

Profit and Value is the winner, if only slightly. That is the obvious reason for preferring one strategy over the other. It is not, however, the end of the story. There are some problems with the performance of Profit and Value, which I discuss in some detail later. Over the next few weeks I'll post my full thoughts in a series of posts under the following headings, but, for now, here are the summaries. I welcome any feedback.

Determining “quality” using “gross profitability”

In a 2010 paper called "The Other Side of Value: Good Growth and the Gross Profitability Premium," author Robert Novy-Marx discusses his preference for "gross profitability" over other measures of performance like earnings or free cash flow. The actual "Gross Profitability" factor Novy-Marx uses is as follows:

Gross Profitability = (Revenues – Cost of Goods Sold) / Total Assets
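As a sketch in code (the inputs are a hypothetical firm, not data from the paper):

```python
def gross_profitability(revenues, cogs, total_assets):
    """Novy-Marx's factor: (Revenues - Cost of Goods Sold) / Total Assets."""
    return (revenues - cogs) / total_assets

# Hypothetical firm: $500m revenues, $300m COGS, $1,000m total assets.
gp = gross_profitability(500.0, 300.0, 1000.0)
print(f"Gross Profitability: {gp:.2f}")  # 0.20
```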

Novy-Marx’s rationale for preferring gross profitability is compelling. First, it makes sense:

Gross profits is the cleanest accounting measure of true economic profitability. The farther down the income statement one goes, the more polluted profitability measures become, and the less related they are to true economic profitability. For example, a firm that has both lower production costs and higher sales than its competitors is unambiguously more profitable. Even so, it can easily have lower earnings than its competitors. If the firm is quickly increasing its sales through aggressive advertising, or commissions to its sales force, these actions can, even if optimal, reduce its bottom line income below that of its less profitable competitors. Similarly, if the firm spends on research and development to further increase its production advantage, or invests in organizational capital that will help it maintain its competitive advantage, these actions result in lower current earnings. Moreover, capital expenditures that directly increase the scale of the firm’s operations further reduce its free cash flows relative to its competitors. These facts suggest constructing the empirical proxy for productivity using gross profits. Scaling by a book-based measure, instead of a market-based measure, avoids hopelessly conflating the productivity proxy with book-to-market. I scale gross profits by book assets, not book equity, because gross profits are not reduced by interest payments and are thus independent of leverage.

Second, it works:

In a horse race between these three measures of productivity, gross profits-to-assets is the clear winner. Gross profits-to-assets has roughly the same power predicting the cross section of expected returns as book-to-market. It completely subsumes the earnings based measure, and has significantly more power than the measure based on free cash flows. Moreover, demeaning this variable dramatically increases its power. Gross profits-to-assets also predicts long run growth in earnings and free cash flow, which may help explain why it is useful in forecasting returns.

I think it’s interesting that gross profits-to-assets is as predictive as book-to-market. I can’t recall any other fundamental performance measure that is predictive at all, let alone as predictive as book-to-market (EBIT / (NPPE + net working capital) is not; neither are gross margins, ROE, ROA, or five-year earnings gains). There are, however, some obvious problems with gross profitability as a stand-alone metric. More later.

White knuckles: Profit and Value performance analysis

While Novy-Marx’s “Gross Profitability” factor seems to be predictive, in combination with the book-to-market value factor the results are very volatile. To the extent that an individual investor can ignore this volatility, the strategy will work very well. As an institutional strategy, however, Profit and Value is a widow-maker. First, the peak-to-trough drawdown on Profit and Value through the 2007–2009 credit crisis would put any professional money manager following the strategy out of business. Second, the strategy selects highly leveraged stocks, and one needs a bigger set of mangoes than I possess to blindly buy them. The second problem – the preference for highly leveraged stocks – contributes directly to the first – big drawdowns in a downturn – because investors tend to vomit up highly leveraged stocks as the market falls. Also concerning is the likely performance of Profit and Value in an environment of rising interest rates. Given the negative real rates that presently prevail, such an environment seems likely to manifest in the future. I’ll look specifically at the performance of Profit and Value in an environment of rising interest rates.

A better metric than book-to-market

The performance issues with Profit and Value discussed above – the volatility and the preference for highly leveraged balance sheets – are problems with the book-to-market criterion. As Greenblatt points out in his book You Can Be a Stock Market Genius, it is partially the leverage embedded in low price-to-book stocks that contributes to their outperformance over the long term. In the short term, however, the leverage can be a problem. There are other problems with cheap book value. As I discussed in The Small Cap Paradox: A problem with LSV’s Contrarian Investment, Extrapolation, and Risk in practice, the low price-to-book decile is very small. James P. O’Shaughnessy discusses this issue in What Works on Wall Street:

The glaring problem with this method, when used with the Compustat database, is that it’s virtually impossible to buy the stocks that account for the performance advantage of small capitalization strategies. Table 4-9 illustrates the problem. On December 31, 2003, approximately 8,178 stocks in the active Compustat database had both year-end prices and a number for common shares outstanding. If we sorted the database by decile, each decile would be made up of 818 stocks. As Table 4-9 shows, market capitalization doesn’t get past $150 million until you get to decile 6. The top market capitalization in the fourth decile is $61 million, a number far too small to allow widespread buying of those stocks.

A market capitalization of $2 million – the cheapest and best-performed decile – is uninvestable. This leads O’Shaughnessy to make the point that “micro-cap stock returns are an illusion”:

The only way to achieve these stellar returns is to invest only a few million dollars in over 2,000 stocks. Precious few investors can do that. The stocks are far too small for a mutual fund to buy and far too numerous for an individual to tackle. So there they sit, tantalizingly out of reach of nearly everyone. What’s more, even if you could spread $2,000,000 over 2,000 names, the bid–ask spread would eat you alive.

Even a small investor will struggle to buy enough stock in the 3rd or 4th deciles, which encompass stocks with market capitalizations below $26 million and $61 million respectively. These are not, therefore, institutional-grade strategies. Says O’Shaughnessy:

This presents an interesting paradox: Small-cap mutual funds justify their investments using academic research that shows small stocks outperforming large ones, yet the funds themselves cannot buy the stocks that provide the lion’s share of performance because of a lack of trading liquidity.

A review of the Morningstar Mutual Fund database proves this. On December 31, 2003, the median market capitalization of the 1,215 mutual funds in Morningstar’s all equity, small-cap category was $967 million. That’s right between decile 7 and 8 from the Compustat universe—hardly small.

I spent some time researching alternatives to book-to-market. As much as it pained me to do so, I’ve now abandoned book-to-market as my primary valuation metric. In fact, I no longer use it at all. I discuss these metrics, and their advantages over book value, in a later post.

The New York Value Investing Congress on October 17 and 18 this year has an unusually strong line-up. Any one of Joel Greenblatt, Bill Ackman, or James Chanos alone is worth the price of admission, but as-yet-unheralded investors like Michael Kao and Guy Gottfried will also likely impress, as they did in Pasadena.

Discount: Register by June 29, 2011 and you’ll pay just $2,395. That’s a total savings of $2,100 from the $4,495 others will pay later to attend. Click here to register at the discounted price using discount code N11GB2.

Here’s the list of managers presenting:

  • Bill Ackman, Pershing Square
  • Leon Cooperman, Omega Advisors
  • James Chanos, Kynikos Associates LP
  • Joel Greenblatt, Gotham Capital
  • Guy Gottfried, Rational Investment Group
  • Michael Kao, Akanthos Capital Management
  • Glenn Tongue, T2 Partners
  • Whitney Tilson, T2 Partners

The discount for Greenbackd readers expires in seven days, so take advantage now. Click here to receive the discount.

Greenbackd turned two on 1 December. Over the last two years the site has racked up 526 posts; 2,224 real comments; and 35,898 spam comments (no kidding). One of the best things about living, breathing, and writing about deep value for two years is that Greenbackd now contains a great trove of research (from both academia and industry, a distinction that is important for a reason I’ll come to shortly) that, if properly collated, should yield some interesting insights. The only problem is that all of the research is buried in Greenbackd’s 526 posts, most of which are forgettable, regrettable, or both, and some of which were drafts I never published (I take the Jack Welch “decile” approach to posting and strangle about 10% of the posts in the crib. I know it’s hard to believe that some articles weren’t good enough for even my low standards, but imagine how much worse it could have been).

I’m going to go back through Greenbackd’s old posts to write a book on systematic deep value. I thought I’d foist my thoughts on some poor publisher somewhere and try to get the book published. Assuming that fails (and I am assuming it will), watch for my self-published version from BookSurge. I enjoyed reading Paul Graham’s Hackers and Painters, which I understand is a collection of essays previously published on his site. That seems like a sensible approach to me. I’m going to write the chapters as essays for Greenbackd and hope that the poorer ones get some free editing from you fine folks. (I’d also appreciate it if you let me know if you think the whole thesis stinks.)

The central thesis for the book is quantitative (or systematic) value investment (either that or a collection of short stories in the style of Zachary Mason’s The Lost Books of the Odyssey, I haven’t yet decided). I’m primarily interested in three ideas:

1. The best performed metrics for assessing value in the real world. This is interesting to me for two reasons, and they are two sides of the same coin. First, many of the metrics that work in academia and in backtests run into some problems in the real world. In the immortal words of Yogi Berra (or Albert Einstein or Jan J.A. van de Snepscheut):

“In theory there is no difference between theory and practice. In practice there is.”

This boils down to Jameses Montier and O’Shaughnessy versus the world of academia, most notably Lakonishok, Shleifer, and Vishny. For me, this is cognitive dissonance defined. Aswath Damodaran also has some interesting contributions on this topic. Second, some real-world received wisdom in value is no wisdom at all (or at least, that is what the data tell us). I’ve spent a great deal of time backtesting various value metrics. Some work; some make no difference at all; and some destroy value. Some great men of value investing promote these metrics, and that is why they persist. I’m not interested in “good” metrics, which I define as those that beat the market over the long run. I’m interested in optimal metrics; those that consistently beat the other metrics in the real world. I hope and assume that you are too.

2.  The relative merits of quantitative and qualitative strategies in stock selection and portfolio construction. This is a justification for a systematic approach to investment. I believe that this particular event has been run and won by the quants, although my definition of “quant” hews more closely to Montier’s, O’Shaughnessy’s and Philip Tetlock’s definition than Scott Patterson’s. A good process-driven approach to long-only equity investment should and does provide a very good outcome. I think a good process is an austere Tetlockian algorithmic approach, which is in practice a screen that is very difficult to pass, but yields a high proportion of winners in the companies that do pass. Piotroski’s “F-Score” is an excellent example of such an algorithm (even though I don’t personally like it or use it for philosophical rather than practical reasons I’ll discuss at some later date). Assuming a company has a 50/50 chance of passing any one of the Piotroski F-Score algorithm’s 9 binary “gates,” only 1/(2^9) or about 0.2% will pass. When combined with a low decile price-to-book approach, the investable universe is very small indeed. In practice, only a handful pass the screen at any time, but those that do pass perform very well (see, for example, the Piotroski screen on the American Association of Individual Investors website).
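The pass-rate arithmetic in the paragraph above works out as follows (the 3,000-stock universe is my assumption, for illustration only):

```python
# If each of the 9 binary F-Score gates is a 50/50 proposition, the chance
# of a company passing all nine is (1/2)^9.
gates = 9
p_pass = 0.5 ** gates
print(f"probability of passing all {gates} gates: {p_pass:.4%}")  # 0.1953%

# In a hypothetical 3,000-stock universe, only a handful pass.
universe = 3000
expected_passers = universe * p_pass
print(f"expected passers: {expected_passers:.1f}")  # 5.9
```

In practice the gates are correlated (good companies tend to pass several at once), so the real pass rate is higher than this independence assumption suggests, but the screen remains very difficult to pass.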

3. The methods by which a portfolio can be made “robust.” This is perhaps the most esoteric of the subjects, and also the most interesting. There is a tension between the portfolio theory suggested by the efficient markets hypothesis and real-world portfolio construction under the Kelly Criterion. I also think that Nassim Taleb’s thoughts on risk are useful to this discussion. The key to investing, as it is to many things, is to stay in the game. Once your stake is gone, there’s no way to come back (unless your investors don’t know the difference between the geometric and the arithmetic means, in which case just show them the arithmetic mean of your annual returns and party like it’s 1999). For this reason, I spend a lot of time agonizing about the ways that my fund can blow up. I’m not worried about the Rapture, the collapse of western civilisation, CERN’s Large Hadron Collider actually bringing a black hole into existence, or the Mesoamerican Long Count calendar being accurate. (I’m not worried because if any of these things occurs we’ll likely have bigger problems than investment returns.) The Austrian economist in me is, however, worried about lots of other things. I think Nassim Taleb’s key insight is that we don’t need any specific event to be foreseeable to know that we should be prepared for the occurrence of some event. History teaches us that the hundred-year storm rolls around much more frequently than its name would suggest.
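The geometric-versus-arithmetic point is worth seeing in numbers. Here is a made-up two-year return series with a flattering arithmetic mean and a losing geometric mean:

```python
returns = [1.00, -0.60]  # +100% in year one, -60% in year two

arithmetic = sum(returns) / len(returns)

wealth = 1.0
for r in returns:
    wealth *= 1 + r  # $1 -> $2.00 -> $0.80

geometric = wealth ** (1 / len(returns)) - 1

print(f"arithmetic mean: {arithmetic:.1%}")  # 20.0%
print(f"geometric mean:  {geometric:.1%}")   # -10.6%
print(f"$1 becomes:      ${wealth:.2f}")     # $0.80
```

Show an investor only the 20% arithmetic mean and the 20% loss of capital disappears; the geometric mean is what your stake actually compounds at.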

While the central thesis is narrow, to do it justice the book will have to canvass a broad range of issues in behavioral and value investing. Some of the material central to this project hasn’t yet been written, and so I’ll be creating it in the course of completing the project. The new format for Greenbackd means that I’ll be posting less frequently, but when I do post they’ll be longer, essay-style chapter posts, and I’ll be avoiding (for the most part) current events. Thoughts and comments are most welcome.

Changes to Greenbackd

I’m back from my (extended) break. I’ll be making a few changes to the site over the coming weeks, mostly directed at making it easier to find old articles by putting them in a sensible order. I also want to change the architecture of the site to better reflect the work that I do in my fund, so that working on Greenbackd is useful work for my fund, and vice versa. I’m in the middle of setting up a new office, so the posting schedule may be interrupted a little while I wait for my Internet connection. I’ve got some great new ideas, which I look forward to rolling out over the next few weeks.

Finally, I’m now based in Los Angeles (in Santa Monica). I like meeting people similarly interested in deep value, quant and activism, so if you’re in town and want to say hello, drop me a line. I’ve got a pile of email to get through, so don’t think I’ve ignored you if I don’t respond immediately. You can reach me at greenbackd [at] gmail [dot] com.

I’m on break for the next three weeks. Here are the top posts for the past 12 months:

  1. Mike Burry’s Scion Capital investor letters
  2. Guest post: The short case for Berkshire
  3. Graham’s P/E10 ratio
  4. Tweedy Browne updates What Has Worked In Investing
  5. What Montier’s Painting by Numbers can offer to value investors
  6. The long and short of The St. Joe Company
  7. Seth Klarman on Liquidation Value
  8. Intuition and the quantitative value investor
  9. The long and short of Berkshire Hathaway
  10. Guest post update: Valuation for Berkshire Hathaway

Good investing.

The Fall 2010 edition of the Graham and Doddsville Newsletter, Columbia Business School‘s student-led investment newsletter co-sponsored by the Heilbrunn Center for Graham & Dodd Investing and the Columbia Investment Management Association, has a fascinating interview with Donald G. Smith. Smith, who volunteered for Benjamin Graham at UCLA, concentrates on the bottom decile of price to tangible book stocks and has compounded at 15.3% over 30 years:

G&D: Briefly describe the history of your firm and how you got started?

DS: Donald Smith & Co. was founded in 1980 and now has $3.6 billion under management. Over 30 years since inception our compounded annualized return is 15.3%. Over the last 10 years our annualized return is 12.1% versus −0.4% for the S&P 500.

Our investment philosophy goes back to when I was going to UCLA Law School and Benjamin Graham was teaching in the UCLA Business School. In one of his lectures he discussed a Drexel Firestone study which analyzed the performance of a portfolio of the lowest P/E third of the Dow Jones 30 (which was the beginning of “Dogs of the Dow”). Graham wanted to update that study but he didn’t have access to a database in those days, so he asked for volunteers to manually calculate the data. I was curious about this whole approach so I decided to volunteer. There was no question that this approach beat the market. However, doing the analysis, especially by hand, you could see some of the flaws in the P/E based approach. Based on the system you would buy Chrysler every time the earnings boomed and it was selling at only a 5x P/E, but the next year or two they would go into a down cycle, the P/E would expand and you were forced to sell it. So in effect, you were often buying high and selling low. So it dawned on me that P/E and earnings were too volatile to base an investment philosophy on. That’s why I started playing with book value to develop a better investment approach based on a more stable metric.

G&D: There are plenty of studies suggesting that the lowest price to book stocks outperform. However, only 1/10 of 1% of all money managers focus on the lowest decile of price to book stocks. Why do you think that’s so, and how do people ignore all of this evidence?

DS: They haven’t totally ignored it. There are periods of time when quant funds, in particular, use this strategy. However, a lot of the purely quant funds buying low price to book stocks have blown up, as was the case in the summer of 2007. Now not as many funds are using the approach. Low price to book stocks tend to be out-of-favor companies. Often their earnings are really depressed, and when earnings are going down and stock prices are going down, it’s a tough sell.

G&D: Would you mind talking about how the composition of that bottom decile has changed over time? Is it typically composed of firms in particular out of favor industries or companies dealing with specific issues unique to them?

DS: The bulk is companies with specific issues unique to them, but often there is a sector theme. Back in the early 1980s small stocks were all the rage and big slow-growing companies were very depressed. At that time we loaded up on a lot of these large companies. Then the KKRs of the world started buying them because of their stable cash flow and the stocks went up. About six years ago, a lot of the energy-related stocks were very cheap. We owned oil shipping, oil services and coal companies trading below book and liquidation value. When oil went up they became the darlings of Wall Street. Over the years we have consistently owned electric utilities because there always seem to be stocks that are temporarily depressed because of a bad rate decision by the public service commission. Also, cyclicals have been a staple for us over the years because, by definition, they go up and down a lot, which gives us buying opportunities. We’ve been in and out of the hotel group, homebuilders, airlines, and tech stocks.

Performance of the low price-to-tangible-book-value decile:

Read the Graham and Doddsville newsletter Fall 2010 (.pdf).

Hat tip George.

For a period from late 2008 through mid-2009, the GSI Group (PINK:LASR) was prima facie the cheapest stock on my net net screen, but I couldn’t pull the trigger because it was delinquent in several quarterly filings. The company entered Chapter 11 due to the technical default of not filing financial statements and is now an extremely interesting prospect post reorganization. The superb Above Average Odds Investing blog has a guest post from Ben Rosenzweig, an analyst at Privet Fund Management, titled The GSI Group (LASR.PK) – Another Low-Risk, High-Return Post Reorg Equity w/ Substantial Near-Term Catalyst(s), which really says it all. Here’s the summary:

Thesis Summary: Privet Fund LP is long GSIGQ common stock. Our post-emergence price target is $5.00 per common share, an internal rate of return of 123% based on closing price of $2.70 and right to purchase .99 shares for every 1 share currently owned at a price of $1.80 per share. The market has failed to fully price in the impact of the Plan of Reorganization that was confirmed on Thursday, May 27, 2010.

We believe GSI is an attractive investment opportunity for the following reasons:

  • Due to the efforts of the equity committee throughout the bankruptcy process, the pre-emergence equity holders will be able to maintain an 87% ownership in the post-emergence company, up from an initial distribution of 18.6% in the first Plan of Reorganization
  • The end markets for the Company’s precision technology and semiconductor products are coming out of the trough of a cycle and, as a result, GSI’s bookings have been increasing at an exponential rate
  • The purging of the previous management regime opens the door for an experienced operator to run the Company much more efficiently and make strategic decisions with a view toward enhancing the value of the enterprise
  • The significant reduction in debt gives management the needed flexibility to focus solely on improving operations. This should result in significant fixed cost leverage going forward as evidenced by the Q1 2010 EBITDA margin of 14%, a figure that previous management suggested was not achievable until the end of 2011
  • The current market valuation, which includes the right to buy .99 shares at $1.80 per share, implies a 2010 sales figure and discounted cash flow valuation that is simply not possible even if the Company’s financial performance does not follow through on the radical improvements that have been shown during the past two quarters
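The return arithmetic in the thesis summary can be roughly reproduced from the quoted figures. This simple-return sketch lands near, but not exactly on, the quoted 123%, which is an internal rate of return and so also depends on timing:

```python
price = 2.70         # closing price per the summary
rights_ratio = 0.99  # right to buy 0.99 shares per share owned
rights_price = 1.80  # exercise price of the right
target = 5.00        # Privet's post-emergence price target

# Blended cost basis: one share at market plus 0.99 shares via the rights.
cost = price + rights_ratio * rights_price  # ≈ $4.48 for 1.99 shares
shares = 1 + rights_ratio
blended = cost / shares                     # ≈ $2.25 per share

upside = target / blended - 1               # ≈ 122%
print(f"blended cost basis: ${blended:.2f}")
print(f"upside to target:   {upside:.0%}")
```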

Read the post in full.

James P. O’Shaughnessy’s What Works on Wall Street is one of my favorite books on investing. The thing I like most about the book is O’Shaughnessy’s use of data to slaughter several sacred value-investing cows, one of which I mentioned yesterday (see The Small Cap Paradox: A problem with LSV’s Contrarian Investment, Extrapolation, and Risk in practice).

Another sacred cow put to the sword in the book is the use of five-year earnings-per-share growth to improve the returns from a price-to-earnings screen. O’Shaughnessy describes the issue in this way:

Some analysts believe that a one-year change in earnings is meaningless, and we would be better off focusing on five-year growth rates. This, they argue, is enough time to separate the one-trick pony from the true thoroughbred.

So what does the data say?

Unfortunately, five years of big earnings gains doesn’t help us pick thoroughbreds either. Starting on December 31, 1954 (we need five years of data to compute the compound five-year earnings growth rate), $10,000 invested in the 50 stocks from the All Stocks universe with the highest five-year compound earnings-per-share growth rates grew to $1,287,685 by the end of 2003, a compound return of 10.42 percent (Table 12-1). A $10,000 investment in the All Stocks universe on December 31, 1954 was worth $3,519,152 on December 31, 2003, a return of 12.71 percent a year.

O’Shaughnessy interprets the data thus:

Much like the 50 stocks with the highest one-year earnings gains, investors get dazzled by high five-year earnings growth rates and bid prices to unsustainable levels. When the future earnings are lower than expected, investors punish their former darlings and prices swoon.

The evidence shows that it is a mistake to get overly excited by big earnings gains.

Five-year growth rates are clearly mean reverting, and I love to see an intuitive strategy beaten by a little reversion to the mean.
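O’Shaughnessy’s compound returns can be recovered from the endpoints he gives: 49 years from December 31, 1954 to December 31, 2003.

```python
def cagr(start, end, years):
    """Compound annual growth rate from a start and end portfolio value."""
    return (end / start) ** (1 / years) - 1

years = 2003 - 1954  # 49 years

high_growers = cagr(10_000, 1_287_685, years)  # top five-year EPS growth
all_stocks = cagr(10_000, 3_519_152, years)    # All Stocks universe

print(f"highest five-year EPS growth: {high_growers:.2%}")  # 10.42%
print(f"All Stocks universe:          {all_stocks:.2%}")    # 12.71%
```

A 2.3-point annual gap compounded over 49 years is the difference between $1.3 million and $3.5 million on a $10,000 stake.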

Yesterday’s post on LSV Asset Management’s performance reminded me of the practical difficulties of implementing many theoretically well-performed investment strategies. LSV Asset Management is an outgrowth of the research conducted by Josef Lakonishok, Andrei Shleifer, and Robert Vishny. They are perhaps best known for the Contrarian Investment, Extrapolation, and Risk paper, which, among other things, analyzed low price-to-book value stocks in deciles (an approach possibly suggested by Roger Ibbotson’s study Decile Portfolios of the New York Stock Exchange, 1967–1984). They found that low price-to-book value stocks outperform, and in rank order (the cheapest decile outperforms the next cheapest decile, and so on). The problem with the approach is that the lowest price-to-book value deciles – that is, the cheapest and therefore best-performed deciles – are uninvestable.

In an earlier post, Walking the talk: Applying back-tested investment strategies in practice, I noted that Aswath Damodaran, a Professor of Finance at the Stern School of Business, has a thesis that “transaction costs” – broadly defined to include brokerage commissions, spread and the “price impact” of trading – foil in the real world investment strategies that beat the market in back-tests. Damodaran made the point that even well-researched, back-tested, market-beating strategies underperform in practice:

Most of these beat-the-market approaches, and especially the well researched ones, are backed up by evidence from back testing, where the approach is tried on historical data and found to deliver “excess returns”. Ergo, a money-making strategy is born… books are written… mutual funds are created.

The average active portfolio manager, who, I assume, is the primary user of these can’t-miss strategies, does not beat the market and delivers about 1-1.5% less than the index. That number has remained surprisingly stable over the last four decades and has persisted through bull and bear markets. Worse, this underperformance cannot be attributed to “bad” portfolio managers who drag the average down, since there is very little consistency in performance. Winners this year are just as likely to be losers next year…

Damodaran’s solution for why some market-beating strategies that work on paper fail in the real world is transaction costs. But it’s not the only reason. Some strategies are simply impossible to implement, and LSV’s low decile price-to-book value strategy is one such strategy.

James P. O’Shaughnessy’s What Works on Wall Street is one of my favorite books on investing. In the book, O’Shaughnessy suggests another problem with the real-world application of LSV’s decile approach:

Most academic studies of market capitalization sort stocks by deciles (10 percent) and review how an investment in each fares over time. The studies are nearly unanimous in their findings that small stocks (those in the lowest four deciles) do significantly better than large ones. We too have found tremendous returns from tiny stocks.

So far so good. So what’s the problem?

The glaring problem with this method, when used with the Compustat database, is that it’s virtually impossible to buy the stocks that account for the performance advantage of small capitalization strategies. Table 4-9 illustrates the problem. On December 31, 2003, approximately 8,178 stocks in the active Compustat database had both year-end prices and a number for common shares outstanding. If we sorted the database by decile, each decile would be made up of 818 stocks. As Table 4-9 shows, market capitalization doesn’t get past $150 million until you get to decile 6. The top market capitalization in the fourth decile is $61 million, a number far too small to allow widespread buying of those stocks.

A market capitalization of $2 million – the cheapest and best-performed decile – is uninvestable. This leads O’Shaughnessy to make the point that “micro-cap stock returns are an illusion”:

The only way to achieve these stellar returns is to invest only a few million dollars in over 2,000 stocks. Precious few investors can do that. The stocks are far too small for a mutual fund to buy and far too numerous for an individual to tackle. So there they sit, tantalizingly out of reach of nearly everyone. What’s more, even if you could spread $2,000,000 over 2,000 names, the bid–ask spread would eat you alive.

Even a small investor will struggle to buy enough stock in the 3rd or 4th deciles, which encompass stocks with market capitalizations below $26 million and $61 million respectively. These are not, therefore, institutional-grade strategies. Says O’Shaughnessy:

This presents an interesting paradox: Small-cap mutual funds justify their investments using academic research that shows small stocks outperforming large ones, yet the funds themselves cannot buy the stocks that provide the lion’s share of performance because of a lack of trading liquidity.

A review of the Morningstar Mutual Fund database proves this. On December 31, 2003, the median market capitalization of the 1,215 mutual funds in Morningstar’s all equity, small-cap category was $967 million. That’s right between decile 7 and 8 from the Compustat universe—hardly small.

The good news is, there are other strategies that do work.