Archive for the ‘Quantitative investment’ Category

Greenbackd was honored to be one of the bloggers asked to participate in Abnormal Returns’ “Finance Blogger Wisdom” series. Tadas asked a range of questions and will publish them on Abnormal Returns over the course of the week. The first question is, “If you had a son or daughter just beginning to invest, what would you tell them to do to best prepare themselves for a lifetime of good investing?”

I answered as follows:

Inspired by Michael Pollan’s edict for healthy eating (“Eat food. Not too much. Mostly plants.”), for good investing I’d propose “Buy value. Diversify globally. Stay invested.”

I feel that I should justify the answer a little in the context of the “What to do in sideways markets” post about Vitaliy Katsenelson’s excellent book “The Little Book of Sideways Markets”. To recap, Vitaliy’s thesis is that equity markets are characterized by periods of valuation expansion (“bull market”) and contraction (“bear market” or “sideways market”). A sideways market is the result of earnings increasing while valuation drops. Historically, they are common:

We’ve clearly been in a sideways market for all of the 2000s, and yet the CAPE presently stands at 21.22. CAPE has in the past typically fallen to a single-digit low following a cyclical peak. The last time a sideways market traded on a CAPE of ~21 (1969) it took ~13 years to bottom (1982). The all-time peak US CAPE of 44.2 occurred in December 1999, all-time low US CAPE of 4.78 occurred in December 1920. The most recent CAPE low of 6.6 occurred in August 1982. I’m fully prepared for another 13 years of sideways market (although, to be fair, I don’t really care what the market does).
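For reference, the CAPE arithmetic is simple: the current real (inflation-adjusted) price divided by the trailing ten-year average of real earnings. A minimal sketch, using made-up figures rather than actual S&P 500 data:

```python
def cape(real_price, real_earnings_10yr):
    """Cyclically adjusted P/E: current real price divided by the
    trailing ten-year average of real (inflation-adjusted) earnings."""
    assert len(real_earnings_10yr) == 10
    avg_earnings = sum(real_earnings_10yr) / len(real_earnings_10yr)
    return real_price / avg_earnings

# Toy figures (not actual S&P 500 data): an index at 1100 with trailing
# ten-year real earnings averaging 51.8 gives a CAPE in the low 20s.
earnings = [40, 45, 50, 55, 60, 52, 48, 55, 58, 55]
print(round(cape(1100, earnings), 2))  # → 21.24
```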

If you subscribe to Vitaliy’s thesis – as I do – that the sideways market will persist until we reach a single-digit CAPE, then it might seem odd to suggest staying fully invested. In my defence, I make the following two points:

First, I am assuming a relatively unsophisticated beginner investor.

Second, this chart:

Source: Turnkey Analyst Backtester.

A simple, quantitative, “cheap but good” value strategy has delivered reasonable returns over the last decade in a flat market. I don’t think these returns are worth writing home about, but if my kids can dollar-cost average into a strategy returning ~11-12 percent per year in a flat market, they’ll do fine over the long run.
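To see what that compounding means in practice, here’s a rough sketch of dollar-cost averaging at an assumed 11 percent a year (illustrative figures only, not a forecast):

```python
def dca_value(annual_contribution, annual_return, years):
    """Future value of a fixed annual contribution compounded at a
    constant annual return (contribution at the start of each year)."""
    value = 0.0
    for _ in range(years):
        value = (value + annual_contribution) * (1 + annual_return)
    return value

# $1,000 a year at 11 percent for 20 years, versus the same
# contributions earning nothing in a flat market: roughly $71k vs $20k.
compounded = dca_value(1000, 0.11, 20)
flat = 1000 * 20
print(round(compounded), flat)
```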

The other responses are outstanding. See them here.

Read Full Post »

The only fair fight in finance: Joel Greenblatt versus himself. In this instance, it’s the 250 best special situations investors in the US on Joel’s special situations site valueinvestorsclub.com versus his Magic Formula.

Wes Gray and crew at Empiritrage have pumped out some great papers over the last few years, and their Man vs. Machine: Quantitative Value or Fundamental Value? is no exception. Wes et al have set up an experiment comparing the performance of the stocks selected by the investors on the VIC – arguably the best 250 special situation investors in the US – and the top decile of stocks selected by the Magic Formula over the period March 1, 2000 through to the end of last year. The stocks had to have a minimum market capitalization of $500 million, were equally weighted and held for 12 months after selection.
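The mechanics of that comparison are simple to sketch: screen for the minimum market capitalization, then take the simple average of the equal-weighted 12-month holding-period returns. The names and numbers below are hypothetical:

```python
def equal_weight_return(returns):
    """Equal-weighted portfolio return: the simple average of the
    individual 12-month holding-period returns."""
    return sum(returns) / len(returns)

def screen(candidates, min_cap=500e6):
    """Keep only names above the minimum market capitalization,
    mirroring the paper's $500 million cutoff."""
    return [r for cap, r in candidates if cap >= min_cap]

# Hypothetical picks: (market cap, 12-month return).
picks = [(750e6, 0.20), (400e6, 0.50), (2e9, -0.05), (600e6, 0.12)]
eligible = screen(picks)  # the $400M name is excluded by the screen
print(round(equal_weight_return(eligible), 2))  # → 0.09
```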

The good news for the stock pickers is that the VIC members handed the Magic Formula its head:

There’s slightly less advantage to the VIC members on a risk/reward basis, but man still comes out ahead:

Gray et al note that the Man-versus-Magic Formula question is a trade-off.

  • Man brings more return, but more risk; Machine has lower return, but less risk.
  • The risk/reward tradeoff is favorable for Man; in other words, the Sharpe ratio is higher for Man relative to Machine.
  • Value strategies dominate regardless of who implements the strategy.
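The risk/reward measure in question is the Sharpe ratio – mean excess return divided by the volatility of returns. A minimal sketch with made-up return streams, chosen so that Man earns more with more risk but still wins on a risk-adjusted basis, as in the paper’s finding:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by the sample
    standard deviation of excess returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Illustrative (made-up) annual return streams: "Man" earns more but
# swings harder; "Machine" earns less with less risk.
man = [0.30, -0.05, 0.35, 0.10, 0.25]
machine = [0.10, -0.02, 0.14, 0.02, 0.06]
print(round(sharpe_ratio(man), 2), round(sharpe_ratio(machine), 2))
```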

Read the rest of the paper here.

Read Full Post »

The rationale for a value-weighted index can be paraphrased as follows:

  • Most investors, pros included, can’t beat the index. Therefore, buying an index fund is better than messing it up yourself or getting an active manager to mess it up for you.
  • If you’re going to buy an index, you might as well buy the best one. An index based on the market capitalization-weighted S&P500 will be handily beaten by an equal-weighted index, which will be handily beaten by a fundamentally weighted index, which is in turn handily beaten by a “value-weighted index,” which is what Greenblatt calls his “Magic Formula-weighted index.”

According to Greenblatt, the second point looks like this:

Market Capitalization-Weight < Equal Weight < Fundamental Weight < “Value Weight” (Greenblatt’s Magic Formula Weight)

In chart form (from Joel Greenblatt’s Value Weighted Index):

There is an argument to be made that the second point could be as follows:

Market Capitalization-Weight < Equal Weight < “Value Weight” (Greenblatt’s Magic Formula Weight) <= Fundamental Weight

Fundamental Weight could potentially deliver better returns than “Value” Weight, if we select the correct fundamentals.

The classic paper on fundamental indexation is the 2004 paper “Fundamental Indexation” by Robert Arnott (Chairman of Research Affiliates), Jason Hsu and Philip Moore. The paper is very readable. Arnott et al argue that it should be possible to construct stock market indexes that are more efficient than those based on market capitalization. From the abstract:

In this paper, we examine a series of equity market indexes weighted by fundamental metrics of size, rather than market capitalization. We find that these indexes deliver consistent and significant benefits relative to standard capitalization-weighted market indexes. These indexes exhibit similar beta, liquidity and capacity compared to capitalization-weighted equity market indexes and have very low turnover. They show annual returns that are on average 213 basis points higher than equivalent capitalization-weighted indexes over the 42 years of the study. They contain most of the same stocks found in the traditional equity market indexes, but the weights of the stocks in these new indexes differ materially from their weights in capitalization-weighted indexes. Selection of companies and their weights in the indexes are based on simple measures of firm size such as book value, income, gross dividends, revenues, sales, and total company employment.

Arnott et al seek to create alternative indices that are as efficient “as the usual capitalization-weighted market indexes, while retaining the many benefits of capitalization-weighting for the passive investor,” which include, for example, lower trading costs and fees than active management.

Interestingly, they find a high degree of correlation between market capitalization-weighted indices and fundamental indexation:

We find most alternative measures of firm size such as book value, income, sales, revenues, gross dividends or total employment are highly correlated with capitalization and liquidity, which means these Fundamental Indexes are also primarily concentrated in the large capitalization stocks, preserving the liquidity and capacity benefits of traditional capitalization-weighted indexes. In addition, as compared with conventional capitalization-weighted indexes, these Fundamental Indexes typically have substantially identical volatilities, and CAPM betas and correlations exceeding 0.95. The market characteristics that investors have traditionally gained exposure to, through holding capitalization-weighted market indexes, are equally accessible through these Fundamental Indexes.

The main problem with the equal-weight indexes we looked at last week is the high turnover to maintain the equal weighting. Fundamental indexation could potentially suffer from the same problem:

Maintaining low turnover is the most challenging aspect in the construction of Fundamental Indexes. In addition to the usual reconstitution, a certain amount of rebalancing is also needed for the Fundamental Indexes. If a stock price goes up 10%, its capitalization also goes up 10%. The weight of that stock in the Fundamental Index will at some interval need to be rebalanced to its Fundamental weight in that index. If the rebalancing periods are too long, the difference between the policy weights and actual portfolio weights becomes so large that some of the suspected negative attributes associated with capitalization weighting may be reintroduced.

Arnott et al construct their indices as follows:

[We] rank all companies by each metric, then select the 1000 largest. Each of these 1000 largest is included in the index, at its relative metric weight, to create the Fundamental Index for that metric. The measures of firm size we use in this study are:

• book value (designated by the shorthand “book” later in this paper),

• trailing five-year average operating income (“income”),

• trailing five-year average revenues (“revenue”),

• trailing five-year average sales (“sales”),

• trailing five-year average gross dividend (“dividend”),

• total employment (“employment”).

We also examine a composite, equally weighting four of the above fundamental metrics of size (“composite”). This composite excludes the total employment because that is not always available, and sales because sales and revenues are so very similar. The four metrics used in the composite are widely available in most countries, so that the Composite Fundamental Index could easily be applied internationally, globally and even in the emerging markets.

The index is rebalanced on the last trading day of each year, using the end of day prices. We hold this portfolio until the end of the next year, at which point we use the most recent company financial information to calculate the following year’s index weights.

We rebalance the index only once a year, on the last trading day of the year, for two reasons. First, the financial data available through Compustat are available only on an annual basis in the earliest years of our study. Second, when we try monthly, quarterly, and semi-annual rebalancing, we increase index turnover but find no appreciable return advantage over annual rebalancing.
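The construction described above – rank all companies by a size metric, take the largest 1000, weight each in proportion to the metric – can be sketched as follows (toy three-stock universe, not the paper’s data):

```python
def fundamental_weights(metric_by_name, n=1000):
    """Arnott-style fundamental index: take the n largest firms by the
    chosen size metric, then weight each in proportion to that metric."""
    top = sorted(metric_by_name.items(), key=lambda kv: kv[1], reverse=True)[:n]
    total = sum(v for _, v in top)
    return {name: v / total for name, v in top}

# Toy universe weighted by book value; only the top 2 make the index.
books = {"A": 60.0, "B": 30.0, "C": 10.0}
weights = fundamental_weights(books, n=2)
print(weights)  # A gets 2/3, B gets 1/3, C is excluded
```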

Performance of the fundamental indices

The returns produced by the fundamental indices are, on average, 1.91 percent higher than the S&P500. The best of the fundamental indexes outpaces the Reference Capitalization index by 2.50% per annum:

Surprisingly, the composite rivals the performance of the average, even though it excludes two of the three best Fundamental Indexes! Most of these indexes outpace the equal-weighted index of the top 1000 by capitalization, with lower risk, lower beta.

Note that the “Reference Capitalization index” is a 1000-stock capitalization-weighted equity market index that bears close resemblance to the highly regarded Russell 1000, although it is not identical. The construction of the Reference Capitalization index allows Arnott et al to “make direct comparisons with the Fundamental Indexes uncomplicated by questions of float, market impact, subjective selections, and so forth.”


In the “value-added” chart Arnott et al examine the correlation of the value added for the various indexes, net of the return for the Reference Capitalization index, with an array of asset classes.

Here, we find differences that are more interesting, though often lacking in statistical significance. The S&P 500 would seem to outpace the Reference Capitalization index when the stock market is rising, the broad US bond market is rising (i.e., interest rates are falling), and high-yield bonds, emerging markets bonds and REITS are performing badly. The Fundamental Indexes have mostly the opposite characteristics, performing best when US and non-US stocks are falling and REITS are rising. Curiously, they mostly perform well when High Yield bonds are rising but Emerging Markets bonds are falling. Also, they tend to perform well when TIPS are rising (i.e., real interest rates are falling). Most of these results are unsurprising; but, apart from the S&P and REIT correlations, most are also not statistically significant.


Arnott et al make some excellent points in the paper:

We believe the performance of these Fundamental Indexes is largely free of data mining. Our selection of size metrics was intuitive and was not selected ex post, based upon results. We use no subjective stock selection or weighting decisions in their construction, and the portfolios are not fine-tuned in any way. Even so, we acknowledge that our research may be subject to the following – largely unavoidable – criticisms:

• we lived through the period covered by this research (1/1962-12/2003); we experienced bubble periods where cap-weighting caused severe destruction of investor wealth, contributing to our concern about the efficacy of capitalization-weighted indexation (the “nifty fifty” of 1971-72, the bubble of 1999-2000) and

• our Fundamental metrics of size, such as book value, revenues, smoothed earnings, total employment, and so forth, all implicitly introduce a value bias, amply documented as possible market inefficiencies or as priced risk factors. (Reciprocally, it can be argued that capitalization-weighted indexes have a growth bias, whereas the Fundamental Indexes do not.)

They also make some interesting commentary about global diversification using fundamental indexation:

For international and global portfolios, it’s noteworthy that Fundamental Indexing introduces a more stable country allocation than capitalization weighting. Just as the Fundamental Indexes smooth the movement of sector and industry allocations to mirror the evolution of each sector or industry’s scale in the overall economy, a global Fundamental Indexes index will smooth the movement of country allocations, mirroring the relative size of each country’s scale in the global economy. In other words, a global Fundamental Indexes index should offer the same advantages as GDP-weighted global indexing, with the same rebalancing “alpha” enjoyed by GDP-weighting. We would argue that the “alpha” from GDP-weighting in international portfolios is perhaps attributable to the elimination of the same capitalization-weighted return drag (from overweighting the overvalued countries and underweighting the undervalued countries) as we observe in the US indexes. This is the subject of some current research that we hope to publish in the coming year.

And finally:

This method outpaces most active managers, by a much greater margin and with more consistency, than conventional capitalization-weighted indexes. This need not argue against active management; it only suggests that active managers have perhaps been using the wrong “market portfolio” as a starting point, making active management “bets” relative to the wrong index. If an active management process can add value, then it should perform far better if it makes active bets against one of these Fundamental Indexes than against capitalization-weighted indexes.

Read Full Post »

Joel Greenblatt’s rationale for a value-weighted index can be paraphrased as follows:

  • Most investors, pros included, can’t beat the index. Therefore, buying an index fund is better than messing it up yourself or getting an active manager to mess it up for you.
  • If you’re going to buy an index, you might as well buy the best one. An index based on the market capitalization-weighted S&P500 will be handily beaten by an equal-weighted index, which will be handily beaten by a fundamentally weighted index, which is in turn handily beaten by a “value-weighted index,” which is what Greenblatt calls his “Magic Formula-weighted index.”

Yesterday we examined the first point. Today let’s examine the second.

Market Capitalization Weight < Equal Weight < Fundamental Weight < “Value Weight” (Greenblatt’s Magic Formula Weight)

I think this chart is compelling:

It shows the CAGRs for a variety of indices over the 20 years to December 31, 2010. The first thing to note is that the equal weight index – represented by the S&P500 Equal Weight TR – has a huge advantage over the market capitalization weighted S&P500 TR. Greenblatt says:

Over time, traditional market-cap weighted indexes such as the S&P 500 and the Russell 1000 have been shown to outperform most active managers. However, market cap weighted indexes suffer from a systematic flaw. The problem is that market-cap weighted indexes increase the amount they own of a particular company as that company’s stock price increases. As a company’s stock falls, its market capitalization falls and a market cap-weighted index will automatically own less of that company. However, over the short term, stock prices can often be affected by emotion. A market index that bases its investment weights solely on market capitalization (and therefore market price) will systematically invest too much in stocks when they are overpriced and too little in stocks when they are priced at bargain levels. (In the internet bubble, for example, as internet stocks went up in price, market cap-weighted indexes became too heavily concentrated in this overpriced sector and too underweighted in the stocks of established companies in less exciting industries.) This systematic flaw appears to cost market-cap weighted indexes approximately 2% per year in return over long periods.

The equal weight index corrects this systematic flaw to a degree (the small correction is still worth 2.7 percent per year in additional performance). Greenblatt describes it as randomizing the errors made by the market capitalization weighted index:

One way to avoid the problem of buying too much of overpriced stocks and too little of bargain stocks in a market-cap weighted index is to create an index that weights each stock in the index equally. An equally-weighted index will still own too much of overpriced stocks and too little of bargain-priced stocks, but in other cases, it will own more of bargain stocks and less of overpriced stocks. Since stocks in the index aren’t affected by price, errors will be random and average out over time. For this reason, equally weighted indexes should add back the approximately 2% per year lost to the inefficiencies of market-cap weighting.

While the errors are randomized in the equal weight index, they are still systematic – it still owns too much of the expensive stocks and too little of the cheap ones. Fundamental weighting corrects this error (again to a small degree). Fundamentally-weighted indexes weight companies based on their economic size using fundamental measures such as sales, book value, cash flow and dividends. The surprising thing is that this change is worth only 0.4 percent per year over equal weighting (still 3.1 percent per year over market capitalization weighting).

Similar to equally-weighted indexes, company weights are not affected by market price and therefore pricing errors are also random. By correcting for the systematic errors caused by weighting solely by market-cap, as tested over the last 40+ years, fundamentally-weighted indexes can also add back the approximately 2% lost each year due to the inefficiencies of market-cap weighting (with the last 20 years adding back even more!).

The Magic Formula / “value” weighted index has a huge advantage over fundamental weighting (+3.9 percent per year), and is a massive improvement over the market capitalization index (+7 percent per year). Greenblatt describes it as follows:

On the other hand, value-weighted indexes seek not only to avoid the losses due to the inefficiencies of market-cap weighting, but to add performance by buying more of stocks when they are available at bargain prices. Value-weighted indexes are continually rebalanced to weight most heavily those stocks that are priced at the largest discount to various measures of value. Over time, these indexes can significantly outperform active managers, market cap-weighted indexes, equally-weighted indexes, and fundamentally-weighted indexes.

I like Greenblatt’s approach. I’ve got two small criticisms:

1. I’m not sure that his Magic Formula weighting is genuine “value” weighting. Contrast Greenblatt’s approach with Dylan Grice’s “Intrinsic Value to Price” or “IVP” approach, which is a modified residual income approach, the details of which I’ll discuss in a later post. Grice’s IVP is a true intrinsic value calculation. He explains his approach in a way reminiscent of Buffett’s approach:

[How] is intrinsic value estimated? To answer, think first about how much you should pay for a going concern. The simplest such example would be that of a bank account containing $100, earning 5% per year interest. This asset is highly liquid. It also provides a stable income. And if I reinvest that income forever, it provides stable growth too. What’s it worth?

Let’s assume my desired return is 5%. The bank account is worth only its book value of $100 (the annual interest payment of $5 divided by my desired return of 5%). It may be liquid, stable and even growing, but since it’s not generating any value over and above my required return, it deserves no premium to book value.

This focus on an asset’s earnings power and, in particular, the ability of assets to earn returns in excess of desired returns is the essence of my intrinsic valuation, which is based on Steven Penman’s residual income model. The basic idea is that if a company is not earning a return in excess of our desired return, that company, like the bank account example above, deserves no premium to book value.
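The residual income idea in the passage above can be sketched directly: value equals book value plus the discounted stream of earnings in excess of the required return on beginning book. Grice’s bank-account case drops out as a special case. This is a simplified sketch (no dividends, no terminal value), not Grice’s actual IVP implementation:

```python
def residual_income_value(book, forecast_earnings, required_return):
    """Simplified residual-income intrinsic value: book value plus the
    discounted stream of earnings in excess of the required return on
    beginning book value. All earnings are retained, for illustration."""
    value = book
    for t, earnings in enumerate(forecast_earnings, start=1):
        residual = earnings - required_return * book
        value += residual / (1 + required_return) ** t
        book += earnings  # retained earnings grow next period's charge
    return value

# The bank account: $100 earning exactly the 5% required return
# generates no residual income, so it is worth book value.
print(round(residual_income_value(100.0, [5.0, 5.25, 5.5125], 0.05), 6))  # → 100.0
```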

And it seems to work:

Grice actually calculates IVP while Greenblatt does not. Does that actually matter? Probably not. Even if it’s not what I think the average person understands real “value” weighting to be, Greenblatt’s approach seems to work. Why quibble over semantics?

2. As I’ve discussed before, Greenblatt’s Magic Formula return owes a great deal to his selection of EBIT/TEV as the price limb of his model. EBIT/TEV has performed very well historically. If we were to substitute EBIT/TEV for P/B, P/E, price-to-dividends, P/S, or P/whatever, we’d have seen slightly better performance than the Magic Formula provided, but we might have been out of the game somewhere between 1997 and 2001.

Read Full Post »

Last week I looked at James Montier’s 2006 paper The Little Note That Beats The Market and his view that investors would struggle to implement the Magic Formula strategy for behavioral reasons, a view borne out by Greenblatt’s own research. This is not a criticism of the strategy, which is tractable and implementable, but an observation on how pernicious our cognitive biases are.

Greenblatt found that a compilation of all the “professionally managed” – read “systematic, automatic (hydromatic)” – accounts earned 84.1 percent over two years against the S&P 500 (up 62.7 percent). A compilation of “self-managed” accounts (the humans) over the same period showed a cumulative return of 59.4 percent, losing to the market by more than 3 percent, and to the machines by almost 25 percent. So the humans took this unmessupable system and messed it up. As predicted by Montier and Greenblatt.


Greenblatt, perhaps dismayed at the fact that he dragged the horses all the way to the water to find they still wouldn’t drink, has a new idea: value-weighted indexing (not to be confused with the academic term for market capitalization-weighting, which is, confusingly, also called value weighting).

I know from speaking to some of you that this is not a particularly popular idea, but I like it. Here’s Greenblatt’s rationale, paraphrased:

  • Most investors, pros included, can’t beat the index. Therefore, buying an index fund is better than messing it up yourself or getting an active manager to mess it up for you.
  • If you’re going to buy an index, you might as well buy the best one. An index based on the market capitalization-weighted S&P500 will be handily beaten by an equal-weighted index, which will be handily beaten by a fundamentally weighted index, which is in turn handily beaten by a “value-weighted index,” which is what Greenblatt calls his “Magic Formula-weighted index.”

I like the logic. I also think the data on the last point are persuasive. In chart form, the data on that last point look like this:

The value weighted index knocked out a CAGR of 16.1 percent per year over the last 20 years. Not bad.

Greenblatt explains his rationale in some depth in his latest book The Big Secret. The book has taken some heavy criticism on Amazon – average review is 3.2 out of 5 as of now – most of which I think is unwarranted (for example, “Like many others here, I do not exactly understand the reason for this book’s existence.”).

I’m going to take a close look at the value-weighted index this week.

Read Full Post »

The excellent Empirical Finance Blog has a superb series of posts on an investment strategy called “Profit and Value” (How “Magic” is the Magic Formula? and The Other Side of Value), which Wes describes as the “academic version” of Joel Greenblatt’s “Magic Formula.” (Incidentally, Greenblatt is speaking at the New York Value Investors Congress in October this year. I think seeing Greenblatt alone is worth the price of admission.) The Profit and Value approach is similar to the Magic Formula in that it ranks stocks independently on “value” and “quality,” and then reranks on the combined rankings. The stock with the lowest combined ranking is the most attractive, the stock with the next lowest combined ranking the next most attractive, and so on.

The Profit and Value strategy differs from the Magic Formula strategy in its methods of determining value and quality. Profit and Value uses straight book-to-market to determine value, where the Magic Formula uses EBIT / TEV. And where the Magic Formula uses EBIT / (NPPE + net working capital) to determine quality, Profit and Value uses “Gross Profitability,” a metric described in a fascinating paper by Robert Novy-Marx called “The other side of value” (more on this later).
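The rank-and-combine machinery the two strategies share can be sketched as follows (hypothetical names and metric values; higher is better for both metrics here):

```python
def combined_rank(value, quality):
    """Rank stocks independently on a value metric and a quality metric
    (higher metric = better = rank 1), then re-rank on the sum of the
    two ranks; the lowest combined rank is the most attractive."""
    def ranks(metric):
        ordered = sorted(metric, key=metric.get, reverse=True)
        return {name: i + 1 for i, name in enumerate(ordered)}
    v, q = ranks(value), ranks(quality)
    combined = {name: v[name] + q[name] for name in value}
    return sorted(combined, key=combined.get)

# Hypothetical names: book-to-market as value, gross profits-to-assets
# as quality.
btm = {"A": 0.9, "B": 0.3, "C": 0.7}
gpa = {"A": 0.5, "B": 0.6, "C": 0.2}
print(combined_rank(btm, gpa))  # → ['A', 'B', 'C']
```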

My prima facie attraction to the Profit and Value strategy was twofold: First, Profit and Value uses book-to-market as the measure of value. I have a long-standing bias for asset-based metrics over income-based ones, and for good reasons. (After examining the performance analysis of Profit and Value, however, I’ve made a permanent switch to another metric that I’ll discuss in more detail later.) Secondly, the back-tested returns to the strategy appear to be considerably higher than those for the Magic Formula. Here’s a chart from Empirical Finance comparing the back-tested returns to each strategy with a yearly rebalancing (click to enlarge):

Profit and Value is the clear, if slight, winner. This is the obvious reason for preferring one strategy over another. It is not, however, the end of the story. There are some problems with the performance of Profit and Value, which I discuss in some detail later. Over the next few weeks I’ll post my full thoughts in a series of posts under the following headings, but, for now, here are the summaries. I welcome any feedback.

Determining “quality” using “gross profitability”

In a 2010 paper called “The other side of value: Good growth and the gross profitability premium,” author Robert Novy-Marx discusses his preference for “gross profitability” over other measures of performance like earnings or free cash flow. The actual “Gross Profitability” factor Novy-Marx uses is as follows:

Gross Profitability = (Revenues – Cost of Goods Sold) / Total Assets

Novy-Marx’s rationale for preferring gross profitability is compelling. First, it makes sense:

Gross profits is the cleanest accounting measure of true economic profitability. The farther down the income statement one goes, the more polluted profitability measures become, and the less related they are to true economic profitability. For example, a firm that has both lower production costs and higher sales than its competitors is unambiguously more profitable. Even so, it can easily have lower earnings than its competitors. If the firm is quickly increasing its sales through aggressive advertising, or commissions to its sales force, these actions can, even if optimal, reduce its bottom line income below that of its less profitable competitors. Similarly, if the firm spends on research and development to further increase its production advantage, or invests in organizational capital that will help it maintain its competitive advantage, these actions result in lower current earnings. Moreover, capital expenditures that directly increase the scale of the firm’s operations further reduce its free cashflows relative to its competitors. These facts suggest constructing the empirical proxy for productivity using gross profits. Scaling by a book-based measure, instead of a market based measure, avoids hopelessly conflating the productivity proxy with book-to-market. I scale gross profits by book assets, not book equity, because gross profits are not reduced by interest payments and are thus independent of leverage.

Second, it works:

In a horse race between these three measures of productivity, gross profits-to-assets is the clear winner. Gross profits-to-assets has roughly the same power predicting the cross section of expected returns as book-to-market. It completely subsumes the earnings based measure, and has significantly more power than the measure based on free cash flows. Moreover, demeaning this variable dramatically increases its power. Gross profits-to-assets also predicts long run growth in earnings and free cash flow, which may help explain why it is useful in forecasting returns.

I think it’s interesting that gross profits-to-assets is as predictive as book-to-market. I can’t recall any other fundamental performance measure that is predictive at all, let alone as predictive as book-to-market (EBIT / (NPPE + net working capital) is not; neither are gross margins, ROE, ROA, or five-year earnings gains). There are, however, some obvious problems with gross profitability as a stand-alone metric. More later.

White knuckles: Profit and Value performance analysis

While Novy-Marx’s “Gross Profitability” factor seems to be predictive, in combination with the book-to-market value factor the results are very volatile. To the extent that an individual investor can ignore this volatility, the strategy will work very well. As an institutional strategy, however, Profit and Value is a widow-maker: the peak-to-trough drawdown on Profit and Value through the 2007-2009 credit crisis would put any professional money manager following the strategy out of business. Second, the strategy selects highly leveraged stocks, and one needs a bigger set of mangoes than I possess to blindly buy them. The second problem – the preference for highly leveraged stocks – contributes directly to the first problem – big drawdowns in a downturn – because investors tend to vomit up highly leveraged stocks as the market falls. Also concerning is the likely performance of Profit and Value in an environment of rising interest rates. Given the negative rates that presently prevail, such an environment seems likely to manifest in the future. I look specifically at the performance of Profit and Value in an environment of rising interest rates.
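The peak-to-trough drawdown referred to above is computed like this (hypothetical equity curve, not the strategy’s actual path through the crisis):

```python
def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# Hypothetical equity curve through a crisis: the fall from the 120
# peak to 45 is a 62.5% drawdown, roughly the kind that ends careers.
curve = [100, 120, 90, 45, 70, 110]
print(max_drawdown(curve))  # → 0.625
```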

A better metric than book-to-market

The performance issues with Profit and Value discussed above – the volatility and the preference for highly leveraged balance sheets – are problems with the book-to-market criterion. As Greenblatt points out in You Can Be a Stock Market Genius, it is partially the leverage embedded in low book-to-market that contributes to the outperformance over the long term. In the short term, however, the leverage can be a problem. There are other problems with cheap book value. As I discussed in The Small Cap Paradox: A problem with LSV’s Contrarian Investment, Extrapolation, and Risk in practice, the low price-to-book decile is very small. James P. O’Shaughnessy discusses this issue in What Works on Wall Street:

The glaring problem with this method, when used with the Compustat database, is that it’s virtually impossible to buy the stocks that account for the performance advantage of small capitalization strategies. Table 4-9 illustrates the problem. On December 31, 2003, approximately 8,178 stocks in the active Compustat database had both year-end prices and a number for common shares outstanding. If we sorted the database by decile, each decile would be made up of 818 stocks. As Table 4-9 shows, market capitalization doesn’t get past $150 million until you get to decile 6. The top market capitalization in the fourth decile is $61 million, a number far too small to allow widespread buying of those stocks.

A market capitalization of $2 million – the cheapest and best-performing decile – is uninvestable. This leads O’Shaughnessy to make the point that “micro-cap stock returns are an illusion”:

The only way to achieve these stellar returns is to invest only a few million dollars in over 2,000 stocks. Precious few investors can do that. The stocks are far too small for a mutual fund to buy and far too numerous for an individual to tackle. So there they sit, tantalizingly out of reach of nearly everyone. What’s more, even if you could spread $2,000,000 over 2,000 names, the bid–ask spread would eat you alive.

Even a small investor will struggle to buy enough stock in the 3rd or 4th deciles, which encompass stocks with market capitalizations below $26 million and $61 million respectively. These are not, therefore, institutional-grade strategies. Says O’Shaughnessy:

This presents an interesting paradox: Small-cap mutual funds justify their investments using academic research that shows small stocks outperforming large ones, yet the funds themselves cannot buy the stocks that provide the lion’s share of performance because of a lack of trading liquidity.

A review of the Morningstar Mutual Fund database proves this. On December 31, 2003, the median market capitalization of the 1,215 mutual funds in Morningstar’s all equity, small-cap category was $967 million. That’s right between decile 7 and 8 from the Compustat universe—hardly small.
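The decile mechanics O’Shaughnessy describes – sort the universe by market capitalization and split it into ten equal buckets – can be sketched as follows. The universe size matches his 8,178-stock example, but the capitalizations are simulated, not Compustat data:

```python
# A sketch of a decile sort by market capitalization: sort the universe
# and split it into ten equal buckets. The 8,178-stock universe size
# matches O'Shaughnessy's example; the caps themselves are simulated.
import numpy as np

rng = np.random.default_rng(1)
market_caps = np.sort(rng.lognormal(mean=4.0, sigma=2.0, size=8178))  # $m

deciles = np.array_split(market_caps, 10)  # ~818 stocks per decile
for i, bucket in enumerate(deciles, start=1):
    print(f"decile {i:2d}: {len(bucket)} stocks, top cap ${bucket[-1]:,.0f}m")
```

With real data, the exercise reproduces his point: the top market capitalization in the smallest deciles is far too low to absorb institutional money.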

I spent some time researching alternatives to book-to-market. As much as it pained me to do so, I’ve now abandoned book-to-market as my primary valuation metric. In fact, I no longer use it at all. I discuss these metrics, and their advantages over book-to-market, in a later post.

Read Full Post »

I burned some digital ink on these pages discussing the utility of quantitative investment processes over more qualitative approaches. The thesis was, in essence, as follows:

  1. Simple statistical models outperform the judgements of the best experts.
  2. Simple statistical models outperform the judgements of the best experts, even when those experts are given access to the simple statistical model.

The reason? Humans are fallible, emotional and subject to all sorts of biases. They perform better when they are locked into some process (see here, here, here and here for the wordier versions).

I also examined some research on the performance of quantitative funds and their more qualitative brethren. The findings were as one might expect given the foregoing:

[Ludwig] Chincarini [the author] finds that “both quantitative and qualitative hedge funds have positive risk-adjusted returns,” but, “overall, quantitative hedge funds as a group have higher [alpha] than qualitative hedge funds.”

All well and good. And then Morningstar spoils the party with their take on the matter:

The ups and downs of stocks since the credit crisis began roiling the equity markets in 2007 haven’t been kind to most stock-fund managers. But those who use quantitative stock-picking models have had an especially difficult time.

What went wrong?

Many quant funds rely primarily on models that pick stocks based on value, momentum, and quality factors. Those that do have been hit by a double whammy lately. Value models let quants down first. Stocks that looked attractive to value models just kept getting cheaper in the depths of the October 2007-March 2009 bear market. “All kinds of value signals let you down, and they’re a key part of many quant models,” said Sandip Bhagat, Vanguard’s head of equities and a longtime quant investor.

Morningstar quotes Robert Jones of GSAM, who argues that “quant managers need more secondary factors”:

Robert Jones, former longtime head of Goldman Sachs Asset Management’s large quant team and now a senior advisor for the team, recently asserted in the Journal of Portfolio Management that both value and momentum signals have been losing their effectiveness as more quant investors managing more assets have entered the fray. Instead, he calls for quant managers to search for more-sophisticated and proprietary measures to add value by looking at less-widely available nonelectronic data, or data from related companies such as suppliers and customers. Other quants have their doubts about the feasibility of such developments. Vanguard’s Bhagat, for example, thinks quant managers need more secondary factors to give them the upper hand, but he also wonders how many new factors exist. “There are so many smart people sorting through the same data,” he said. Ted Aronson of quant firm Aronson+Johnson+Ortiz is more blunt: “We’re not all going to go out and stumble on some new source of alpha.”

Jones’s comments echo Robert Litterman’s refrain (also of GSAM) in Goldman Sachs says P/B dead-as-dead; Special sits and event-driven strategies the new black. Litterman argued that only special situations and event-driven strategies that focus on mergers or restructuring provide opportunities for profit:

What we’re going to have to do to be successful is to be more dynamic and more opportunistic and focus especially on more proprietary forecasting signals … and exploit shorter-term opportunistic and event-driven types of phenomenon.

Read Full Post »

In A Crisis In Quant Confidence*, Abnormal Returns has a superb post on Scott Patterson’s recounting in his book The Quants of the reactions of several quantitative fund managers to the massive reversal in 2007:

In 2007 everything seemed to go wrong for these quants, who, up until this point in time, had been coining profits.

This inevitably led to some introspection on the part of these investors as they saw their funds take massive performance hits.  Nearly all were forced to reduce their positions and risks in light of this massive drawdown.  In short, these investors were looking at their models seeing where they went wrong.  Patterson writes:

Throttled quants everywhere were suddenly engaged in a prolonged bout of soul-searching, questioning whether all their brilliant strategies were an illusion, pure luck that happened to work during a period of dramatic growth, economic prosperity, and excessive leverage that lifted everyone’s boat.

Here Patterson puts his finger on the question that vexes anyone who has ever invested, made money for a time and then given some back: Does my strategy actually work or have I been lucky? It’s what I like to call The Fear, and there’s really no simple salve for it.

The complicating factor in the application of any investing strategy, and the basis for The Fear, is that even exceptionally well-performing strategies will both underperform the market and have negative periods that can extend for three, five or, on rare occasions, more years. Take, for example, the following back-test of a simple value strategy over the period 2002 to the present. The portfolio consisted of thirty stocks drawn from the Russell 3000, rebalanced daily and allowing 0.5% for slippage:

[Chart: cumulative return of the simple value strategy vs the Russell 3000, 2002 to the present]

The simple value strategy returns a comically huge 2,450% over the 8 1/4 years, leaving the Russell 3000 Index in its wake (the Russell 3000 is up 9% for the entire period). 2,450% over the 8 1/4 years is an average annual compound return of 47%. That annual compound return figure is, however, misleading. It’s not a smooth upward ride at a 47% rate from 100 to 2,550. There are periods of huge returns, and, as the next chart shows, periods of substantial losses:

[Chart: the same simple value strategy, showing periods of substantial losses]

From January 2007 to December 2008, the simple value strategy lost 20% of its value, and was down 40% at its nadir. Taken from 2006, the strategy is square. That’s three years with no returns to show for it. It’s hard to believe that the two charts show the same strategy. If your investment experience starts in a down period like this, I’d suggest that you’re unlikely ever to use that strategy again. If you’re a professional investor and your fund launches into one of these periods, you’re driving trucks. Conversely, if you started in 2002 or 2009, your returns were excellent, and you’re a genius. Neither conclusion is a fair one.
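The two headline numbers above – the compound annual rate implied by a cumulative return, and the peak-to-trough drawdown – can be sketched with a few lines of arithmetic (the equity curve below is illustrative, not the back-test data):

```python
# Back-of-the-envelope versions of the two figures discussed above: the
# compound annual growth rate implied by a cumulative return, and the
# worst peak-to-trough drawdown of an equity curve.
def cagr(cumulative_return_pct: float, years: float) -> float:
    """Annual compound rate implied by a total percentage gain."""
    return (1 + cumulative_return_pct / 100) ** (1 / years) - 1

def max_drawdown(equity_curve) -> float:
    """Worst peak-to-trough loss along an equity curve, as a negative fraction."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = min(worst, value / peak - 1)
    return worst

# 2,450% over 8.25 years compounds at close to the annual rate quoted above.
print(round(cagr(2450, 8.25), 3))

# A hypothetical curve that peaks at 150 and troughs at 90 is down 40% at the nadir.
print(max_drawdown([100, 120, 150, 130, 90, 110, 140]))
```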

Abnormal Returns says of the correct conclusion to draw from performance:

An unexpectedly large drawdown may mark the failure of the model or may simply be the result of bad luck. The fact is that the decision will only be validated in hindsight. In either case it represents a chink in the armor of the human-free investment process. Ultimately every portfolio is run by a (fallible) human, whether they choose to admit it or not.

In this respect quantitative investing is not unlike discretionary investing. At some point every investor will face the choice of continuing to use their method despite losses or choosing to modify or replace the current methodology. So while quantitative investing may automate much of the investment process it still requires human input. In the end every quant model has a human with their hand on the power plug ready to pull it if things go badly wrong.

At an abstract, intellectual level, an adherence to a philosophy like value – with its focus on logic, discipline and character – alleviates some of the pain. Value answers the first part of the question above, “Does my strategy actually work?” Yes, I believe value works. The various academic studies that I’m so fond of quoting (for example, Value vs Glamour: A Global Phenomenon and Contrarian Investment, Extrapolation and Risk) confirm for me that value is a real phenomenon. I acknowledge, however, that that view is grounded in faith. We can call it logic and back-test it to an atomic level over an eon, but, ultimately, we have to accept that we’re value investors for reasons peculiar to our personalities, and not because we’re men and women of reason and rationality. It’s some comfort to know that greater minds have used the philosophy and profited. In my experience, however, abstract intellectualism doesn’t keep The Fear at bay at 3.00am. Neither does it answer the second part of the question, “Am I a value investor, or have I just been lucky?”

As an aside, whenever I see back-test results like the ones above (or like those in the Net current asset value and net net working capital back-test refined posts) I am reminded of Marcus Brutus’s oft-quoted line to Cassius in Shakespeare’s Julius Caesar:

There is a tide in the affairs of men,

Which, taken at the flood, leads on to fortune;

Omitted, all the voyage of their life

Is bound in shallows and in miseries.

As the first chart above shows, in 2002 or 2009, the simple value strategy was in flood, and led on to fortune. Without those two periods, however, the strategy seems “bound in shallows and in miseries.” Brutus’s line seems apt, and it is, but not for the obvious reason. In the scene in Julius Caesar from which Brutus’s line is drawn, Brutus tries to persuade Cassius that they must act because the tide is at the flood (“On such a full sea are we now afloat; And we must take the current when it serves, Or lose our ventures.”). What goes unsaid, and what Brutus and Cassius discover soon enough, is that a sin of commission is deadlier than a sin of omission. The failure to take the tide at the flood leads to a life “bound in shallows and in miseries,” but taking the tide at the flood sometimes leads to death on a battlefield. It’s a stirring call to arms, and that’s why it’s quoted so often, but it’s worth remembering that Brutus and Cassius don’t see the play out.

* Yes, the link is to classic.abnormalreturns. I like my Abnormal Returns like I like my Coke.

Read Full Post »

As I’ve discussed in the past, P/B and P/E are demonstrably useful as predictors of future stock returns, and more so when combined (see, for example, LSV’s Two-Dimensional Classifications). As Josef Lakonishok, Andrei Shleifer, and Robert Vishny showed in Contrarian Investment, Extrapolation, and Risk, within the set of firms whose B/M ratios are the highest (in other words, the lowest price-to-book value), further sorting on the basis of another value variable – whether it be C/P, E/P or low GS – enhances returns. In that paper, LSV concluded that value strategies based jointly on past performance and expected future performance produce higher returns than “more ad hoc strategies such as that based exclusively on the B/M ratio.” A new paper further discusses the relationship between E/P and B/P from an accounting perspective, and the degree to which E/P and B/P together predict stock returns.

The CXO Advisory Group Blog, fast becoming one of my favorite sites for new investment research, has a new post, Combining E/P and B/P, on a December 2009 paper titled “Returns to Buying Earnings and Book Value: Accounting for Growth and Risk” by Francesco Reggiani and Stephen Penman. Penman and Reggiani looked at the relationship between E/P and B/P from an accounting perspective:

This paper brings an accounting perspective to the issue: earnings and book values are accounting numbers so, if the two ratios indicate risk and return, it might have something to do with accounting principles for measuring earnings and book value.

Indeed, an accounting principle connects earnings and book value to risk: under uncertainty, accounting defers the recognition of earnings until the uncertainty has largely been resolved. The deferral of earnings to the future reduces book value, reduces short-term earnings relative to book value, and increases expected long-term earnings growth.

CXO summarize the authors’ methodology and findings as follows:

Using monthly stock return and firm financial data for a broad sample of U.S. stocks spanning 1963-2006 (153,858 firm-years over 44 years), they find that:

  • E/P predicts stock returns, consistent with the idea that it measures risk to short-term earnings.
  • B/P predicts stock returns, consistent with the idea that it measures accounting deferral of risky earnings and therefore risk to both short-term and long-term earnings. This perspective disrupts the traditional value-growth paradigm by associating expected earnings growth with high B/P.
  • For a given E/P, B/P therefore predicts incremental return associated with expected earnings growth. A joint sort on E/P and B/P discovers this incremental return and therefore generates higher returns than a sort on E/P alone, attributable to additional risk (see the chart below).
  • Results are somewhat stronger for the 1963-1984 subperiod than for the 1985-2006 subperiod.
  • Results using consensus analyst forecasts rather than lagged earnings to calculate E/P over the 1977-2006 subperiod are similar, but not as strong.

CXO set out Penman and Reggiani’s “core results” in the following table (constructed by CXO from Penman and Reggiani’s results):

The following chart, constructed from data in the paper, compares average annual returns for four sets of quintile portfolios over the entire 1963-2006 sample period, as follows:

  • “E/P” sorts on lagged earnings yield.
  • “B/P” sorts on lagged book-to-price ratio.
  • “E/P:B/P” sorts first on E/P and then sorts each E/P quintile on B/P. Reported returns are for the nth B/P quintile within the nth E/P quintile (n-n).
  • “B/P:E/P” sorts first on B/P and then sorts each B/P quintile on E/P. Reported returns are for the nth E/P quintile within the nth B/P quintile (n-n).

Start dates for return calculations are three months after fiscal year ends (when annual financial reports should be available). The holding period is 12 months. Results show that double sorts generally enhance performance discrimination among stocks. E/P measures risk to short-term earnings and therefore short-term earnings growth. B/P measures risk to short-term earnings and earnings growth and therefore incremental earnings growth. The incremental return for B/P is most striking in the low E/P quintile.
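The conditional double sort described above – quintiles on E/P first, then quintiles on B/P within each E/P bucket, keeping the n-n diagonal portfolios – can be sketched in pandas. The data below is randomly generated, purely to show the mechanics:

```python
# A sketch of the Penman-Reggiani style double sort: quintile on E/P,
# then quintile on B/P within each E/P bucket, and keep the "n-n"
# diagonal portfolios. The 500-stock dataset is simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
stocks = pd.DataFrame({
    "ep":  rng.normal(0.07, 0.03, 500),   # lagged earnings yield
    "bp":  rng.normal(0.80, 0.40, 500),   # lagged book-to-price
    "ret": rng.normal(0.10, 0.30, 500),   # subsequent 12-month return
})

# First sort into E/P quintiles, then into B/P quintiles within each.
stocks["ep_q"] = pd.qcut(stocks["ep"], 5, labels=False)
stocks["bp_q"] = stocks.groupby("ep_q")["bp"].transform(
    lambda s: pd.qcut(s, 5, labels=False)
)

# The n-n portfolios: nth B/P quintile within the nth E/P quintile.
diagonal = stocks[stocks["ep_q"] == stocks["bp_q"]]
print(diagonal.groupby("ep_q")["ret"].mean())
```

With real return data, the diagonal portfolios are where the paper finds the incremental B/P return showing up most clearly.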

The paper also discusses in some detail a phenomenon that I find deeply fascinating, mean reversion in earnings predicted by low price-to-book values:

Research (in Fama and French 1992, for example) shows that book-to-price (B/P) also predicts stock returns, so consistently so that Fama and French (1993 and 1996) have built an asset pricing model based on the observation. The same discussion of rational pricing versus market inefficiency ensues but, despite extensive modeling (and numerous conjectures), the phenomenon remains a mystery. The mystery deepens when it is said that B/P is inversely related to earnings growth while positively related to returns; low B/P stocks (referred to as “growth” stocks) yield lower returns than high B/P stocks (“value” stocks). Yet investment professionals typically think of growth as risky, requiring higher returns, consistent with the risk-return notion that one cannot buy more earnings (growth) without additional risk.

(emphasis mine)

The paper adds further weight to the predictive ability of low price-to-book value and low price-to-earnings ratios. Its conclusion that book-to-price indicates expected returns associated with expected earnings growth is particularly interesting, and accords with the findings of Werner F.M. De Bondt and Richard H. Thaler in Further Evidence on Investor Overreaction and Stock Market Seasonality.

Read Full Post »

One of the most interesting ideas suggested by Ian Ayres’s book Super Crunchers is the role of humans in the implementation of a quantitative investment strategy. As we know from Andrew McAfee’s Harvard Business Review blog post, The Future of Decision Making: Less Intuition, More Evidence, and James Montier’s 2006 research report, Painting By Numbers: An Ode To Quant, in context after context, simple statistical models outperform expert judgements. Further, decision makers who, when provided with the output of the simple statistical model, wave off the model’s predictions tend to make poorer decisions than the model. The reason? We are overconfident in our abilities. We tend to think that restraints are useful for the other guy but not for us. Ayres provides a great example in his article, How computers routed the experts:

To cede complete decision-making power to a statistical algorithm is in many ways unthinkable.

The problem is that discretionary escape hatches have costs too. In 1961, the Mercury astronauts insisted on a literal escape hatch. They balked at the idea of being bolted inside a capsule that could only be opened from the outside. They demanded discretion. However, it was discretion that gave Liberty Bell 7 astronaut Gus Grissom the opportunity to panic upon splashdown. In Tom Wolfe’s memorable account, The Right Stuff, Grissom “screwed the pooch” when he prematurely blew the 70 explosive bolts securing the hatch before the Navy SEALs were able to secure floats. The space capsule sank and Grissom nearly drowned.

The natural question, then, is, “If humans can’t even be trusted with a small amount of discretion, what role do they play in the quantitative investment scenario?”

What does all this mean for human endeavour? If we care about getting the best decisions overall, there are many contexts where we need to relegate experts to supporting roles in the decision-making process. We, like the Mercury astronauts, probably can’t tolerate a system that forgoes any possibility of human override, but at a minimum, we should keep track of how experts fare when they wave off the suggestions of the formulas. And we should try to limit our own discretion to places where we do better than machines.

This is in many ways a depressing story for the role of flesh-and-blood people in making decisions. It looks like a world where human discretion is sharply constrained, where humans and their decisions are controlled by the output of machines. What, if anything, in the process of prediction can we humans do better than the machines?

The answer is that we formulate the factors to be tested. We hypothesise. We dream.

The most important thing left to humans is to use our minds and our intuition to guess at what variables should and should not be included in statistical analysis. A statistical regression can tell us the weights to place upon various factors (and simultaneously tell us how, precisely, it was able to estimate these weights). Humans, however, are crucially needed to generate the hypotheses about what causes what. The regressions can test whether there is a causal effect and estimate the size of the causal impact, but somebody (some body, some human) needs to specify the test itself.

So the machines still need us. Humans are crucial not only in deciding what to test, but also in collecting and, at times, creating the data. Radiologists provide important assessments of tissue anomalies that are then plugged into the statistical formulas. The same goes for parole officials who judge subjectively the rehabilitative success of particular inmates. In the new world of database decision-making, these assessments are merely inputs for a formula, and it is statistics – and not experts – that determine how much weight is placed on the assessments.
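Ayres’s division of labour – the human hypothesises which factors matter, the regression estimates the weights – can be sketched with simulated return data and two hypothetical factors:

```python
# The human chooses the candidate factors (the hypothesis); the regression
# estimates the weights. The factors, returns, and true weights below are
# all simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
value = rng.normal(size=n)       # human-chosen factor 1
momentum = rng.normal(size=n)    # human-chosen factor 2
returns = 0.5 * value + 0.2 * momentum + rng.normal(scale=0.1, size=n)

X = np.column_stack([value, momentum])
weights, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(np.round(weights, 1))  # roughly [0.5, 0.2]
```

The machine recovers the weights; it could never have proposed "value" or "momentum" as candidates in the first place.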

In investment terms, this means honing the strategy. LSV Asset Management, described by James Montier as being a “fairly normal” quantitative fund (as opposed to being “rocket scientist uber-geeks”) and authors of the landmark Contrarian Investment, Extrapolation and Risk paper, describe the ongoing role of the humans in its funds as follows (emphasis mine):

A proprietary investment model is used to rank a universe of stocks based on a variety of factors we believe to be predictive of future stock returns. The process is continuously refined and enhanced by our investment team although the basic philosophy has never changed – a combination of value and momentum factors.

The blasphemy about momentum aside, the refinement and enhancement process sounds like fun to me.

Read Full Post »

