The excellent Empirical Finance Blog has a superb series of posts on an investment strategy called “Profit and Value” (How “Magic” is the Magic Formula? and The Other Side of Value), which Wes describes as the “academic version” of Joel Greenblatt’s “Magic Formula.” (Incidentally, Greenblatt is speaking at the New York Value Investors Congress in October this year. I think seeing Greenblatt alone is worth the price of admission.) The Profit and Value approach is similar to the Magic Formula in that it ranks stocks independently on “value” and “quality,” and then reranks on the combined rankings. The stock with the lowest combined ranking is the most attractive, the stock with the next lowest combined ranking the next most attractive, and so on.
The Profit and Value strategy differs from the Magic Formula strategy in its methods of determining value and quality. Profit and Value uses straight book-to-market to determine value, where the Magic Formula uses EBIT / TEV. And where the Magic Formula uses EBIT / (net PP&E + net working capital) to determine quality, Profit and Value uses “Gross Profitability,” a metric described in a fascinating paper by Robert Novy-Marx called “The Other Side of Value” (more on this later).
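To make the ranking mechanics concrete, here’s a minimal sketch of the combined-rank procedure in Python. The tickers and fundamentals are invented for illustration; this is not either strategy’s actual universe or data:

```python
# Sketch of the combined ranking used by Profit and Value (and, with
# different metrics, by the Magic Formula). All figures are made up.

stocks = {
    # ticker: (book_value, market_cap, gross_profit, total_assets)
    "AAA": (50, 40, 30, 100),
    "BBB": (80, 200, 90, 150),
    "CCC": (20, 25, 10, 60),
    "DDD": (60, 90, 70, 120),
}

def book_to_market(book, mkt, gp, assets):
    return book / mkt

def gross_profitability(book, mkt, gp, assets):
    return gp / assets

def combined_rank(stocks):
    """Rank independently on each factor (1 = most attractive),
    sum the two ranks, and sort by the combined rank."""
    def ranks(metric):
        ordered = sorted(stocks, key=lambda t: metric(*stocks[t]), reverse=True)
        return {t: i + 1 for i, t in enumerate(ordered)}
    value_rank = ranks(book_to_market)
    quality_rank = ranks(gross_profitability)
    combined = {t: value_rank[t] + quality_rank[t] for t in stocks}
    return sorted(stocks, key=lambda t: combined[t])

print(combined_rank(stocks))  # → ['AAA', 'BBB', 'DDD', 'CCC']
```

Note that AAA is the cheapest stock but only middling on quality; it tops the list because no other stock scores well on both measures at once, which is the whole idea of the combined rank.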
My prima facie attraction to the Profit and Value strategy was twofold: First, Profit and Value uses book-to-market as the measure of value. I have a long-standing bias for asset-based metrics over income-based ones, and for good reasons. (After examining the performance analysis of Profit and Value, however, I’ve made a permanent switch to another metric that I’ll discuss in more detail later.) Secondly, the back-tested returns to the strategy appear to be considerably higher than those for the Magic Formula. Here’s a chart from Empirical Finance comparing the back-tested returns to each strategy with yearly rebalancing:
Profit and Value is the slight winner. This is the obvious reason for preferring one strategy over the other. It is not, however, the end of the story. There are some problems with the performance of Profit and Value, which I discuss in some detail later. Over the next few weeks I’ll post my full thoughts in a series of posts under the following headings, but, for now, here are the summaries. I welcome any feedback.
Determining “quality” using “gross profitability”
In a 2010 paper called “The other side of value: Good growth and the gross profitability premium,” author Robert Novy-Marx discusses his preference for “gross profitability” over other measures of performance like earnings or free cash flow. The actual “Gross Profitability” factor Novy-Marx uses is as follows:
Gross Profitability = (Revenues – Cost of Goods Sold) / Total Assets
Novy-Marx’s rationale for preferring gross profitability is compelling. First, it makes sense:
Gross profits is the cleanest accounting measure of true economic profitability. The farther down the income statement one goes, the more polluted profitability measures become, and the less related they are to true economic profitability. For example, a firm that has both lower production costs and higher sales than its competitors is unambiguously more profitable. Even so, it can easily have lower earnings than its competitors. If the firm is quickly increasing its sales through aggressive advertising, or commissions to its sales force, these actions can, even if optimal, reduce its bottom line income below that of its less profitable competitors. Similarly, if the firm spends on research and development to further increase its production advantage, or invests in organizational capital that will help it maintain its competitive advantage, these actions result in lower current earnings. Moreover, capital expenditures that directly increase the scale of the firm’s operations further reduce its free cashflows relative to its competitors. These facts suggest constructing the empirical proxy for productivity using gross profits. Scaling by a book-based measure, instead of a market based measure, avoids hopelessly conflating the productivity proxy with book-to-market. I scale gross profits by book assets, not book equity, because gross profits are not reduced by interest payments and are thus independent of leverage.
Second, it works:
In a horse race between these three measures of productivity, gross profits-to-assets is the clear winner. Gross profits-to-assets has roughly the same power predicting the cross section of expected returns as book-to-market. It completely subsumes the earnings based measure, and has significantly more power than the measure based on free cash flows. Moreover, demeaning this variable dramatically increases its power. Gross profits-to-assets also predicts long run growth in earnings and free cash flow, which may help explain why it is useful in forecasting returns.
I think it’s interesting that gross profits-to-assets is as predictive as book-to-market. I can’t recall any other fundamental performance measure that is predictive at all, let alone as predictive as book-to-market (EBIT / (net PP&E + net working capital) is not; neither are gross margins, ROE, ROA, or five-year earnings gains). There are, however, some obvious problems with gross profitability as a stand-alone metric. More later.
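The “demeaning” Novy-Marx mentions means subtracting a group average – as I read the paper, an industry average – so that each stock’s gross profits-to-assets is measured against its peers rather than the whole market. A sketch with hypothetical figures:

```python
from collections import defaultdict

# Hypothetical (ticker, industry, gross profits-to-assets) rows.
data = [
    ("AAA", "retail", 0.40),
    ("BBB", "retail", 0.25),
    ("CCC", "software", 0.70),
    ("DDD", "software", 0.55),
]

def demean_by_group(rows):
    """Subtract each group's mean so every stock is compared
    with its peers, not with the whole market."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _, grp, x in rows:
        sums[grp] += x
        counts[grp] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    return {t: x - means[g] for t, g, x in rows}

print(demean_by_group(data))
```

CCC’s raw gross profitability (0.70) dwarfs AAA’s (0.40), but demeaned within industry the two look equally attractive – which is the kind of re-ordering that gives the demeaned variable its extra power.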
White knuckles: Profit and Value performance analysis
While Novy-Marx’s “Gross Profitability” factor seems to be predictive, in combination with the book-to-market value factor the results are very volatile. To the extent that an individual investor can ignore this volatility, the strategy will work very well. As an institutional strategy, however, Profit and Value is a widow-maker. First, the peak-to-trough drawdown on Profit and Value through the 2007–2009 credit crisis would put any professional money manager following the strategy out of business. Second, the strategy selects highly leveraged stocks, and one needs a bigger set of mangoes than I possess to blindly buy them. The second problem – the preference for highly leveraged stocks – contributes directly to the first – big drawdowns in a downturn – because investors tend to vomit up highly leveraged stocks as the market falls. Also concerning is the likely performance of Profit and Value in an environment of rising interest rates. Given the negative real rates that presently prevail, such an environment seems likely to manifest in the future. I look specifically at the performance of Profit and Value in an environment of rising interest rates.
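For reference, the peak-to-trough drawdown statistic at issue can be computed from an equity curve as follows (the curve below is invented, not the strategy’s actual results):

```python
def max_drawdown(values):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = values[0]
    worst = 0.0
    for v in values:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

# A hypothetical equity curve through a 2007-2009-style crash:
curve = [100, 120, 150, 90, 60, 80, 110]
print(max_drawdown(curve))  # 0.6: a 60% peak-to-trough drawdown
```

Note that the strategy in this toy example finishes above where it started, yet an investor who redeemed at the trough (or a manager whose clients did) ate the full 60% loss – which is why drawdown, not just terminal return, decides whether a strategy is institutionally viable.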
A better metric than book-to-market
The performance issues with Profit and Value discussed above – the volatility and the preference for highly leveraged balance sheets – are problems with the book-to-market criterion. As Greenblatt points out in his book You Can Be a Stock Market Genius, it is partially the leverage embedded in low price-to-book stocks that contributes to their outperformance over the long term. In the short term, however, the leverage can be a problem. There are other problems with cheap book value. As I discussed in The Small Cap Paradox: A problem with LSV’s Contrarian Investment, Extrapolation, and Risk in practice, the low price-to-book decile is very small. James P. O’Shaughnessy discusses this issue in What Works on Wall Street:
The glaring problem with this method, when used with the Compustat database, is that it’s virtually impossible to buy the stocks that account for the performance advantage of small capitalization strategies. Table 4-9 illustrates the problem. On December 31, 2003, approximately 8,178 stocks in the active Compustat database had both year-end prices and a number for common shares outstanding. If we sorted the database by decile, each decile would be made up of 818 stocks. As Table 4-9 shows, market capitalization doesn’t get past $150 million until you get to decile 6. The top market capitalization in the fourth decile is $61 million, a number far too small to allow widespread buying of those stocks.
A market capitalization of $2 million – the cheapest and best-performing decile – is uninvestable. This leads O’Shaughnessy to make the point that “micro-cap stock returns are an illusion”:
The only way to achieve these stellar returns is to invest only a few million dollars in over 2,000 stocks. Precious few investors can do that. The stocks are far too small for a mutual fund to buy and far too numerous for an individual to tackle. So there they sit, tantalizingly out of reach of nearly everyone. What’s more, even if you could spread $2,000,000 over 2,000 names, the bid–ask spread would eat you alive.
Even a small investor will struggle to buy enough stock in the 3rd or 4th deciles, which encompass stocks with market capitalizations below $26 million and $61 million respectively. These are not, therefore, institutional-grade strategies. Says O’Shaughnessy:
This presents an interesting paradox: Small-cap mutual funds justify their investments using academic research that shows small stocks outperforming large ones, yet the funds themselves cannot buy the stocks that provide the lion’s share of performance because of a lack of trading liquidity.
A review of the Morningstar Mutual Fund database proves this. On December 31, 2003, the median market capitalization of the 1,215 mutual funds in Morningstar’s all equity, small-cap category was $967 million. That’s right between decile 7 and 8 from the Compustat universe—hardly small.
I spent some time researching alternatives to book-to-market. As much as it pained me to do so, I’ve now abandoned book-to-market as my primary valuation metric. In fact, I no longer use it at all. I discuss these metrics, and their advantages over book value, in a later post.
This guy has a very simple growth model with three factors that also has good performance. I think discipline is the key in quantitative investing.
http://www.charleskirkpatrick.com/
Is he a technician?
Yes, but his model is based on earnings momentum and valuation, with a technical filter, which is price momentum.
There is also another guy who uses quant, but in a fusion model. He also has an excellent record, and he just authored a book that explains his methods.
http://www.amazon.com/Fusion-Analysis-Fundamental-Technical-Risk-Adjusted/dp/0071629386
The key takeaway for me is that, regardless of your investment method, a quant filter adds some discipline and value to the investment process.
This seems like a superior strategy because p/s isn’t marred by accounting measures of book value. Of course, you would have to compare stocks within the same industry; a low p/s for a supermarket is not the same as for a software co. I think you’ve mentioned p/s before, but I couldn’t find it on your site.
http://www.businessinsider.com/oshaughnessys-cornerstone-growth-screen-beating-the-market-through-relative-strength-and-price-to-sales-2011-4
Maybe it’s just me, but I find Table 4-9 a little confusing relative to the points you are making. Can you explain what it is showing? It looks like O’Shaughnessy sorted the market by market cap, right? His/your point being that there are a very large number of very small stocks relative to the number of larger, more investable stocks in the investment universe?
I guess to tie it all together, based on this and some of your prior posts, what is the market cap range of the best performing deciles on a price/book basis?
Table 4-9 shows the largest stock by market capitalization for each of the deciles ranked by price-to-book value. The best-performing decile is Decile 1, which had a maximum market capitalization of $2 million in 2005 (and is likely to be of a similar size now). The table shows that most of the performance of price-to-book value comes from companies that are too small for most institutional investors.
Are you absolutely sure? I got my hands on the 3rd edition, and where this table is shown (Chapter 4, Ranking Stocks By Market Capitalization: Size Matters) there is no mention of the price/book ratio (or any other valuation metric for that matter) anywhere in this chapter. All I see are charts and tables of returns broken out by cap size.
Back to the drawing board? I hope so, anyway. While the net-net opportunities of the crisis may be gone for good, your analysis/effort is certainly better than average; why not apply this to the mid-to-lower p/b deciles? I’m confident you will find bargains over time…
You’re right. It refers only to market capitalization in the context of small capitalization stocks, and not in reference to price-to-book value. A thousand apologies.
You write: “Secondly, the back-tested returns to the strategy appear to be considerably higher than those for the Magic Formula,” and “Profit and Value is the clear winner.”
Is it though? According to Empirical Finance, he says underneath this same chart, “Again, slight edge for profit and value, but very little overall difference.” I’m confused, as this chart shows the MF returns are quite comparable to the Profit and Value results…
Also, in an earlier post you show the results of a P/TB broken out by decile here(https://greenbackd.com/2010/10/29/donald-g-smith-in-the-fall-2010-graham-and-doddsville-newsletter/). Do you know the source of this data? (I didn’t see it cited in the newsletter) I would be curious to know the market cap ranges across these deciles. Even if the smallest of the microcap stocks capture the greatest returns over time, I would like to think there are some somewhat larger, investable securities in the lower P/TBV deciles…
You make a fair point. It’s a small differential on an annualized basis, but it’s clearly in Profit and Value’s favor. The point I was trying to make in the paragraph was that, while Profit and Value’s return seems to be higher than the Magic Formula’s, that’s only half the story. An examination of the return shows various problems that I will discuss in a future post.
As to the source of the p/tb data in that chart, I have no idea. In my own fiddling with backtest data I have found that p/tb underperforms p/b. I don’t know why.
If you don’t use book-to-market again, how will you apply the new method to financials?
Good question. I ignore them. From recollection, most academic papers also ignore financials because they don’t submit easily to this kind of analysis. I think you need some specialized knowledge, which I don’t possess.
Mr Graham faced the same problem for years. However, at one point, he suggested that the “same arithmetical standards for price in relation to earnings and book value” be applied to financial enterprises as those for industrials. See page 196 of Intelligent Investor 4th revised edition. That statement appears in the context of portfolio selection for a defensive investor, but I think it applies equally for an enterprising investor.
Speaking for myself, I’ve had some luck buying some small financials for prices less than 66.67% of NTA per share. They are very hard to find, and so it’s difficult to construct a sample big enough to test the hypothesis. Still, I’ve had some luck so far in this area.
Great post! This is very interesting stuff; love the use of GM as your profit metric. Your point about a rising interest rate environment is a good one too.
Great to have you back!
I am curious to hear your thoughts regarding the backtesting of the Magic Formula that you included from Empirical Finance Blog. The backtest indicates that the magic formula only outperforms the market by about 3% per year, which is far less than the 8% to 18% outperformance that Greenblatt indicates in his book. What do you think is the most likely explanation for the discrepancy? Is it a data issue? Is the analysis being done differently? Is someone making a mistake?
I’d love to hear your thoughts.
That’s a good question, and the short answer is, “I don’t know.” I’ve done a great deal of backtesting, and believe that, for the most part, all results should be treated skeptically. Putting aside for the moment issues about the data, the results are rarely reliable. Small changes in start dates, market capitalization, or frequency of rebalancing can have a huge impact on the results. For example, if a test starts at a market low, the results will not reflect the usual experience. This is true also if a test rebalances on a market low, e.g. rebalancing annually on March 1 will pick the bottom in 2009, and the results will be much better than for a test rebalancing on July 1. Any test that includes small cap stocks – i.e. anything outside the Russell 3000 – will have bid/ask issues, which make results better than could be achieved in practice. Further, the stocks selected are often uninvestable because they’re too small, which O’Shaughnessy discusses in the excerpt I’ve highlighted in the post.
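A toy illustration of the start-date point (the monthly prices below are invented; the only claim is about the arithmetic):

```python
# Backtest start-date sensitivity: the same price series measured
# from a pre-crash peak vs. from the trough gives wildly different CAGRs.

def cagr(prices, periods_per_year=12):
    """Compound annual growth rate from first to last price."""
    years = (len(prices) - 1) / periods_per_year
    return (prices[-1] / prices[0]) ** (1 / years) - 1

# 13 months of made-up prices: peak, crash at month 3, then recovery.
prices = [100, 105, 110, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150]

full_period = cagr(prices)       # test starts just before the crash
from_trough = cagr(prices[3:])   # test starts at the market low

print(round(full_period, 2), round(from_trough, 2))  # 0.5 vs ~2.39
```

Nothing about the strategy changed between the two numbers; only the start date did.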
Wes observed this specifically in relation to the Magic Formula and has a post on it at Empirical called More Magic Formula Analysis, where he writes:
“[We] can’t replicate the results under a variety of methods.
We’ve hacked and slashed the data, dealt with survivor bias, point-in-time bias, erroneous data, and all the other standard techniques used in academic empirical asset pricing analysis–still no dice.
…
So what gives?
There are a list of possible conclusions that we can draw from this analysis:
* We screwed something up in our analysis.
* Greenblatt & Co. screwed something up in their analysis.
* The strategy is highly unstable (i.e., small backtesting procedure changes have large effects).
…
[It] is obvious that ‘magic small-caps’ are driving the backtested performance here. As one can see, the live magic formula dramatically underperforms the backtested performance (likely because they had limited small/mid cap exposure).
Although anecdotal in nature, we can see from a very limited out of sample test (2009 and 2010) that the backtest returns to a Magic Formula strategy is VERY unstable and results should be analyzed with a skeptical eye.”
Thanks for bringing up the contrast between asset-based metrics and income-based ones. Lately I came across an Interactive Investor blog post (http://blog.iii.co.uk/reward-without-the-risk/) and was thinking about the transition among growth, value, and balance sheet during a business cycle. In brief, investors favor value stocks in an early bull market, growth stocks in a late bull market, and balance sheets in a bear market. Now it’s clear to me that investors are just swinging between *asset* and *income*. In a bear market, investors are conservative, so they favor assets and chase solid balance sheets. In a bull market, investors are aggressive, so they favor income and chase growth. And value is just a transitory choice in between.
P/B is predictive in the aggregate, but possesses qualities that are unattractive to me, for example, preferring highly leveraged balance sheets. LSV (among others) examined P/B in their Contrarian Investment, Extrapolation, and Risk paper, and found that it was predictive (of course, this finding needs to be tempered by O’Shaughnessy’s observation that the smaller deciles are uninvestable because they are too small). For me, it’s a matter of finding a metric that avoids P/B’s unattractive qualities. I have some other research on another metric that produces returns of the same magnitude as P/B, but avoids the highly leveraged balance sheets (it actually prefers an unleveraged, cash-rich balance sheet). I would still not, however, use it in isolation. LSV note in the same paper that using other metrics (cash flow-to-price (C/P), earnings-to-price (E/P), and the 5-year average growth rate of sales (GS)) alongside P/B improves its returns. I think that’s the obvious solution to avoid the temptation to swing from assets to earnings and so on, because you’d be using them all the time.
Yes, indeed.
Of course a better way is to know where we are and use the “right” formula accordingly. But I guess few have this capability. LOL
Then a couple of ideas popped up while I was typing.
1. Statistically, a bear market is 1/3 the length of a bull market. So maybe we use both, but assign a higher weight to income and a lower weight to assets.
2. Focus on income, and use assets as a cut-off screen. This will offer some downside protection during a bear market. (Actually, I have a SeekingAlpha article about this: “Cloud Computing: Design a Portfolio for the Best, Normal and Worst” http://seekingalpha.com/article/274555-cloud-computing-design-a-portfolio-for-the-best-normal-and-worst)
3. Hedge. A simple one is to hedge with a short position in SPY when SPY is below its 200-day MA. Or rather, stay out of the market when SPY is below its 200-day MA.
The list can be expanded further. I think the key is that we don’t want a formula to handle all problems for us.
Thx for your inspiring reply!
I’m a little bit confused by your comment that the new metric “avoids the highly leveraged balance sheets” of low P/B. A highly leveraged balance sheet means a lot of debt but only a little equity, but then it should have a high P/B…
Could you please elaborate? Thanks.
Equity/book value is assets minus liabilities. Low P/B stocks tend to be securities where the market thinks the assets are overvalued at historic cost, or there is some risk due to the size of the liabilities. Either way, the size of the liabilities relative to the equity is an issue. Low P/B does not imply that there will be some residue for equity in a liquidation. Often it means that a small drop in the value of the assets will destroy the equity value.
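A toy balance sheet (with invented numbers) makes the point:

```python
# Invented balance sheet for a "cheap," heavily leveraged stock.
assets = 100.0
liabilities = 90.0
equity = assets - liabilities        # book equity = 10
market_cap = 5.0                     # P/B = 0.5, so it screens as cheap

# A modest 8% writedown of the assets...
assets_after = assets * 0.92
equity_after = assets_after - liabilities

print(equity, equity_after)  # roughly 80% of book equity is gone
```

An 8% writedown – hardly extreme in a downturn – wipes out about 80% of the book equity, which is exactly the sense in which low P/B stocks carry leverage risk.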
I see. Allow me to rephrase.
Both assets and liabilities exist in the real world, while equity only exists in accounting.
Assets are physical stuff whose spot “value” is determined by the market in liquidation, while a liability is a sum of cash whose value is always stable.
For example, a bank borrows funds to buy mortgage-backed securities. The funds it borrowed are the liability, and the MBS is the asset. In the 2008–2009 crisis, the value of the liability (the borrowed funds) was stable because it is always the same amount of cash, but the value of the MBS collapsed. However, a bank may still account for the MBS at “acquisition cost.”
In this case, a low P/B means the market consensus is that the company’s equity will collapse (to match the low price), because the value of its assets will collapse.
It is also the case that, in normal market conditions, the more the company’s balance sheet is leveraged by debt, the higher its equity.
Still using the same example, in normal market conditions, a bank borrows $100 to buy MBS; suppose the market price of the MBS is $101. So with $100 in debt, it “generates” $1 of equity. If it borrows $1M, it “generates” $10K of equity. In this case the equity is the result of pure leverage.
Great post and discussion. Using gross profitability to measure quality is great. GP has the advantage over EBIT / (net fixed assets + net working capital) in that it is simpler to understand and calculate while still being predictive.
What is the metric you are using in place of P/B to measure value? I look forward to seeing that post and the related discussion.
Interesting post – thanks. I wonder if this might work better if you used some form of net operating assets or invested capital in the denominator. This would make sense because businesses use liabilities to offset capital intensity and increase return per unit of capital – the asset side of the balance sheet does not tell the whole story. I think you would also want to take out non-operating balance sheet items, such as cash (on the asset side) and interest-bearing debt (on the liability side), to get to net operating assets – these are products of capital allocation, not operations. I guess this gets close to an ROIC measure, but, as you mentioned, gross profit is much “cleaner,” so I imagine this measure would be better at predicting operational profitability and hence future returns.
Glad you’re back, mate! Been too long.