Archive for the ‘Value Investment’ Category

Further to my point that if your valuation models use forward estimates rather than twelve-month trailing data, you’re doing it wrong, here are the results of our Quantitative Value backtest on the use of consensus Institutional Brokers’ Estimate System (I/B/E/S) EPS forecasts for the fiscal year (available 1982 through 2010) for individual stock selection:

We analyze the compound annual growth rates of each price ratio over the 1964 to 2011 period for market capitalization–weighted decile portfolios.

The forward earnings estimate is the worst-performing metric by a wide margin. The performance of the forward earnings estimate is uniformly poor, earning a compound annual growth rate of just 8.63 percent on average and underperforming the Standard & Poor’s (S&P) 500 by almost 1 percent per year. Investors are wise to shy away from analyst forward earnings estimates when making investment decisions.
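For readers checking the arithmetic, a compound annual growth rate can be computed from a series of periodic returns as follows (a minimal sketch; the function name is mine, not from the book):

```python
def cagr(returns, periods_per_year=12):
    """Compound annual growth rate implied by a series of periodic simple returns."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    years = len(returns) / periods_per_year
    return growth ** (1.0 / years) - 1.0

# Two years of 1% monthly returns compound to roughly 12.68% per year.
annual = cagr([0.01] * 24)
```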

We focus our analysis on historical valuation metrics in Quantitative Value and leave the forward earnings estimates to the promoters on Wall Street.

Read Full Post »

If your valuation models use forward estimates rather than twelve-month trailing data, you’re doing it wrong. Why? As we discussed in Quantitative Value, analysts are consistently too optimistic about the future, and so systematically overestimate forward earnings figures.

They are consistently, systematically, predictably ignorant of mean-reverting base rates. As we wrote in the book:

Exceptions to the long pattern of excessively optimistic forecasts are rare. Only in 1995 and 2004 to 2006, when strong economic growth generated earnings that caught up with earlier predictions, do forecasts actually hit the mark. When economic growth accelerates, the size of the forecast error declines; when economic growth slows, it increases.

This chart from JP Morgan Asset Management as of a week ago shows the chronic overestimation of operating earnings:

The chart comes via Zero Hedge, where they ask, “Is the market cheap?” My answer: not on the basis of the Shiller PE, which stands at 23.7 versus the long run arithmetic mean of 16.47, or around 40 percent overvalued. Neither is it cheap on the basis of Tobin’s q. Smithers & Co. has it at 44 percent overvalued on the basis of q, and they note:

As at 12th March, 2013 with the S&P 500 at 1552 the overvaluation by the relevant measures was 57% for non-financials and 65% for quoted shares.

Although the overvaluation of the stock market is well short of the extremes reached at the year ends of 1929 and 1999, it has reached the other previous peaks of 1906, 1936 and 1968.

How about the single year P/E ratio as reported? The S&P 500 TTM P/E stands at 18 versus the long run mean of 15.49. But it’s cool because the “E” is growing, right? Err, no. The “E” peaked in February last year (see Standard & Poor’s current S&P 500 Earnings, go to “Download Index Data,” and select “Index Earnings”). The multiple will now have to expand just to keep the market where it is. You have to do this sort of acrobatics to get it going up:
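The overvaluation figures above are just the premium of the current multiple over its long-run mean; sketched out:

```python
def overvaluation(current_multiple, long_run_mean):
    """Premium of the current valuation multiple over its long-run mean, as a fraction."""
    return current_multiple / long_run_mean - 1.0

shiller = overvaluation(23.7, 16.47)  # roughly 0.44, i.e. 40-odd percent overvalued
ttm = overvaluation(18.0, 15.49)      # roughly 0.16
```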

Margins are now going to bounce free of the wreckage like those few lucky souls who remember to assume the brace position before the plane hits the ground, even though as-reported earnings rolled over a year ago (I hope Denzel Washington is flying this plane).

So how is it cheap?

It’s at 14.5 on the basis of twelve-month forward operating earnings estimates versus a long run mean of 15.49. You gotta do what you gotta do to get the Muppets to buy.

Good luck with that.

Read Full Post »

In the How to beat The Little Book That Beats The Market (Parts 1, 2, and 3) series of posts I showed how in Quantitative Value we tested Joel Greenblatt’s Magic Formula (outlined in The Little Book That (Still) Beats the Market) and found that it had consistently outperformed the market, and with lower relative risk than the market.

We sought to improve on it by creating a generic, academic alternative that we called “Quality and Price.” Quality and Price is the academic alternative to the Magic Formula because it draws its inspiration from academic research papers. We found the idea for the quality metric in an academic paper by Robert Novy-Marx called The Other Side of Value: Good Growth and the Gross Profitability Premium. Quality and Price substitutes for the Magic Formula’s ROIC a quality measure called gross profitability to total assets (GPA), defined as follows:

GPA = (Revenue − Cost of Goods Sold) / Total Assets

In Quality and Price, the higher a stock’s GPA, the higher the quality of the stock.

The price ratio, drawn from the early research into value investment by Eugene Fama and Ken French, is book value-to-market capitalization (BM), defined as follows:

BM = Book Value / Market Price

The Quality and Price strategy, like the Magic Formula, seeks to differentiate between stocks by equally weighting the quality and price metrics. Can we improve performance by seeking higher quality stocks in the value decile, rather than equal weighting the two factors?
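The equal weighting of the two metrics can be sketched as a simple rank-sum, along the lines of the Magic Formula’s combined ranking (field and function names here are illustrative, not from the book):

```python
def quality_and_price_rank(stocks):
    """Order tickers by the sum of their GPA and BM ranks (lowest sum = best).

    `stocks` maps ticker -> {"gpa": ..., "bm": ...}; higher GPA (quality)
    and higher BM (cheapness) both earn better, i.e. lower, ranks.
    """
    gpa_rank = {t: r for r, t in enumerate(
        sorted(stocks, key=lambda t: stocks[t]["gpa"], reverse=True), 1)}
    bm_rank = {t: r for r, t in enumerate(
        sorted(stocks, key=lambda t: stocks[t]["bm"], reverse=True), 1)}
    return sorted(stocks, key=lambda t: gpa_rank[t] + bm_rank[t])

# A cheap, high-quality name beats both a cheap junk stock and a pricey compounder.
stocks = {
    "AAA": {"gpa": 0.70, "bm": 0.80},  # high quality, cheap
    "BBB": {"gpa": 0.10, "bm": 0.90},  # low quality, cheapest
    "CCC": {"gpa": 0.60, "bm": 0.20},  # high quality, expensive
}
ranking = quality_and_price_rank(stocks)
```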

In his paper The Quality Dimension of Value Investing, Novy-Marx considered this question. Novy-Marx’s rationale:

Value investors can also improve their performance by controlling for quality when investing in value stocks. Traditional value strategies formed on price signals alone tend to be short quality, because cheap firms are on average of lower quality than similar firms trading at higher prices. Because high quality firms on average outperform low quality firms, this quality deficit drags down the returns to traditional value strategies. The performance of value strategies can thus be significantly improved by explicitly controlling for quality when selecting stocks on the basis of price. Value strategies that buy (sell) cheap (expensive) firms from groups matched on the quality dimension significantly outperform value strategies formed solely on the basis of valuations.

His backtest method:

The value strategy that controls for quality is formed by first sorting the 500 largest non-financial firms each June into 10 groups of 50 on the basis of the quality signal. Within each of these deciles, which contain stocks of similar quality, the 15 with the highest value signals are assigned to the high portfolio, while the 15 with the lowest value signals are assigned to the low portfolio. This procedure ensures that the value and growth portfolios, which each hold 150 stocks, contain stocks of similar average quality.
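That procedure can be sketched directly as a nested sort (a toy version; `quality` and `value` are assumed to be precomputed signals, and the universe here is illustrative rather than the 500 names in the paper):

```python
def quality_controlled_value(universe, n_deciles=10, n_per_decile=15):
    """Within each quality decile, take the names with the strongest value signal.

    `universe` is a list of (ticker, quality, value) tuples; higher is better
    for both signals. Returns the long portfolio described in the quote.
    """
    ranked = sorted(universe, key=lambda s: s[1], reverse=True)  # best quality first
    decile_size = len(ranked) // n_deciles
    portfolio = []
    for d in range(n_deciles):
        decile = ranked[d * decile_size:(d + 1) * decile_size]
        cheapest = sorted(decile, key=lambda s: s[2], reverse=True)[:n_per_decile]
        portfolio.extend(t for t, _, _ in cheapest)
    return portfolio
```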

Novy-Marx finds that the strategy “dramatically outperform[s]” portfolios formed on the basis of quality or value alone, but underperforms the Greenblatt-style joint strategy. From the paper:

The long/short strategy generated excess returns of 45 basis points per month, 50% higher than the 31 basis points per month generated by the unconditional quality strategy, despite running at lower volatility (10.4% as opposed to 12.2%). The long side outperformed the market by 32 basis points per month, 9 basis points per month more than the long-only strategy formed without regard for price. It managed this active return with a market tracking error volatility of only 5.9%, realizing an information ratio of 0.63, much higher than the information ratio of 0.42 realized on the tracking error of the unconditional long-only value strategy.

For comparison, Novy-Marx finds the Greenblatt-style joint 50/50 weighting generates higher returns:

The long/short strategy based on the joint quality and value signal generated excess returns of 61 basis points per month, twice that generated by the quality or value signals alone and a third higher than the market, despite running at a volatility of only 9.7%. The strategy realized a Sharpe ratio of 0.75 over the sample, almost two and a half times that on the market over the same period, despite trading exclusively in the largest, most liquid stocks.

The long side outperformed the market by 35 basis points per month, with a tracking error volatility of only 5.7 percent, for a realized information ratio of 0.75. This information ratio is 15% higher than the 0.65 achieved running quality and value side by side. Just as importantly, it allows long-only investors to achieve a greater exposure to the high information ratio opportunities provided by quality and value. While the strategy’s 5.7% tracking error still provides a suboptimally small exposure to value and quality, this exposure is significantly larger than the long-only investor can obtain running quality alongside value.

And a pretty chart from the paper:

Novy-Marx 2.1

We tested the decile approach and the joint approach in Quantitative Value, substituting better performing value metrics and found different results. I’ll cover those next.

Read Full Post »

Wes sent through this outstanding more-than-30-year-old speech, Trying Too Hard (.pdf), which foreshadows many of the ideas we discuss in Quantitative Value, so much so that I feel that I should point out that neither Wes nor I had read it before we wrote the book. The speaker, Dean Williams, named the speech for this story:

I had just completed what I thought was some fancy footwork involving buying and selling a long list of stocks. The oldest member of Morgan’s trust committee looked down the list and said, “Do you think you might be trying too hard?” At the time I thought, “Who ever heard of trying too hard?” Well, over the years I’ve changed my mind about that. Tonight I’m going to ask you to entertain some ideas whose theme is this: We probably are trying too hard at what we do. More than that, no matter how hard we try, we may not be as important to the results as we’d like to think we are.

The speech covers the following themes, among others:

  • Prediction

…[M]ost of us spend a lot of our time doing something that human beings just don’t do very well. Predicting things.

  • Forecasting, information, and accuracy

Confidence in a forecast rises with the amount of information that goes into it. But the accuracy of the forecast stays the same. 

  • Expertise and forecasting

And when it comes to forecasting – as opposed to doing something – a lot of expertise is no better than a little expertise. And may be even worse.

  • The importance of mean reversion

If there is a reliable and helpful principle at work in our markets, my choice would be the one the statisticians call “regression to the mean”. The tendency toward average profitability is a fundamental, if not the fundamental, principle of competitive markets.

It can be a powerful investment tool. It can, almost by itself, select cheap portfolios and avoid expensive ones.

  • Simplicity

Simple approaches. Albert Einstein said that “… most of the fundamental ideas of science are essentially simple and may, as a rule, be expressed in a language comprehensible to everyone”.

  • Consistency

Look at the best performing funds for the past ten years or more. Templeton, Twentieth Century Growth, Oppenheimer Special, and others. What did they have in common?

It was that whatever their investment plans were, they had the discipline and good sense to carry them out consistently.

  • And finally, value

Spend your time measuring value instead of generating information. Don’t forecast. Buy what’s cheap today.

Read Trying Too Hard (.pdf). You won’t regret it.

h/t The Turnkey Analyst

Read Full Post »

In How to Beat The Little Book That Beats The Market: Redux I showed how in Quantitative Value we tested Joel Greenblatt’s Magic Formula (outlined in The Little Book That (Still) Beats the Market). We found that Greenblatt’s Magic Formula has consistently outperformed the market, and with lower relative risk than the market, but wondered if we could improve on it.

We created a generic, academic alternative to the Magic Formula that we call “Quality and Price.” Quality and Price is the academic alternative to the Magic Formula because it draws its inspiration from academic research papers. We found the idea for the quality metric in an academic paper by Robert Novy-Marx called The Other Side of Value: Good Growth and the Gross Profitability Premium. The price ratio is drawn from the early research into value investment by Eugene Fama and Ken French. The Quality and Price strategy, like the Magic Formula, seeks to differentiate between stocks on the basis of … wait for it … quality and price. The difference, however, is that Quality and Price uses academically based measures for price and quality that seek to improve on the Magic Formula’s factors, which might provide better performance.

The Magic Formula uses Greenblatt’s version of return on invested capital (ROIC) as a proxy for a stock’s quality. The higher the ROIC, the higher the stock’s quality and the higher the ranking received by the stock. Quality and Price substitutes for ROIC a quality measure we’ll call gross profitability to total assets (GPA). GPA is defined as follows:

GPA = (Revenue − Cost of Goods Sold) / Total Assets

In Quality and Price, the higher a stock’s GPA, the higher the quality of the stock. The rationale for using gross profitability, rather than any other measure of profitability like earnings or EBIT, is simple. Gross profitability is the “cleanest” measure of true economic profitability. According to Novy-Marx:

The farther down the income statement one goes, the more polluted profitability measures become, and the less related they are to true economic profitability. For example, a firm that has both lower production costs and higher sales than its competitors is unambiguously more profitable. Even so, it can easily have lower earnings than its competitors. If the firm is quickly increasing its sales through aggressive advertising, or commissions to its sales force, these actions can, even if optimal, reduce its bottom line income below that of its less profitable competitors. Similarly, if the firm spends on research and development to further increase its production advantage, or invests in organizational capital that will help it maintain its competitive advantage, these actions result in lower current earnings. Moreover, capital expenditures that directly increase the scale of the firm’s operations further reduce its free cash flows relative to its competitors. These facts suggest constructing the empirical proxy for productivity using gross profits.

The Magic Formula uses EBIT/TEV as its price measure to rank stocks. For Quality and Price, we substitute the classic measure in finance literature – book value-to-market capitalization (BM):

BM = Book Value / Market Price

We use BM rather than the more familiar price-to-book value (P/B) notation because the academic convention is to describe it as BM, and it makes it more directly comparable with the Magic Formula’s EBIT/TEV. The rationale for BM is straightforward. Eugene Fama and Ken French consider BM a superior metric because it varies less from period to period than other measures based on income:

We always emphasize that different price ratios are just different ways to scale a stock’s price with a fundamental, to extract the information in the cross-section of stock prices about expected returns. One fundamental (book value, earnings, or cashflow) is pretty much as good as another for this job, and the average return spreads produced by different ratios are similar to and, in statistical terms, indistinguishable from one another. We like [book-to-market capitalization] because the book value in the numerator is more stable over time than earnings or cashflow, which is important for keeping turnover down in a value portfolio.

Next I’ll show the results of our comparison of the Quality and Price strategy with the Magic Formula. If you can’t wait, you can always pick up a copy of Quantitative Value.

Read Full Post »

In a new paper, Using Maximum Drawdowns to Capture Tail Risk, my Quantitative Value co-author Wes Gray and Jack Vogel propose an easily measurable and intuitive tail-risk measure: maximum drawdown, the maximum peak-to-trough loss across a time series of compounded returns. From the abstract:
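Maximum drawdown, as defined above, can be computed in a single pass over a return series (a minimal sketch, not the authors’ code):

```python
def max_drawdown(returns):
    """Maximum peak-to-trough loss of a compounded return series.

    `returns` is a sequence of periodic simple returns; the result is the
    largest fractional fall from any running peak (0.50 means a 50% loss).
    """
    level = peak = 1.0
    worst = 0.0
    for r in returns:
        level *= 1.0 + r
        peak = max(peak, level)
        worst = max(worst, 1.0 - level / peak)
    return worst

# A 10% gain then two 20% losses: peak 1.10, trough 0.704, a 36% drawdown.
dd = max_drawdown([0.10, -0.20, -0.20])
```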

We look at maximum drawdowns to assess tail risks associated with market neutral strategies identified in the academic literature. Our evidence suggests that academic anomalies are not anomalous: all strategies endure large drawdowns at some point in the time series. Many of these losses would trigger margin calls and investor withdrawals, forcing an investor to liquidate.

The authors apply their maximum drawdown metric to existing studies, such as the momentum anomaly originally outlined in Jegadeesh and Titman (1993), to demonstrate why maximum drawdown adds to the analyses:

Jegadeesh and Titman claim large alphas associated with long/short momentum strategies over the 1965 to 1989 time period. What these authors fail to mention is that the long/short strategy endures a 33.81% holding period loss from July 1970 until March 1971. When we look out of sample from 1989 to 2012, there is still significant alpha associated with the long/short momentum strategy, but the strategy endures an 86.05% loss from March 2009 to September 2009. An updated momentum study reporting alpha estimates would claim victory, an investor engaged in the long/short momentum strategy would claim bankruptcy. Tail risk matters to investors and it should matter in empirical research.

Gray and Vogel examine maximum drawdowns for eleven long/short academic anomalies:

When looking at the worst drawdown in the history of the long/short return series, we find that 6 of the 11 strategies have maximum drawdowns of more than 50%. The Oscore, Momentum, and Return on Assets strategies endure maximum drawdowns of 83.50%, 86.05% and 84.71%, respectively! These losses would trigger immediate margin calls and liquidations from brokers. We do find that Net Stock Issuance and Composite Issuance limit maximum drawdowns, with maximum drawdowns of 29.23% and 26.27%, respectively. If a fund employed minimal leverage, a fund implementing these strategies would likely survive a broker liquidation scenario.

In addition to broker margin calls and liquidations, investment managers face liquidation threats from their investors. Liquidations occur for two primary reasons: 1) there are information asymmetries between investors and investment managers, and 2) investors rely on past performance to ascertain expected future performance (Shleifer and Vishny (1997)). To understand the potential threat of liquidation from outside investors, we examine the performance of the S&P 500 during the maximum drawdown period and the twelve month drawdown period for each of our respective academic anomalies. In 9/11 cases, the S&P 500 has exceptional performance during the largest loss scenarios for the value-weighted long/short strategies. In the case of the Net Stock Issuance and the Composite Issuance anomaly—the long/short strategies with the most reasonable drawdowns—the S&P 500 returns 56.40% and 49.46% during the respective drawdown periods. One can conjecture that investors would find it difficult to maintain discipline to a long/short strategy when they are underperforming a broad equity index by over 75%. Stories about the benefits of “uncorrelated alpha” can only go so far.

Gray and Vogel find that maximum drawdown events are often followed by exceptional performance for the strategy examined:

One prediction from this story is that returns to long/short anomalies are high following terrible performance. We test this prediction in Table 5. We examine the returns on the 11 academic anomalies following their maximum drawdown event. We compute three-, six-, and twelve-month compound returns to the long/short strategies immediately following the worst drawdown. The evidence suggests that performance following a maximum drawdown event is exceptional. All the anomalies experience strong positive returns over three-, six-, and twelve-month periods following the drawdown event.
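The follow-on numbers described there are just compound returns measured from the end of the drawdown; a sketch of that computation (assuming monthly returns and a known trough index):

```python
def forward_compound_returns(returns, trough_index, horizons=(3, 6, 12)):
    """Compound return over each horizon (in periods) starting after the trough."""
    out = {}
    for h in horizons:
        growth = 1.0
        for r in returns[trough_index + 1:trough_index + 1 + h]:
            growth *= 1.0 + r
        out[h] = growth - 1.0
    return out
```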

Read Using Maximum Drawdowns to Capture Tail Risk.

Read Full Post »

Last May in How to beat The Little Book That Beats The Market: An analysis of the Magic Formula I took a look at Joel Greenblatt’s Magic Formula, which he introduced in the 2006 book The Little Book That Beats The Market (now updated to The Little Book That (Still) Beats the Market).

Wes and I put the Magic Formula under the microscope in our book Quantitative Value. We are huge fans of Greenblatt and the Magic Formula, writing in the book that Greenblatt is Benjamin Graham’s “heir in the application of systematic methods to value investment”.

The Magic Formula follows the same broad principles as the simple Graham model that I discussed a few weeks back in Examining Benjamin Graham’s Record: Skill Or Luck?. The Magic Formula diverges from Graham’s strategy by replacing Graham’s absolute price and quality measures (i.e., price-to-earnings ratio below 10, and debt-to-equity ratio below 50 percent) with a ranking system that seeks the stocks with the best combination of price and quality, more akin to Buffett’s value investing philosophy.

The Magic Formula was born of an experiment Greenblatt conducted in 2002. He wanted to know if Warren Buffett’s investment strategy could be quantified. Greenblatt read Buffett’s public pronouncements, most of which are contained in his investment vehicle Berkshire Hathaway, Inc.’s Chairman’s Letters. Buffett has written to the shareholders of Berkshire Hathaway every year since 1978, after he first took control of the company, laying out his investment strategy in some detail. Those letters describe the rationale for Buffett’s dictum, “It’s far better to buy a wonderful company at a fair price than a fair company at a wonderful price.” Greenblatt understood that Buffett’s “wonderful-company-at-a-fair-price” strategy required Buffett’s delicate qualitative judgment. Still, he wondered what would happen if he mechanically bought shares in good businesses available at bargain prices. Greenblatt discovered the answer after he tested the strategy: mechanical Buffett made a lot of money.

Wes and I tested the strategy and outlined the results in Quantitative Value. We found that Greenblatt’s Magic Formula has consistently outperformed the market, and with lower relative risk than the market. Naturally, having found something not broke, we set out to fix it, and wondered if we could improve on the Magic Formula’s outstanding performance. Are there other simple, logical strategies that can do better? Tune in soon for Part 2.

Read Full Post »

Abnormal Returns’ Tadas Viskanta has posted a great interview with my Quantitative Value co-author, the Turnkey Analyst Wes Gray:

AR: You write in the book that there are two arguments for value investing: “logical and empirical.” It seems like the value investing community heavily emphasizes the former as opposed to the latter. Why do you think that is?

WG: Human beings tend to favor good stories over evidence, but this can lead to problems. As Mark Twain said, “All you need in this life is ignorance and confidence, and then success is sure.”

This tendency to embrace stories might help explain why being “logical” is more heavily relied upon by investors – good logic makes a good story. Relying on the evidence, or being “empirical,” is underappreciated because it is sometimes counterintuitive. I’m actually a big fan of a logical story backed by empirical data. This is the essence of our book Quantitative Value. We present a compelling story on the value investment philosophy, but at each step along our journey we pepper our analysis with empirical evidence and academic rigor.

AR: You note in the book the importance of Ben Graham and how a continued application of his “simple value strategy” would still generate profits today. Have you seen the recent video about him? He seems to have been as interesting a guy as he was investor/teacher.

WG: As Toby and I conducted background research for the book, we became more and more convinced that Ben Graham was the original systematic value investor. In Quantitative Value we backtest a strategy Graham suggested in the 1976 Medical Economics Journal titled “The Simplest Way to Select Bargain Stocks.” We show that Graham’s strategy performed just as well over the past 40 years as it did in the 50 years prior to 1976. This is a remarkable “out-of-sample” test and highlights the robustness of a systematic value investment approach.

With respect to your question on the video: the recent video circulating the web reinforces our belief that Graham was an empiricist by nature and relied heavily on the scientific method to make his decisions. I also find it interesting that his discussions are so focused on the fallibility of human decision-making ability. Many of the ideas and concepts Graham mentioned regarding human behavior have been backed by behavioral finance studies written in the past 20 years. He was well ahead of his time.

AR: The value community loves to continue to claim Warren Buffett as a disciple. However today he would be best described as a “quality and price” investor more than anything. What is the relevance of how Warren Buffett’s approach to investing has changed over time?

WG: The irony here is that, on average, Warren Buffett’s “new” approach to value investing is inferior to the approach originally described by Ben Graham. Buffett describes an approach that is broader in perspective and allows an investor to move along the cheapness axis to capture high quality firms. Graham, who studied the actual data, was much more focused on absolute cheapness. This concept is highlighted in many of his recommended investment approaches, where the foundation of the strategy prescribed is to simply purchase stocks under a specific price point (e.g., P/E <10).

After studying data from the post-Graham era, we have come to the same conclusion as Graham: cheapness is everything; quality is a nice-to-have. For example, the risk-adjusted returns on the higher-priced, but very high quality firms (i.e., Buffett firms) are much worse on a risk-adjusted basis than the returns on a basket of the cheapest firms that are of extreme low quality (i.e., Graham cigar butts). In the end, if you aren’t exclusively digging in the bargain bin, you’re missing out on potential outperformance.

Read the rest of the interview here. As Tadas says, the answers are illuminating.

For more on Quantitative Value, read my overview here.

Read Full Post »

Last week I wrote about the performance of one of Benjamin Graham’s simple quantitative strategies over the 37 years since he described it (Examining Benjamin Graham’s Record: Skill Or Luck?). In the original article Graham proposed two broad approaches, the second of which we examine in Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors. The first approach Graham detailed in the original 1934 edition of Security Analysis (my favorite edition)—“net current asset value”:

My first, more limited, technique confines itself to the purchase of common stocks at less than their working-capital value, or net-current asset value, giving no weight to the plant and other fixed assets, and deducting all liabilities in full from the current assets. We used this approach extensively in managing investment funds, and over a 30-odd year period we must have earned an average of some 20 per cent per year from this source. For a while, however, after the mid-1950’s, this brand of buying opportunity became very scarce because of the pervasive bull market. But it has returned in quantity since the 1973–74 decline. In January 1976 we counted over 300 such issues in the Standard & Poor’s Stock Guide—about 10 per cent of the total. I consider it a foolproof method of systematic investment—once again, not on the basis of individual results but in terms of the expectable group outcome.

In 2010 I examined the performance of Graham’s net current asset value strategy with Sunil Mohanty and Jeffrey Oxman of the University of St. Thomas. The resulting paper is embedded below:

While Graham found this strategy was “almost unfailingly dependable and satisfactory,” it was “severely limited in its application” because the stocks were too small and infrequently available. This is still the case today. There are several other problems with both of Graham’s strategies. In Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors Wes and I discuss in detail industry and academic research into a variety of improved fundamental value investing methods and simple quantitative value investment strategies. We independently backtest each method and strategy, and combine the best into a sample quantitative value investment model.

The book can be ordered from Wiley Finance, Amazon, or Barnes and Noble.

[I am an Amazon Affiliate and receive a small commission for the sale of any book purchased through this site.]

Read Full Post »

Two recent articles, Was Benjamin Graham Skillful or Lucky? (WSJ), and Ben Graham’s 60-Year-Old Strategy Still Winning Big (Forbes), have thrown the spotlight back on Benjamin Graham’s investment strategy and his record. In the context of Michael Mauboussin’s new book The Success Equation, Jason Zweig asks in his WSJ Total Return column whether Graham was lucky or skillful, noting that Graham admitted he had his fair share of luck:

We tend to think of the greatest investors – say, Peter Lynch, George Soros, John Templeton, Warren Buffett, Benjamin Graham – as being mostly or entirely skillful.

Graham, of course, was the founder of security analysis as a profession, Buffett’s professor and first boss, and the author of the classic book The Intelligent Investor. He is universally regarded as one of the best investors of the 20th century.

But Graham, who outperformed the stock market by an annual average of at least 2.5 percentage points for more than two decades, coyly admitted that much of his remarkable track record may have been due to luck.

John Reese, in his Forbes’ Intelligent Investing column, notes that Graham’s Defensive Investor strategy has continued to outpace the market over the last decade:

Known as the “Father of Value Investing”—and the mentor of Warren Buffett—Graham’s investment firm posted annualized returns of about 20% from 1936 to 1956, far outpacing the 12.2% average return for the broader market over that time.

But the success of Graham’s approach goes far beyond even that lengthy period. For nearly a decade, I have been tracking a portfolio of stocks picked using my Graham-inspired Guru Strategy, which is based on the “Defensive Investor” criteria that Graham laid out in his 1949 classic, The Intelligent Investor. And, since its inception, the portfolio has returned 224.3% (13.3% annualized) vs. 43.0% (3.9% annualized) for the S&P 500.

Even with all of the fiscal cliff and European debt drama in 2012, the Graham-based portfolio has had a particularly good year. While the S&P 500 has notched a solid 13.7% gain (all performance figures through Dec. 17), the Graham portfolio is up more than twice that, gaining 28.5%.

Reese’s experiment might suggest that Graham is more skillful than lucky.

In our recently released book, Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors, Wes and I examine one of Graham’s simple strategies in the period after he described it to the present day. Graham gave an interview to the Financial Analysts Journal in 1976, some 40 years after the publication of Security Analysis. He was asked whether he still selected stocks by carefully studying individual issues, and responded:

I am no longer an advocate of elaborate techniques of security analysis in order to find superior value opportunities. This was a rewarding activity, say, 40 years ago, when our textbook “Graham and Dodd” was first published; but the situation has changed a great deal since then. In the old days any well-trained security analyst could do a good professional job of selecting undervalued issues through detailed studies; but in the light of the enormous amount of research now being carried on, I doubt whether in most cases such extensive efforts will generate sufficiently superior selections to justify their cost. To that very limited extent I’m on the side of the “efficient market” school of thought now generally accepted by the professors.

Instead, Graham proposed a highly simplified approach that relied for its results on the performance of the portfolio as a whole rather than on the selection of individual issues. Graham believed that such an approach “[combined] the three virtues of sound logic, simplicity of application, and an extraordinarily good performance record.”

Graham said of his simplified value investment strategy:

What’s needed is, first, a definite rule for purchasing which indicates a priori that you’re acquiring stocks for less than they’re worth. Second, you have to operate with a large enough number of stocks to make the approach effective. And finally you need a very definite guideline for selling.

What did Graham believe was the simplest way to select value stocks? He recommended that an investor create a portfolio of a minimum of 30 stocks meeting specific price-to-earnings criteria (below 10) and specific debt-to-equity criteria (below 50 percent) to give the “best odds statistically,” and then hold those stocks until they had returned 50 percent, or, if a stock hadn’t met that return objective by the “end of the second calendar year from the time of purchase, sell it regardless of price.”

Graham said that his research suggested that this formula returned approximately 15 percent per year over the preceding 50 years. He cautioned, however, that an investor should not expect 15 percent every year. The minimum period of time to determine the likely performance of the strategy was five years.
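Graham’s entry and exit rules above are simple enough to express as a pair of functions. This is a minimal sketch of the logic, not code from the book; the function names and the assumption that ratios arrive as plain decimals (a debt-to-equity of 50 percent as 0.5) are mine:

```python
def passes_graham_screen(pe_ratio: float, debt_to_equity: float) -> bool:
    """Graham's entry criteria: a price-to-earnings ratio below 10
    and a debt-to-equity ratio below 50 percent."""
    return 0 < pe_ratio < 10 and debt_to_equity < 0.5


def should_sell(total_return: float, years_held: float) -> bool:
    """Graham's exit rules: sell once a holding has returned 50 percent,
    or by the end of the second calendar year from purchase,
    whichever comes first."""
    return total_return >= 0.5 or years_held >= 2
```

In practice the screen would be run across the whole universe of stocks and the survivors held as an equally weighted portfolio of at least 30 names, per Graham’s “best odds statistically” requirement.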

Graham’s simple strategy sounds almost too good to be true. Sure, this approach worked in the 50 years prior to 1976, but how has it performed in the age of the personal computer and the Internet, where computing power is a commodity, and access to comprehensive financial information is as close as the browser? We decided to find out. Like Graham, Wes and I used a price-to-earnings ratio cutoff of 10, and we included only stocks with a debt-to-equity ratio of less than 50 percent. We also applied his trading rules, selling a stock if it had returned 50 percent or had been held in the portfolio for two years.

Figure 1.2 below taken from our book shows the cumulative performance of Graham’s simple value strategy plotted against the performance of the S&P 500 for the period 1976 to 2011:

Graham Strategy

Amazingly, Graham’s simple value strategy has continued to outperform.

Table 1.2 presents the results from our study of the simple Graham value strategy:

Graham Chart

Graham’s strategy turns $100 invested on January 1, 1976, into $36,354 by December 31, 2011, which represents an average yearly compound rate of return of 17.80 percent—outperforming even Graham’s estimate of approximately 15 percent per year. This compares favorably with the performance of the S&P 500 over the same period, which would have turned $100 invested on January 1, 1976, into $4,351 by December 31, 2011, an average yearly compound rate of return of 11.05 percent. The performance of the Graham strategy is attended by very high volatility, 23.92 percent versus 15.40 percent for the total return on the S&P 500.
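The compound rates quoted above can be checked directly from the dollar figures over the 36-year holding period (January 1, 1976 through December 31, 2011):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate implied by growing
    start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1


graham = cagr(100, 36_354, 36)
sp500 = cagr(100, 4_351, 36)
print(f"{graham:.2%}, {sp500:.2%}")  # prints roughly 17.80%, 11.05%
```

Both figures match the compound returns reported in the study.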

The evidence suggests that Graham’s simplified approach to value investment continues to outperform the market. I think it’s a reasonable argument for skill on the part of Graham.

It’s useful to consider why Graham’s simple strategy continues to outperform. At a superficial level, it’s clear that some proxy for price—like a P/E ratio below 10—combined with some proxy for quality—like a debt-to-equity ratio below 50 percent—is predictive of future returns. But is something else at work here that might provide us with a deeper understanding of the reasons for the strategy’s success? Is there some other reason for its outperformance beyond simple awareness of the strategy? We think so.

Graham’s simple value strategy has concrete rules that have been applied consistently in our study. Even through the years when the strategy underperformed the market, our study assumed that we continued to apply it, regardless of how discouraged or scared we might have felt had we actually used it during those stretches. Is it possible that the very consistency of the strategy is an important reason for its success? We believe so. A value investment strategy might provide an edge, but some other element is required to fully exploit that advantage.

Warren Buffett and Charlie Munger believe that the missing ingredient is temperament. Says Buffett, “Success in investing doesn’t correlate with IQ once you’re above the level of 125. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing.”

Was Graham skillful or lucky? Yes. Does the fact that he was lucky detract from his extraordinary skill? No, because he purposefully concentrated on the undervalued tranche of stocks that provide asymmetric outcomes: good luck in the fortunes of his holdings helped his portfolio disproportionately on the upside, and bad luck didn’t hurt his portfolio much on the downside. That, in my opinion, is strong evidence of skill.

Buy my book The Acquirer’s Multiple: How the Billionaire Contrarians of Deep Value Beat the Market on Kindle, paperback, and Audible.

Here’s your book for the fall if you’re on global Wall Street. Tobias Carlisle has hit a home run deep over left field. It’s an incredibly smart, dense, 213 pages on how to not lose money in the market. It’s your Autumn smart read. –Tom Keene, Bloomberg’s Editor-At-Large, Bloomberg Surveillance, September 9, 2014.

Click here if you’d like to read more on The Acquirer’s Multiple, or connect with me on Twitter, LinkedIn or Facebook. Check out the best deep value stocks in the largest 1000 names for free on the deep value stock screener at The Acquirer’s Multiple®.

