
Posts Tagged ‘James Montier’

From Montier’s most recent piece, Hyperinflations, Hysteria, and False Memories (.pdf) (via GMO):

In the past, I’ve admitted to macroeconomics being one of my dark, guilty pleasures. To some “value” investors this seems like heresy, as Marty Whitman¹ once wrote, “Graham and Dodd view macro factors . . . as crucial to the analysis of a corporate security. Value investors, however, believe that macro factors are irrelevant.” I am clearly a Graham and Doddite on this measure (and most others as well). I view understanding the macro backdrop (N.B. not predicting it, as Ben Graham said, “Analysis of the future should be penetrating rather than prophetic.”) as one of the core elements of risk management.

¹. Martin J. Whitman, Value Investing: A Balanced Approach, John Wiley & Sons, 1999.



Last week I looked at James Montier’s 2006 paper The Little Note That Beats The Market and his view that investors would struggle to implement the Magic Formula strategy for behavioral reasons, a view borne out by Greenblatt’s own research. This is not a criticism of the strategy, which is tractable and implementable, but an observation on how pernicious our cognitive biases are.

Greenblatt found that a compilation of all the “professionally managed” – read “systematic, automatic (hydromatic)” – accounts earned 84.1 percent over two years against the S&P 500 (up 62.7 percent). A compilation of “self-managed” accounts (the humans) over the same period showed a cumulative return of 59.4 percent, losing to the market by 3.3 percent, and to the machines by almost 25 percent. So the humans took this unmessupable system and messed it up. As predicted by Montier and Greenblatt.
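For the skeptical, the gaps are easy to verify from Greenblatt’s reported figures (a quick Python scratchpad of my own, not anything from the piece):

```python
# Two-year cumulative returns (after expenses) reported by Greenblatt.
pro, sp500, self_managed = 84.1, 62.7, 59.4

print(round(pro - sp500, 1))          # machines beat the S&P 500 by 21.4 points
print(round(pro - self_managed, 1))   # machines beat the humans by 24.7 points
print(round(sp500 - self_managed, 1)) # humans trailed the market by 3.3 points
```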

Ugh.

Greenblatt, perhaps dismayed at the fact that he dragged the horses all the way to the water to find they still wouldn’t drink, has a new idea: value-weighted indexing (not to be confused with the academic term for market capitalization-weighting, which is, confusingly, also called value weighting).

I know from speaking to some of you that this is not a particularly popular idea, but I like it. Here’s Greenblatt’s rationale, paraphrased:

  • Most investors, pros included, can’t beat the index. Therefore, buying an index fund is better than messing it up yourself or getting an active manager to mess it up for you.
  • If you’re going to buy an index, you might as well buy the best one. An index based on the market capitalization-weighted S&P 500 will be handily beaten by an equal-weighted index, which will be handily beaten by a fundamentally weighted index, which is in turn handily beaten by a “value-weighted index,” which is what Greenblatt calls his “Magic Formula-weighted index.”

I like the logic, and I think the data on the last point are persuasive. In chart form, they look like this:

The value weighted index knocked out a CAGR of 16.1 percent per year over the last 20 years. Not bad.
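To see what that compounding means in dollar terms, a quick back-of-envelope sketch (my arithmetic, assuming the 16.1 percent figure holds):

```python
# Growth of $1 at a 16.1% CAGR over 20 years.
cagr, years = 0.161, 20
growth = (1 + cagr) ** years
print(round(growth, 1))  # roughly 19.8x the starting capital
```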

Greenblatt explains his rationale in some depth in his latest book The Big Secret. The book has taken some heavy criticism on Amazon – average review is 3.2 out of 5 as of now – most of which I think is unwarranted (for example, “Like many others here, I do not exactly understand the reason for this book’s existence.”).

I’m going to take a close look at the value-weighted index this week.


Yesterday I looked at James Montier’s 2006 paper The Little Note That Beats The Market and his view that investors would struggle to implement the Magic Formula strategy for behavioral reasons.

The Magic Formula is a logical value strategy, it works in backtest, and, most importantly, it seems to work in practice, as this chart from Formula Investing attests:

As Montier predicted, Joel Greenblatt has found that investors do in fact struggle to implement the Magic Formula strategy in practice. In a great piece published earlier this year, Adding Your Two Cents May Cost You A Lot Over The Long-Term, Greenblatt examined the first two years of returns to Formula Investing’s US separately managed accounts:

Formula Investing provides two choices for retail clients to invest in U.S. stocks, either through what we call a “self-managed” account or through a “professionally managed” account. A self-managed account allows clients to make a number of their own choices about which top ranked stocks to buy or sell and when to make these trades. Professionally managed accounts follow a systematic process that buys and sells top ranked stocks with trades scheduled at predetermined intervals. During the two year period under study[1], both account types chose from the same list of top ranked stocks based on the formulas described in The Little Book that Beats the Market.

Greenblatt has conducted a great real-time behavioral investing experiment. Self-managed accounts have discretion over buy and sell decisions, while professionally managed accounts are automated. Both choose from the same list of stocks. So what happened?

[The] self-managed accounts, where clients could choose their own stocks from the pre-approved list and then follow (or not) our guidelines for trading the stocks at fixed intervals didn’t do too badly. A compilation of all self-managed accounts for the two year period showed a cumulative return of 59.4% after all expenses. Pretty darn good, right? Unfortunately, the S&P 500 during the same period was actually up 62.7%.

“Hmmm….that’s interesting”, you say (or I’ll say it for you, it works either way), “so how did the ‘professionally managed’ accounts do during the same period?” Well, a compilation of all the “professionally managed” accounts earned 84.1% after all expenses over the same two years, beating the “self managed” by almost 25% (and the S&P by well over 20%). For just a two year period, that’s a huge difference! It’s especially huge since both “self-managed” and “professionally managed” chose investments from the same list of stocks and supposedly followed the same basic game plan.

Let’s put it another way: on average the people who “self-managed” their accounts took a winning system and used their judgment to unintentionally eliminate all the outperformance and then some!

Just as Montier (and Greenblatt) predicted, investors struggle to implement the Magic Formula. Discretion over buy-and-sell decisions in aggregate can turn a model that generates a market beating return into a sub-par return. Extraordinary!

Greenblatt has to be admired for sharing this research with the world. Value investing is as misunderstood in the investment community at large as quantitative value investing is misunderstood in the value investing community. It takes a great deal of courage to point out the flaws (such as they are) in the implementation of a strategy, particularly when they are not known to those outside his firm. Given that Greenblatt has a great deal of money riding on the Magic Formula, he should be commended for conducting and sharing a superb bit of research.

I love his conclusion:

[The] best performing “self-managed” account didn’t actually do anything. What I mean is that after the initial account was opened, the client bought stocks from the list and never touched them again for the entire two year period. That strategy of doing NOTHING outperformed all other “self-managed” accounts. I don’t know if that’s good news, but I like the message it appears to send—simply, when it comes to long-term investing, doing “less” is often “more”. Well, good work if you can get it, anyway.


In his 2006 paper, The Little Note That Beats The Market, James Montier backtested the Magic Formula and found that it supported the claim in The Little Book That Beats The Market that the Magic Formula does in fact beat the market:

The results certainly support the notions put forward in the Little Book. In all the regions, the Little Book strategy substantially outperformed the market, and with lower risk! The range of outperformance went from just over 3.5% in the US to an astounding 10% in Japan.

The results of our backtest suggest that Greenblatt’s strategy isn’t unique to the US. We tested the Little Book strategy on US, European, UK and Japanese markets between 1993 and 2005. The results are impressive. The Little Book strategy beat the market (an equally weighted stock index) by 3.6%, 8.8%, 7.3% and 10.8% in the various regions respectively. And in all cases with lower volatility than the market! The outperformance was even better against the cap weighted indices.

Regardless, Montier felt that investors would struggle to implement the strategy for behavioral reasons:

Greenblatt suggests two reasons why investors will struggle to follow the Little Book strategy. Both ring true with us from our meeting with investors over the years. The first is “investing by using a magic formula may take away some of the fun”. Following a quant model or even a set of rules takes a lot of the excitement out of stock investing. What would you do all day if you didn’t have to meet companies or sit down with the sell side?

As Keynes noted “The game of professional investment is intolerably boring and over-exacting to anyone who is entirely exempt from the gambling instinct; whilst he who has it must pay to this propensity the appropriate toll”.

Secondly, the Little Book strategy, and all value strategies for that matter, requires patience. And patience is in very short supply amongst investors in today’s markets. I’ve even come across fund managers whose performance is monitored on a daily basis – congratulations are to be extended to their management for their complete mastery of measuring noise! Everyone seems to want the holy grail of profits without any pain. Dream on. It doesn’t exist.

Value strategies work over the long run, but not necessarily in the short term. There can be prolonged periods of underperformance. It is these periods of underperformance that ensure that not everyone becomes a value investor (coupled with a hubristic belief in their own abilities to pick stocks).

As Greenblatt notes “Imagine diligently watching those stocks each day as they do worse than the market averages over the course of many months or even years… The magic formula portfolio fared poorly relative to the market average in 5 out of every 12 months tested. For full-year periods… failed to beat the market averages once every four years”.

The chart below shows the proportion of years within Montier’s sample where the Magic Formula failed to beat the market in each of the respective regions.

Europe and the UK show surprisingly few years of historic market underperformance. Montier says investors should “bear in mind the lessons from the US and Japan, where underperformance has been seen on a considerably more frequent basis:”

It is this periodic underperformance that really helps ensure the survival of such strategies. As long as investors continue to be overconfident in their abilities to consistently pick winners, and myopic enough that even a year of underperformance is enough to send them running, then strategies such as the Little Book are likely to continue to do well over the long run. Thankfully for those of us with faith in such models, the traits just described seem to be immutable characteristics of most people. As Warren Buffett said “Investing is simple but not easy”.

Montier has long argued that value investors underperform value models because of behavioral errors and cognitive biases. For example, in his excellent 2006 research report Painting By Numbers: An Ode To Quant, Montier attributes most of the underperformance to overconfidence:

We all think that we know better than simple models. The key to the quant model’s performance is that it has a known error rate while our error rates are unknown.

The most common response to these findings is to argue that surely a fund manager should be able to use quant as an input, with the flexibility to override the model when required. However, as mentioned above, the evidence suggests that quant models tend to act as a ceiling rather than a floor for our behaviour. Additionally there is plenty of evidence to suggest that we tend to overweight our own opinions and experiences against statistical evidence.

Greenblatt has conducted a study on exactly this point. More tomorrow.


I’m a huge fan of James Montier’s work on the rationale for a quantitative investment strategy and global Graham net net investing. Miguel Barbosa of Simoleon Sense has a wonderful interview with Montier, covering his views on behavioral investing and value investment. Particularly interesting is Montier’s concept of “seductive details” and the implications for investors:

Miguel: Let’s talk about the concept of seductive details…can you give us an example of how investors are trapped by irrelevant information?

James Montier: The sheer amount of irrelevant information faced by investors is truly staggering. Today we find ourselves captives of the information age; anything you could possibly need to know seems to appear at the touch of a keypad. However, rarely, if ever, do we stop and ask ourselves exactly what we need to know in order to make a good decision.

Seductive details are the kind of information that seems important, but really isn’t. Let me give you an example. Today investors are surrounded by analysts who are experts in their fields. I once worked with an IT analyst who could take a PC apart in front of you and tell you what every little bit did – fascinating stuff to be sure, but did it help make better investment decisions? Clearly not. Did the analyst know anything at all about valuing a company or a stock? I’m afraid not. Yet he was immensely popular because he provided seductive details.

Montier’s “seductive details” is reminiscent of the discussion in Nassim Nicholas Taleb’s Fooled by Randomness on the relationship between the amount of information available to experts, the accuracy of judgments they make based on this information, and the experts’ confidence in the accuracy of those judgments. Intuition suggests that having more information should increase the accuracy of predictions about uncertain outcomes. In reality, more information decreases the accuracy of predictions while simultaneously increasing the confidence that the prediction is correct. One such example is given in the paper The illusion of knowledge: When more information reduces accuracy and increases confidence (.pdf) by Crystal C. Hall, Lynn Ariss, and Alexander Todorov. In that study, participants were asked to predict basketball games sampled from a National Basketball Association season:

All participants were provided with statistics (win record, halftime score), while half were additionally given the team names. Knowledge of names increased the confidence of basketball fans consistent with their belief that this knowledge improved their predictions. Contrary to this belief, it decreased the participants’ accuracy by reducing their reliance on statistical cues. One of the factors contributing to this underweighting of statistical cues was a bias to bet on more familiar teams against the statistical odds. Finally, in a real betting experiment, fans earned less money if they knew the team names while persisting in their belief that this knowledge improved their predictions.

This is not an isolated example. In Effects of amount of information on judgment accuracy and confidence, by Claire I. Tsai, Joshua Klayman, and Reid Hastie, the authors examine two further studies demonstrating that when decision makers receive more information, their confidence increases more than their accuracy, producing “substantial confidence–accuracy discrepancies.” The CIA has also examined the phenomenon. In Chapter 5 of Psychology of Intelligence Analysis, Do you really need more information?, the author argues against “the often-implicit assumption that lack of information is the principal obstacle to accurate intelligence judgments:”

Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.

Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.

Click here to see the Simoleon Sense interview.


In his 2006 research report Painting By Numbers: An Ode To Quant (via The Hedge Fund Journal) James Montier presents a compelling argument for a quantitative approach to investing. Montier’s thesis is that simple statistical or quantitative models consistently outperform expert judgements. This phenomenon continues even when the experts are provided with the models’ predictions. Montier argues that the models outperform because humans are overconfident, biased, and unable or unwilling to change.

Montier makes his argument via a series of examples drawn from fields other than investment. The first example he gives, which he describes as a “classic in the field” and which succinctly demonstrates the two important elements of his thesis, is the diagnosis of patients as either neurotic or psychotic. The distinction is as follows: a psychotic patient “has lost touch with the external world” whereas a neurotic patient “is in touch with the external world but suffering from internal emotional distress, which may be immobilising.” According to Montier, the standard test to distinguish between neurosis and psychosis is the Minnesota Multiphasic Personality Inventory or MMPI:

In 1968, Lewis Goldberg obtained access to more than 1000 patients’ MMPI test responses and final diagnoses as neurotic or psychotic. He developed a simple statistical formula, based on 10 MMPI scores, to predict the final diagnosis. His model was roughly 70% accurate when applied out of sample. Goldberg then gave MMPI scores to experienced and inexperienced clinical psychologists and asked them to diagnose the patient. As Fig.1 shows, the simple quant rule significantly outperformed even the best of the psychologists.

Even when the results of the rules’ predictions were made available to the psychologists, they still underperformed the model. This is a very important point: much as we all like to think we can add something to the quant model output, the truth is that very often quant models represent a ceiling in performance (from which we detract) rather than a floor (to which we can add).
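The flavor of such a rule is easy to sketch: a weighted sum of scale scores compared against a cutoff. The scores, weights, and cutoff below are invented for illustration only – Goldberg’s actual rule combined 10 MMPI scales:

```python
def diagnose(scores, weights, cutoff):
    """Classify as 'psychotic' if the weighted sum of scale scores clears the cutoff."""
    total = sum(w * s for w, s in zip(weights, scores))
    return "psychotic" if total >= cutoff else "neurotic"

# Illustrative call with made-up scale scores, weights, and cutoff:
print(diagnose([70, 55, 80], [1, -1, 1], 90))  # 70 - 55 + 80 = 95, so "psychotic"
```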

The MMPI example illustrates the two important points of Montier’s thesis:

  1. The simple statistical model outperforms the judgements of the best experts.
  2. The simple statistical model outperforms the judgements of the best experts, even when those experts are given access to the simple statistical model.

Montier goes on to give diverse examples of the application of his theory, ranging from the detection of brain damage, the interview process to admit students to university, the likelihood that a criminal will re-offend, the selection of “good” and “bad” vintages of Bordeaux wine, and the buying decisions of purchasing managers. He then discusses some “meta-analysis” of studies to demonstrate that “the range of evidence I’ve presented here is not somehow a biased selection designed to prove my point:”

Grove et al consider an impressive 136 studies of simple quant models versus human judgements. The range of studies covered areas as diverse as criminal recidivism to occupational choice, diagnosis of heart attacks to academic performance. Across these studies 64 clearly favoured the model, 64 showed approximately the same result between the model and human judgement, and a mere 8 studies found in favour of human judgements. All of these eight shared one trait in common; the humans had more information than the quant models. If the quant models had the same information it is highly likely they would have outperformed.

As Paul Meehl (one of the founding fathers of the importance of quant models versus human judgements) wrote: “There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one… predicting everything from the outcomes of football games to the diagnosis of liver disease and when you can hardly come up with a half a dozen studies showing even a weak tendency in favour of the clinician, it is time to draw a practical conclusion.”

Why not investing?

Montier says that, within the world of investing, the quantitative approach is “far from common,” and, where it does exist, the practitioners tend to be “rocket scientist uber-geeks,” the implication being that they would not employ a simple model. So why isn’t quantitative investing more common? According to Montier, the “most likely answer is overconfidence.”

We all think that we know better than simple models. The key to the quant model’s performance is that it has a known error rate while our error rates are unknown.

The most common response to these findings is to argue that surely a fund manager should be able to use quant as an input, with the flexibility to override the model when required. However, as mentioned above, the evidence suggests that quant models tend to act as a ceiling rather than a floor for our behaviour. Additionally there is plenty of evidence to suggest that we tend to overweight our own opinions and experiences against statistical evidence.

Montier provides the following example in support of his contention that we tend to prefer our own views to statistical evidence:

For instance, Yaniv and Kleinberger have a clever experiment based on general knowledge questions such as: In which year were the Dead Sea scrolls discovered?

Participants are asked to give a point estimate and a 95% confidence interval. Having done this they are then presented with an advisor’s suggested answer, and asked for their final best estimate and rate of estimates. Fig.7 shows the average mean absolute error in years for the original answer and the final answer. The final answer is more accurate than the initial guess.

The most logical way of combining your view with that of the advisor is to give equal weight to each answer. However, participants were not doing this (they would have been even more accurate if they had done so). Instead they were putting a 71% weight on their own answer. In over half the trials the weight on their own view was actually 90-100%! This represents egocentric discounting – the weighing of one’s own opinions as much more important than another’s view.
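To see what that 71 percent self-weight costs, here is a toy illustration (the guesses are invented; only the weights come from the study):

```python
# Combining your own estimate with an advisor's: equal weighting vs. the
# 71%/29% self-weighting Yaniv and Kleinberger observed. Guesses are invented.
own, advisor, truth = 1953, 1947, 1947  # the Dead Sea Scrolls were found in 1947

equal = 0.5 * own + 0.5 * advisor         # the "logical" equal-weight average
egocentric = 0.71 * own + 0.29 * advisor  # the observed egocentric weighting

print(abs(equal - truth))       # equal weighting misses by 3.0 years
print(abs(egocentric - truth))  # egocentric discounting misses by about 4.3 years
```

Here the advisor happens to be right, so discounting him costs accuracy; averaged over many questions and advisors, that is exactly the pattern the experiment found.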

Similarly, Simonsohn et al showed that in a series of experiments direct experience is frequently much more heavily weighted than general experience, even if the information is equally relevant and objective. They note, “If people use their direct experience to assess the likelihood of events, they are likely to overweight the importance of unlikely events that have occurred to them, and to underestimate the importance of those that have not”. In fact, in one of their experiments, Simonsohn et al found that personal experience was weighted twice as heavily as vicarious experience! This is an uncannily close estimate to that obtained by Yaniv and Kleinberger in an entirely different setting.

It is worth noting that Montier identifies LSV Asset Management and Fuller & Thaler Asset Management as being “fairly normal” quantitative funds (as opposed to being “rocket scientist uber-geeks”) with “admirable track records in terms of outperformance.” You might recognize the names: “LSV” stands for Lakonishok, Shleifer, and Vishny, authors of the landmark Contrarian Investment, Extrapolation and Risk paper, and the “Thaler” in Fuller & Thaler is Richard H. Thaler, co-author of Further Evidence on Investor Overreaction and Stock Market Seasonality, both papers I’m wont to cite. I’m not entirely sure what strategies LSV and Fuller & Thaler pursue, wrapped as they are in the cloaks of “behavioural finance,” but judging from those two papers, I’d say it’s a fair bet that they are both pursuing value-based strategies.

It might be a while before we see a purely quantitative value fund, or at least a fund that acknowledges that it is one. As Montier notes:

We find it ‘easy’ to understand the idea of analysts searching for value, and fund managers rooting out hidden opportunities. However, selling a quant model will be much harder. The term ‘black box’ will be bandied around in a highly pejorative way. Consultants may question why they are employing you at all, if ‘all’ you do is turn up and run the model and then walk away again.

It is for reasons like these that quant investing is likely to remain a fringe activity, no matter how successful it may be.

Montier’s now at GMO, and has produced a new research report called Ten Lessons (Not?) Learnt (via Trader’s Narrative).

