
Archive for the ‘Behavioral economics’ Category

Recently I’ve been discussing Michael Mauboussin’s December 2007 Mauboussin on Strategy, “Death, Taxes, and Reversion to the Mean; ROIC Patterns: Luck, Persistence, and What to Do About It” (.pdf), which presents his research on the tendency of return on invested capital (ROIC) to revert to the mean (see Part 1 and Part 2).

Mauboussin’s report has significant implications for modelling in general, and it offers several insights that are particularly useful to Graham net net investors:

  • Models are often too optimistic and don’t take into account the “large and robust reference class” about ROIC performance. Mauboussin says:

We know a small subset of companies generate persistently attractive ROICs—levels that cannot be attributed solely to chance—but we are not clear about the underlying causal factors. Our sense is most models assume financial performance that is unduly favorable given the forces of chance and competition.

  • Models often contain errors due to “hidden assumptions.” Mauboussin has identified errors in two distinct areas:

First, analysts frequently project growth, driven by sales and operating profit margins, independent of the investment needs necessary to support that growth. As a result, both incremental and aggregate ROICs are too high. A simple way to check for this error is to add an ROIC line to the model. An appreciation of the degree of serial correlations in ROICs provides perspective on how much ROICs are likely to improve or deteriorate.

The second error is with the continuing, or terminal, value in a discounted cash flow (DCF) model. The continuing value component of a DCF captures the firm’s value for the time beyond the explicit forecast period. Common estimates for continuing value include multiples (often of earnings before interest, taxes, depreciation, and amortization—EBITDA) and growth in perpetuity. In both cases, unpacking the underlying assumptions shows impossibly high future ROICs.
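Both of Mauboussin’s hidden-assumption checks are easy to automate. Here’s a minimal sketch of the first one — adding an ROIC line to a toy driver-based forecast model. Every input below is a made-up illustrative number, not one of Mauboussin’s:

```python
# Toy driver-based forecast model with an explicit ROIC line, per Mauboussin's
# suggested sanity check. Every input is an illustrative assumption.

sales = 1000.0            # base-year sales
invested_capital = 800.0  # base-year invested capital
nopat_margin = 0.10       # after-tax operating margin
sales_growth = 0.12       # projected annual sales growth
reinvestment_rate = 0.20  # share of NOPAT reinvested to fund that growth

for year in range(1, 6):
    prior_nopat = sales * nopat_margin
    sales *= 1 + sales_growth
    nopat = sales * nopat_margin
    new_investment = nopat * reinvestment_rate
    invested_capital += new_investment
    # change in NOPAT per dollar of new investment (a rough incremental ROIC)
    incremental_roic = (nopat - prior_nopat) / new_investment
    aggregate_roic = nopat / invested_capital
    print(f"Year {year}: aggregate ROIC {aggregate_roic:.1%}, "
          f"incremental ROIC {incremental_roic:.1%}")
```

If the incremental ROIC line prints at 50 percent or more, as it does here, the growth assumptions are almost certainly outrunning the investment needed to support them — Mauboussin’s first error in action.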

  • Models often underestimate the difficulty in sustaining high growth and returns. Few companies sustain rapid growth rates, and predicting which companies will succeed in doing so is very challenging:

Exhibit 12 illustrates this point. The distribution on the left is the actual 10-year sales growth rate for a large sample of companies with base year revenues of $500 million, which has a mean of about six percent. The distribution on the right is the three-year earnings forecast, which has a 13 percent mean and no negative growth rates. While earnings growth does tend to exceed sales growth by a modest amount over time, these expected growth rates are vastly higher than what is likely to appear. Further, as we saw earlier, there is greater persistence in sales growth rates than in earnings growth rates.

  • Models should be constructed “probabilistically.”

One powerful benefit to the outside view is guidance on how to think about probabilities. The data in Exhibit 5 offer an excellent starting point by showing where companies in each of the ROIC quintiles end up. At the extremes, for instance, we can see it is rare for really bad companies to become really good, or for great companies to plunge to the depths, over a decade.

For me, the following exhibit is the most important chart in the entire paper. It’s Mauboussin’s visualization of the probabilities. He writes:

Assume you randomly draw a company from the highest ROIC quintile in 1997, where the median ROIC less cost of capital spread is in excess of 20 percent. Where will that company end up in a decade? Exhibit 13 shows the picture: while a handful of companies earn higher economic profit spreads in the future, the center of the distribution shifts closer to zero spreads, with a small group slipping to negative.
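That exhibit can be turned into a crude base-rate tool. Here’s a minimal sketch, with a made-up destination distribution and made-up per-quintile spreads standing in for Mauboussin’s actual Exhibit 5 and Exhibit 13 figures:

```python
# Decade-ahead destination probabilities for a firm drawn from the top ROIC
# quintile, with a rough ROIC-less-WACC spread per quintile. All numbers are
# illustrative stand-ins, not Mauboussin's actual Exhibit 5/13 figures.
destination_probs = {"Q1": 0.41, "Q2": 0.25, "Q3": 0.14, "Q4": 0.10, "Q5": 0.10}
quintile_spread = {"Q1": 0.20, "Q2": 0.08, "Q3": 0.02, "Q4": -0.02, "Q5": -0.10}

# Probability-weighted economic profit spread a decade out.
expected_spread = sum(destination_probs[q] * quintile_spread[q]
                      for q in destination_probs)
print(f"Expected spread in ten years: {expected_spread:+.1%}")
# roughly +9% under these made-up inputs, versus +20% today
```

The direction, not the precise number, is the lesson: a model that holds a 20 percent spread flat for a decade is fighting the base rates.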

  • Crucial for net net investors is the need to understand the chances of a turnaround. Mauboussin says the chances are extremely low:

Investors often perceive companies generating subpar ROICs as attractive because of the prospects for unpriced improvements. The challenge to this strategy comes on two fronts. First, research shows low-performing companies get higher premiums than average-performing companies, suggesting the market anticipates change for the better. Second, companies don’t often sustain recoveries.

Defining a sustained recovery as three years of above-cost-of-capital returns following two years of below-cost returns, Credit Suisse research found that only about 30 percent of the sample population was able to engineer a recovery. Roughly one-quarter of the companies produced a non-sustained recovery, and the balance—just under half of the population—either saw no turnaround or disappeared. Exhibit 14 shows these results for nearly 1,200 companies in the technology and retail sectors.
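Those base rates translate directly into an expected-value discipline for net net buyers. A minimal sketch — the probabilities are the Credit Suisse figures above, but the payoff multiples are hypothetical assumptions of mine:

```python
# Credit Suisse base rates quoted above, paired with hypothetical payoff
# multiples on the purchase price (the payoffs are assumptions, not data).
outcomes = {
    "sustained recovery":     (0.30, 2.0),  # the turnaround sticks
    "non-sustained recovery": (0.25, 1.2),  # a brief pop that fades
    "no recovery / gone":     (0.45, 0.6),  # stagnates or disappears
}

# Probability-weighted outcome per dollar invested.
expected_multiple = sum(p * payoff for p, payoff in outcomes.values())
print(f"Expected value per $1 invested: ${expected_multiple:.2f}")  # $1.17
```

Under these invented payoffs a dollar becomes about $1.17 in expectation: the discount at purchase, not the hoped-for turnaround, does the heavy lifting.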


Mauboussin concludes with the important point that the objective of active investors is to “find mispriced securities or situations where the expectations implied by the stock price don’t accurately reflect the fundamental outlook:”

A company with great fundamental performance may earn a market rate of return if the stock price already reflects the fundamentals. You don’t get paid for picking winners; you get paid for unearthing mispricings. Failure to distinguish between fundamentals and expectations is common in the investment business.


Oh dear (Daily Reckoning via Guru Focus):

04/21/10 Gaithersburg, Maryland – Ken Heebner’s CGM Focus Fund was the best US stock fund of the past decade. It rose 18% a year, beating its nearest rival by more than three percentage points. Yet according to research by Morningstar, the typical investor in the fund lost 11% annually! How can that happen?

It happened because investors tended to take money out after a bad stretch and put it back in after a strong run. They sold low and bought high. Stories like this blow me away. Incredibly, these investors owned the best fund you could own over the last 10 years – and still managed to lose money.

Psychologically, it’s hard to do the right thing in investing, which often requires you to buy what has not done well of late so that you will do well in the future. We’re hard-wired to do the opposite.

I recently read James Montier’s Value Investing: Tools and Techniques for Intelligent Investment. It’s a meaty book that compiles a lot of research. Much of it shows how we are our own worst enemy.

One of my favorite chapters is called “Confused Contrarians and Dark Days for Deep Value.” Put simply, the main idea is that you can’t expect to outperform as an investor all the time. In fact, the best investors often underperform over short periods of time. Montier cites research by the Brandes Institute that shows how, in any three-year period, the best investors find themselves among the worst performers about 40% of the time!

See the rest of the article here.
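One more thought before you click through: the arithmetic behind the Heebner story is worth seeing once. The fund’s 18% is a time-weighted return; Morningstar’s minus 11% is dollar-weighted — an internal rate of return over the investors’ actual cash flows. A minimal sketch with invented cash flows shows how chasing a hot streak drives the two numbers apart:

```python
# Time-weighted vs dollar-weighted returns, with invented numbers.
fund_returns = [0.50, 0.40, -0.45]      # two hot years, then a bad one
contributions = [100.0, 100.0, 1000.0]  # a flood of money after the hot streak

value = 0.0
for c, r in zip(contributions, fund_returns):
    value = (value + c) * (1 + r)       # invest at the start of each year

cash_flows = [-c for c in contributions] + [value]  # outflows, then final value

def irr(flows, lo=-0.99, hi=1.0):
    """Dollar-weighted return: the rate at which the flows' NPV is zero."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(flows))
    for _ in range(100):                # bisection is plenty for an illustration
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

time_weighted = 1.0
for r in fund_returns:
    time_weighted *= 1 + r
print(f"Fund (time-weighted):       {time_weighted ** (1 / 3) - 1:+.1%} a year")
print(f"Investor (dollar-weighted): {irr(cash_flows):+.1%} a year")
```

With these made-up flows the fund compounds at about +5% a year while the performance-chasing investor’s dollar-weighted return is deeply negative — both numbers are true at once.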


In the Introduction to my 2003 copy of Philip A. Fisher’s Common Stocks and Uncommon Profits and Other Writings, his son, Kenneth L. Fisher, recounts a story about his father that has stuck with me since I first read it. For me, it speaks to Phil Fisher’s eclectic genius and quirky sense of humor:

But one night in the early 1970s, we were together in Monterey at one of the first elaborate dog-and-pony shows for technology stocks – then known as “The Monterey Conference” – put on by the American Electronics Association. At the Monterey Conference, Father exhibited another quality I never forgot. The conference announced a dinner contest. There was a card at each place setting, and each person was to write down what he or she thought the Dow Jones Industrials would do the next day, which is, of course, a silly exercise. The cards were collected. The person who came closest to the Dow’s change for the day would win a mini-color TV (which were hot new items then). The winner would be announced at lunch the next day, right after the market closed at one o’clock (Pacific time). Most folks, it turned out, did what I did – wrote down some small number, like down or up 5.57 points. I did that assuming that the market was unlikely to do anything particularly spectacular because most days it doesn’t. Now in those days, the Dow was at about 900, so 5 points was neither huge nor tiny. That night, back at the hotel room, I asked Father what he put down; and he said, “Up 30 points,” which would be more than 3 percent. I asked why. He said he had no idea at all what the market would do; and if you knew him, you knew that he never had a view of what the market would do on a given day. But he said that if he put down a number like I did and won, people would think he was just lucky – that winning at 5.57 meant beating out the guy that put down 5.5 or the other guy at 6.0. It would all be transparently seen as sheer luck. But if he won saying, “up 30 points,” people would think he knew something and was not just lucky. If he lost, which was probable and which he expected, no one would know what number he had written down, and it would cost him nothing. Sure enough, the next day, the Dow was up 26 points, and Father won by 10 points.

When it was announced at lunch that Phil Fisher had won and how high his number was, there were discernable “Ooh” and “Ahhhh” sounds all over the few-hundred-person crowd. There was, of course, the news of the day, which attempted to explain the move; and for the rest of the conference, Father readily explained to people a rationale for why he had figured out all that news in advance, which was pure fiction and nothing but false showmanship. But I listened pretty carefully, and everyone he told all that to swallowed it hook, line, and sinker. Although he was socially ill at ease always, and insecure, I learned that day that my father was a much better showman than I had ever fathomed. And, oh, he didn’t want the mini-TV because he had no use at all for change in his personal life. So he gave it to me and I took it home and gave it to mother, and she used it for a very long time.

Common Stocks and Uncommon Profits and Other Writings is, of course, required reading for all value investors. I believe the Introduction to the 2003 edition, written by Kenneth Fisher, should also be regarded as required reading. There Kenneth [edit: an investment superstar in his own right] shares intimate details about Phil from the perspective of a son working with the father. As the vignette above demonstrates, Phil understood human nature, but was socially awkward; he understood the folly of the narrative, but was prepared to provide a colorful one when it suited him; and he understood positively skewed risk:reward bets in all aspects of his life, and had the courage to take them, even if it meant standing apart from the crowd. What is most striking about this sketch of Phil Fisher is that it could just as easily be a discussion of Mike Burry or Warren Buffett. Perhaps great investors are like Leo Tolstoy’s happy families:

Happy families are all alike; every unhappy family is unhappy in its own way.


One of the most interesting ideas suggested by Ian Ayres’s book Super Crunchers is the role of humans in the implementation of a quantitative investment strategy. As we know from Andrew McAfee’s Harvard Business Review blog post, The Future of Decision Making: Less Intuition, More Evidence, and James Montier’s 2006 research report, Painting By Numbers: An Ode To Quant, in context after context, simple statistical models outperform expert judgements. Further, decision makers who, when provided with the output of the simple statistical model, wave off the model’s predictions tend to make poorer decisions than the model. The reason? We are overconfident in our abilities. We tend to think that restraints are useful for the other guy but not for us. Ayres provides a great example in his article, How computers routed the experts:

To cede complete decision-making power to a statistical algorithm is in many ways unthinkable.

The problem is that discretionary escape hatches have costs too. In 1961, the Mercury astronauts insisted on a literal escape hatch. They balked at the idea of being bolted inside a capsule that could only be opened from the outside. They demanded discretion. However, it was discretion that gave Liberty Bell 7 astronaut Gus Grissom the opportunity to panic upon splashdown. In Tom Wolfe’s memorable account, The Right Stuff, Grissom “screwed the pooch” when he prematurely blew the 70 explosive bolts securing the hatch before the Navy SEALs were able to secure floats. The space capsule sank and Grissom nearly drowned.

The natural question, then, is, “If humans can’t even be trusted with a small amount of discretion, what role do they play in the quantitative investment scenario?”

What does all this mean for human endeavour? If we care about getting the best decisions overall, there are many contexts where we need to relegate experts to supporting roles in the decision-making process. We, like the Mercury astronauts, probably can’t tolerate a system that forgoes any possibility of human override, but at a minimum, we should keep track of how experts fare when they wave off the suggestions of the formulas. And we should try to limit our own discretion to places where we do better than machines.

This is in many ways a depressing story for the role of flesh-and-blood people in making decisions. It looks like a world where human discretion is sharply constrained, where humans and their decisions are controlled by the output of machines. What, if anything, in the process of prediction can we humans do better than the machines?

The answer is that we formulate the factors to be tested. We hypothesise. We dream.

The most important thing left to humans is to use our minds and our intuition to guess at what variables should and should not be included in statistical analysis. A statistical regression can tell us the weights to place upon various factors (and simultaneously tell us how precisely it was able to estimate these weights). Humans, however, are crucially needed to generate the hypotheses about what causes what. The regressions can test whether there is a causal effect and estimate the size of the causal impact, but somebody (some body, some human) needs to specify the test itself.

So the machines still need us. Humans are crucial not only in deciding what to test, but also in collecting and, at times, creating the data. Radiologists provide important assessments of tissue anomalies that are then plugged into the statistical formulas. The same goes for parole officials who judge subjectively the rehabilitative success of particular inmates. In the new world of database decision-making, these assessments are merely inputs for a formula, and it is statistics – and not experts – that determine how much weight is placed on the assessments.
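That division of labour — humans nominate the candidate variables, the statistics set the weights — is nearly a one-liner in practice. Here’s a minimal sketch on synthetic data, in which a hypothetical “expert score” is just one input among several:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Human-chosen candidate variables (the hypotheses), synthetic for illustration:
expert_score = rng.normal(size=n)   # e.g. an analyst's subjective assessment
value_factor = rng.normal(size=n)   # e.g. a cheapness measure
noise_factor = rng.normal(size=n)   # a variable we suspect is irrelevant

# Simulated outcome: the "truth" weights value twice as heavily as the expert,
# and ignores the noise variable entirely.
outcome = 0.5 * expert_score + 1.0 * value_factor + rng.normal(scale=0.5, size=n)

# The regression, not the expert, decides how much each input matters.
X = np.column_stack([expert_score, value_factor, noise_factor])
weights, *_ = np.linalg.lstsq(X, outcome, rcond=None)
for name, w in zip(["expert_score", "value_factor", "noise_factor"], weights):
    print(f"{name}: {w:+.2f}")
```

The regression recovers roughly +0.5 for the expert, +1.0 for the value measure and about zero for the irrelevant variable. The expert still matters — but as an input whose weight the data decides, which is exactly Ayres’s point about radiologists and parole officials.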

In investment terms, this means honing the strategy. LSV Asset Management, described by James Montier as a “fairly normal” quantitative fund (as opposed to “rocket scientist uber-geeks”), and whose principals wrote the landmark Contrarian Investment, Extrapolation and Risk paper, describes the ongoing role of humans in its funds as follows:

A proprietary investment model is used to rank a universe of stocks based on a variety of factors we believe to be predictive of future stock returns. The process is continuously refined and enhanced by our investment team although the basic philosophy has never changed – a combination of value and momentum factors.

The blasphemy about momentum aside, the refinement and enhancement process sounds like fun to me.
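A stylized version of that kind of ranking model is easy to write down. A minimal sketch — the factor definitions, the toy universe and the 50/50 weighting are my assumptions, not LSV’s:

```python
# Rank a toy universe on a 50/50 blend of value and momentum, LSV-style in
# spirit only: the factors, weights and universe are illustrative assumptions.
universe = {
    # ticker: (book_to_market, twelve_month_return)
    "AAA": (1.40, -0.05),
    "BBB": (0.30,  0.40),
    "CCC": (0.90,  0.15),
    "DDD": (1.10,  0.25),
}

def percentile_rank(scores):
    """Map each ticker to a 0-1 rank, highest raw score ranked 1.0."""
    ordered = sorted(scores, key=scores.get)
    return {t: i / (len(ordered) - 1) for i, t in enumerate(ordered)}

value_rank = percentile_rank({t: f[0] for t, f in universe.items()})     # cheap
momentum_rank = percentile_rank({t: f[1] for t, f in universe.items()})  # strong

combined = {t: 0.5 * value_rank[t] + 0.5 * momentum_rank[t] for t in universe}
for ticker in sorted(combined, key=combined.get, reverse=True):
    print(f"{ticker}: {combined[ticker]:.2f}")
```

The blend surfaces the stock that is both cheap and recovering, which is the intuition behind pairing the two factors.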


Aswath Damodaran, a Professor of Finance at the Stern School of Business, has an interesting post on his blog Musings on Markets, Transaction costs and beating the market. Damodaran’s thesis is that transaction costs – broadly defined to include brokerage commissions, spread and the “price impact” of trading (which I believe is an important issue for some strategies) – foil, in the real world, the investment strategies that beat the market in back-tests. He argues that transaction costs are also the reason why the “average active portfolio manager” underperforms the index by about 1% to 1.5%. I agree with Damodaran. The long-term, successful practical application of any investment strategy is difficult, and is made more so by all of the frictional costs that the investor encounters. That said, I see no reason why a systematic application of some value-based investment strategies should not outperform the market even after taking into account those transaction costs and taxes. That’s a bold statement, and it calls for equally extraordinary evidence in support, which I do not possess. Regardless, here’s my take on Damodaran’s article.

First, Damodaran makes the point that even well-researched, back-tested, market-beating strategies underperform in practice:

Most of these beat-the-market approaches, and especially the well researched ones, are backed up by evidence from back testing, where the approach is tried on historical data and found to deliver “excess returns”. Ergo, a money making strategy is born.. books are written.. mutual funds are created.

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies, does not beat the market and delivers about 1-1.5% less than the index. That number has remained surprisingly stable over the last four decades and has persisted through bull and bear markets. Worse, this underperformance cannot be attributed to “bad” portfolio managers who drag the average down, since there is very little consistency in performance. Winners this year are just as likely to be losers next year…

Then he explains why he believes market-beating strategies that work on paper fail in the real world. The answer? Transaction costs:

So, why do portfolios that perform so well in back testing not deliver results in real time? The biggest culprit, in my view, is transactions costs, defined to include not only the commission and brokerage costs but two more significant costs – the spread between the bid price and the ask price and the price impact you have when you trade. The strategies that seem to do best on paper also expose you the most to these costs. Consider one simple example: Stocks that have lost the most over the previous year seem to generate much better returns over the following five years than stocks that have done the best. This “loser” stock strategy was first listed in the academic literature in the mid-1980s and greeted as vindication by contrarians. Later analysis showed, though, that almost all of the excess returns from this strategy come from stocks that have dropped to below a dollar (the biggest losing stocks are often susceptible to this problem). The bid-ask spread on these stocks, as a percentage of the stock price, is huge (20-25%) and the illiquidity can also cause large price changes on trading – you push the price up as you buy and the price down as you sell. Removing these stocks from your portfolio eliminated almost all of the excess returns.
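The size of that haircut is easy to underestimate, so it’s worth doing the round-trip arithmetic once, using a 20 percent spread of the kind Damodaran describes:

```python
# Round-trip cost of crossing a wide bid-ask spread (illustrative numbers).
bid, ask = 0.80, 1.00   # a 20% spread, as a fraction of the ask price

round_trip_cost = (ask - bid) / ask   # buy at the ask, sell at the bid
print(f"Round-trip spread cost: {round_trip_cost:.0%}")           # 20%

breakeven_gain = ask / bid - 1        # the bid must rise to your purchase price
print(f"Gain required just to break even: {breakeven_gain:.0%}")  # 25%
```

Before price impact, commissions or tax, the anomaly has to clear a 25 percent hurdle — which is how a paper “excess return” disappears in practice.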

In support of his thesis, Damodaran gives the example of Value Line and its mutual funds:

In perhaps the most telling example of slips between the cup and lip, Value Line, the data and investment services firm, got great press when Fischer Black, noted academic and believer in efficient markets, did a study where he indicated that buying stocks ranked 1 in the Value Line timeliness indicator would beat the market. Value Line, believing its own hype, decided to start mutual funds that would invest in its best ranking stocks. During the years that the funds have been in existence, the actual funds have underperformed the Value Line hypothetical fund (which is what it uses for its graphs) significantly.

Damodaran’s argument is particularly interesting to me in the context of my recent series of posts on quantitative value investing. For those new to the site, my argument is that a systematic application of deep value methodologies like Benjamin Graham’s liquidation strategy (for example, as applied in Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update) or a low price-to-book strategy (as described in Lakonishok, Shleifer, and Vishny’s Contrarian Investment, Extrapolation and Risk) can lead to exceptional long-term investment returns in a fund.

When Damodaran refers to “the price impact you have when you trade” he highlights a very important reason why a strategy in practice will underperform its theoretical results. As I noted in my conclusion to Intuition and the quantitative value investor:

The challenge is making the sample mean (the portfolio return) match the population mean (the screen). As we will see, the real world application of the quantitative approach is not as straight-forward as we might initially expect because the act of buying (selling) interferes with the model.

A strategy in practice will underperform its theoretical results for two reasons:

  1. The strategy in back-test doesn’t have to deal with what I call the “friction” it encounters in the real world. I define “friction” as brokerage, spread and tax, all of which take a mighty bite out of performance. Brokerage and spread are two of Damodaran’s transaction costs; tax is my own addition. One can account for brokerage and tax in the model, but spread is arguably the most difficult to factor in prospectively – it is always going to be unknowable before the event (see the sketch following this list).
  2. The act of buying or selling interferes with the market (I think it’s a Schrödinger’s cat-like paradox, but then I don’t understand quantum superpositions). This is best illustrated at the micro end of the market. Those of us who traffic in the Graham sub-liquidation value boat trash learn to live with wide spreads and a lack of liquidity. We use limit orders and sit on the bid (ask) until we get filled. No-one is buying (selling) “at the market,” because, for the most part, there ain’t no market until we get on the bid (ask). When we do manage to consummate a transaction, we’re affecting the price. We’re doing our little part to return it to its underlying value, such is the wonderful phenomenon of value investing mean reversion in action. The back-test / paper-traded strategy doesn’t have to account for the effect its own buying or selling has on the market, and so should perform better in theory than it does in practice.
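Here’s the sketch: a back-of-the-envelope model of the gap between a paper return and a net return. Every cost level is an assumption to be adjusted to taste, not an estimate of any particular strategy:

```python
# Paper return vs net return after "friction". Every cost level here is an
# assumption to be adjusted, not an estimate of any particular strategy.
gross_return = 0.20   # the back-test's annual return
turnover = 1.0        # the portfolio turns over once a year
brokerage = 0.002     # 0.2% commission per side
half_spread = 0.03    # 3% conceded per side -- wide, as for net nets
tax_rate = 0.25       # tax on realized gains (a simplification)

trading_cost = turnover * 2 * (brokerage + half_spread)  # full round trip
pre_tax = gross_return - trading_cost
net_return = pre_tax * (1 - tax_rate) if pre_tax > 0 else pre_tax
print(f"Paper: {gross_return:.1%}  ->  Net: {net_return:.1%}")  # 20.0% -> 10.2%
```

On these assumptions nearly half of a 20% paper return is consumed before it reaches the investor — and the price impact of one’s own buying, which a static sketch like this can’t capture, comes on top.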

If ever the real-world application of an investment strategy should underperform its theoretical results, Graham liquidation value is where I would expect it to happen. The wide spreads and lack of liquidity mean that even a small, individual investor will likely underperform the back-test results. Note, however, that it does not necessarily follow that the Graham liquidation value strategy will underperform the market, just the model. I continue to believe that a systematic application of Graham’s strategy will beat the market in practice.

I have one small quibble with Damodaran’s otherwise well-argued piece. He writes:

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies does not beat the market and delivers about 1-1.5% less than the index.

There’s a little rhetorical sleight of hand in this statement (which I’m guilty of on occasion in my haste to get a post finished). Evidence that the “average active portfolio manager” does not beat the market is not evidence that these strategies don’t beat the market in practice. I’d argue that the “average active portfolio manager” is not using these strategies. I don’t really know what they’re doing, but I’d guess the institutional imperative calls for them to hug the index and over- or under-weight particular industries, sectors or companies on the basis of a story (“Green is the new black,” “China will consume us back to the boom,” “house prices never go down,” “the new dot com economy will destroy the old bricks-and-mortar economy” etc). Yes, most portfolio managers underperform the index in the order of 1% to 1.5%, but I think they do so because they are, in essence, buying the index and extracting from the index’s performance their own fees and other transaction costs. They are not using the various strategies identified in the academic or popular literature. That small point aside, I think the remainder of the article is excellent.

In conclusion, I agree with Damodaran’s thesis that transaction costs in the form of brokerage commissions, spread and the “price impact” of trading make many apparently successful back-tested strategies unusable in the real world. I believe that the results of any strategy’s application in practice will underperform its theoretical results because of friction and the paradox of Schrödinger’s cat’s brokerage account. That said, I still see no reason why a systematic application of Graham’s liquidation value strategy or LSV’s low price-to-book value strategy can’t outperform the market even after taking into account these frictional costs and, in particular, wide spreads.

Hat tip to the Ox.


Speculating about the level of the market is a pastime for fools and knaves, as I have amply demonstrated in the past (or, as Edgar Allan Poe would have it, “I have great faith in fools — self-confidence my friends will call it.”). In April last year I ran a post, Three ghosts of bear markets past, on DShort.com’s series of charts showing how the current bear market compared to three other bear markets: the Dow Crash of 1929 (1929-1932), the Oil Crisis (1973-1974) and the Tech Wreck (2000-2002). At that time the market was up 24.4% from its low, and I said,

Anyone who thinks that the bounce means that the current bear market is over would do well to study the behavior of bear markets past (quite aside from simply looking at the plethora of data about the economy in general, the cyclical nature of long-run corporate earnings and price-earnings multiples over the same cycle). They might find it a sobering experience.

Now the market is up almost 60% from its low, which just goes to show how little I know:

While none of us are actually investing with regard to the level of the market – we’re all analyzing individual securities – I still find it interesting to see how the present aggregate experience compares to the experience in other epochs in investing. One other chart by DShort.com worth seeing is the “Three Mega-Bears” chart, which treats the recent decline as part of the decline from the “Tech Wreck” on the basis that the peak pre-August 2007 did not exceed the peak pre-Tech Wreck after adjusting for inflation:

It’s interesting for me because it compares the Dow Crash of 1929 (from which Graham forged his “Net Net” strategy) to the present experience in the US and Japan (both of which offer the most Net-Net opportunities globally). Where are we going from here? Que sais-je? (“What do I know?”) The one thing I do know is that 10 more years of a down or sideways market is, unfortunately, a real possibility.


Regular readers of Greenbackd know that I’m no fan of “the narrative,” which is the story an investor concocts to explain the various pieces of data the investor gathers about a potential investment. It’s something I’ve been thinking about a great deal recently as I grapple with the merits of an investment in Japanese net current asset value stocks. The arguments for and against investing in such opportunities are as follows:

Fer it: Net current asset value stocks have performed remarkably well throughout the investing world and over time. In support of this argument I cite generally Graham’s experience, Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update paper, Testing Ben Graham’s Net Current Asset Value Strategy in London, a paper from the business school of the University of Salford in the UK, and, more specifically, Bildersee, Cheh and Zutshi’s The performance of Japanese common stocks in relation to their net current asset values, James Montier’s Graham’s net-nets: outdated or outstanding?, and Dylan Grice’s Are Japanese equities worth more dead than alive.

Agin it: Japan is a special case because it has weak shareholder rights and a culture that regards corporations as “social institutions with a duty to provide stable employment and consider the needs of employees and the community at large, not just shareholders.” In support of this argument I cite the recent experiences of activist investors in Japan, and Bildersee, Cheh and Zutshi’s The performance of Japanese common stocks in relation to their net current asset values (yes, it supports both sides of the argument). Further, the prospects for Japan’s economy are poor due to its large government debt and ageing population.

How to break the deadlock? Montier provides a roadmap in his excellent Behavioural Investing:

We appear to use stories to help us reach decisions. In the ‘rational’ view of the world we observe the evidence, we then weigh the evidence, and finally we come to our decision. Of course, in the rational view we all collect the evidence in a well-behaved unbiased fashion. … Usually we are prone to only look for the information that happens to agree with us (confirmatory bias), etc.

However, the real world of behaviour is a long way from the rational viewpoint, and not just in the realm of information gathering. The second stage of the rational decision is weighing the evidence. However, as the diagram below shows, a more commonly encountered approach is to construct a narrative to explain the evidence that has been gathered (the story model of thinking).

Hastie and Pennington (2000) are the leading advocates of the story view (also known as explanation-based decision-making). The central hypothesis of the explanation-based view is that the decision maker constructs a summary story of the evidence and then uses this story, rather than the original raw evidence, to make their final decision.

All too often investors are sucked into a plausible-sounding story. Indeed, underlying some of the most noted bubbles in history are kernels of truth.

As to the last point, arguably, the converse is also true. Investors have missed some great returns because the ugly stories about companies or markets were so compelling.

There are several points that are not contentious about an investment in Japan. The data suggests to me and to everyone else that there are a large number of net current asset value bargains available there. The contention is whether these net current asset value stocks will perform as they have in other countries, or whether they are destined to remain net current asset value bargains, the classic “value traps.” My own penchant for value investing, and quantitative value investing in particular, makes this a reasonably simple matter to resolve. I am going to invest in Japanese net current asset value stocks. Here are the bases for my reasoning:

  • I believe that value investing works. I believe that this is the case because it appeals to me as a matter of logic. I also believe that the data supports this position (see Ben Graham’s Net Current Asset Values: A Performance Update or Lakonishok, Shleifer, and Vishny’s Contrarian Investment, Extrapolation and Risk). Where a stock trades at a significant discount to its value, I am going to take a position.
  • I believe that Graham’s net current asset value works. In support of this proposition I cite the papers listed in the “Fer it” argument above.
  • I believe that simple quantitative models consistently outperform expert judgements. In support of this proposition generally I cite James Montier’s Painting By Numbers: An Ode To Quant. Where the data looks favorable to me, I am going to take a position, and I’m going to ignore the qualitative factors.
  • I believe that value is a good predictor of returns at a market level. In support I cite the Dimson, Marsh and Staunton research. I am not dissuaded from investing in a country simply because its growth prospects are low. Value is the signal predictor of returns.

The arguments militating against investing in Japan sound to me like the arguments militating against any investment in an NCAV stock, which is to say that they are arguments rooted in the narrative. I’ve never taken a position in an NCAV stock that had a good story attached to it. They have always looked ugly from an earnings or narrative perspective (otherwise, they’d be trading at a higher price). As far as I can tell, this situation is no different, other than the fact that it is in a different country and the country has economic problems (which I would ignore in the usual case anyway). While the research specific to NCAV stocks in Japan is not as compelling as I would like it to be, I always bear in mind the lessons of Taleb’s “naive empiricist,” which is to say that the data are useful only up to a point.
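Since the decision rule is mechanical in any case, it’s worth stating it precisely. A minimal sketch of the classic Graham net net test — the function, its parameters and the sample company are hypothetical illustrations:

```python
def is_graham_net_net(price, shares, current_assets, total_liabilities,
                      margin=2 / 3):
    """Classic Graham test: market capitalization below two-thirds of net
    current asset value (current assets minus ALL liabilities)."""
    ncav = current_assets - total_liabilities
    if ncav <= 0:
        return False
    return price * shares < margin * ncav

# A hypothetical Japanese small cap, figures in millions of yen:
print(is_graham_net_net(price=250, shares=1.0,
                        current_assets=900, total_liabilities=450))  # True
```

Everything qualitative — the shareholder-rights story, the demographic story — stays out of the function by design.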

This is not to say that I have any great conviction about Japan or Japanese net current asset value stocks. Far from it. I fully expect, as I always do when taking a position in any stock, to be wrong and have the situation follow the narrative. Fortunately, the decision is out of my hands. I’m going to follow my simple quantitative model – the Graham net current asset value strategy – and take some positions in Japanese net nets. The rest is for the goddess Fortuna.

Read Full Post »

In “Black box” blues I argued that automated trading was a potentially dangerous element to include in a quantitative investment strategy, citing the “program trading / portfolio insurance” crash of 1987. When the market started falling in 1987, the computer programs caused the writers of derivatives to sell on every down-tick, which some suggest exacerbated the crash. Here’s New York University’s Richard Sylla discussing the causes (courtesy of Wikipedia):

The internal reasons included innovations with index futures and portfolio insurance. I’ve seen accounts that maybe roughly half the trading on that day was a small number of institutions with portfolio insurance. Big guys were dumping their stock. Also, the futures market in Chicago was even lower than the stock market, and people tried to arbitrage that. The proper strategy was to buy futures in Chicago and sell in the New York cash market. It made it hard — the portfolio insurance people were also trying to sell their stock at the same time.

The Economist’s Buttonwood column has an article, Model behaviour: The drawbacks of automated trading, which argues along the same lines that automated trading is potentially problematic where too many managers follow the same approach:

[If] you feed the same data into computers in search of anomalies, they are likely to come up with similar answers. This can lead to some violent market lurches.

Buttonwood divides the quantitative approaches to investing into three types, and considers each one’s potential either to provide a stabilizing influence on the market or to throw fuel on the fire in a crash:

1. Trend-following, the basis of which is that markets have “momentum” (a toy version of such a rule appears after this list):

The model can range across markets and go short (bet on falling prices) as well as long, so the theory is that there will always be some kind of trend to exploit. A paper by AQR, a hedge-fund group, found that a simple trend-following system produced a 17.8% annual return over the period from 1985 to 2009. But such systems are vulnerable to turning-points in the markets, in which prices suddenly stop rising and start to fall (or vice versa). In late 2009 the problem for AHL seemed to be that bond markets and currencies, notably the dollar, seemed to change direction.

2. Value, which seeks securities that are cheap according to “a specific set of criteria such as dividend yields, asset values and so on”:

The value effect works on a much longer time horizon than momentum, so that investors using those models may be buying what the momentum models are selling. The effect should be to stabilise markets.

3. Arbitrage, which exploits price differentials between securities where no such price differential should exist:

This ceaseless activity, however, has led to a kind of arms race in which trades are conducted faster and faster. Computers now try to take advantage of arbitrage opportunities that last milliseconds, rather than hours. Servers are sited as close as possible to stock exchanges to minimise the time taken for orders to travel down the wires.
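Of the three, trend-following is the simplest to make concrete. Here’s the toy version promised above — the moving-average rule, the window and the price series are all invented for illustration and bear no relation to AQR’s or AHL’s actual models:

```python
# Toy trend-following rule: long when the price closes above its moving
# average, short when below. Window and prices invented for illustration.
prices = [100, 102, 105, 103, 108, 112, 110, 96, 90, 85, 88, 92]
window = 4

positions = []
for i in range(window, len(prices)):
    moving_average = sum(prices[i - window:i]) / window
    positions.append(1 if prices[i] > moving_average else -1)

print(positions)  # [1, 1, 1, -1, -1, -1, -1, 1]
```

Note that the rule stays long through the initial break and only flips short after prices have already fallen — a step behind at the turning-point, which is exactly the vulnerability Buttonwood describes.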

In arguing that automated trading can be problematic where too many managers pursue the same strategy, Buttonwood gives the example of the August 2007 crash, which sounds eerily similar to Sylla’s explanation for the 1987 crash above:

A previous example occurred in August 2007 when a lot of them got into trouble at the same time. Back then the problem was that too many managers were following a similar approach. As the credit crunch forced them to cut their positions, they tried to sell the same shares at once. Prices fell sharply and portfolios that were assumed to be well-diversified turned out to be highly correlated.

It is interesting that over-crowding is the same problem identified by GSAM in Goldman Claims Momentum And Value Quant Strategies Now Overcrowded, Future Returns Negligible. In that presentation, Robert Litterman, Goldman Sachs’ Head of Quantitative Resources, said:

Computer-driven hedge funds must hunt for new areas to exploit as some areas of making money have become so overcrowded they may no longer be profitable, according to Goldman Sachs Asset Management. Robert Litterman, managing director and head of quantitative resources, said strategies such as those which focus on price rises in cheaply-valued stocks, which latch onto market momentum or which trade currencies, had become very crowded.

Litterman argued that only special situations and event-driven strategies that focus on mergers or restructuring provide opportunities for profit (perhaps because these strategies require human judgement and interaction):

What we’re going to have to do to be successful is to be more dynamic and more opportunistic and focus especially on more proprietary forecasting signals … and exploit shorter-term opportunistic and event-driven types of phenomenon.

As we’ve seen before, human judgement is often flawed. Buttonwood says:

Computers may not have the human frailties (like an aversion to taking losses) that traditional fund managers display. But turning the markets over to the machines will not necessarily make them any less volatile.

And we’ve come full circle: humans are flawed, computers are the answer; computers are flawed, humans are the answer. How to break the deadlock? I think it’s time for Taleb’s skeptical empiricist to emerge. More to come.


The New Yorker has John Cassidy’s interview with Richard Thaler, the University of Chicago economist and co-author (along with Werner F.M. DeBondt) of Further Evidence on Investor Overreaction and Stock Market Seasonality, a paper I like to cite in relation to low P/B quintiles and earnings mean reversion. Thaler is also the “Thaler” in Fuller & Thaler Asset Management, which James Montier identifies in his 2006 research report Painting By Numbers: An Ode To Quant as a “fairly normal” quantitative fund (as opposed to “rocket scientist uber-geeks”) with an “admirable track [record] in terms of outperformance.” I diverge from Thaler on a number of issues, but on these two I think he’s right:

On the remnants of efficient markets hypothesis:

Well, I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened. Some people thought that they could make a lot of money without taking more risk, and actually they couldn’t. So either you can’t beat the market, or beating the market is very difficult—everybody agrees with that. My own view is that you can [beat the market] but it is difficult.

The question of whether asset prices get things right is where there is a lot of dispute. Gene [Fama] doesn’t like to talk about that much, but it’s crucial from a policy point of view. We had two enormous bubbles in the last decade, with massive consequences for the allocation of resources.

On stock market bubbles:

[Cassidy] When I spoke to Fama, he said he didn’t know what a bubble is—he doesn’t even like the term.

[Thaler] I think we know what a bubble is. It’s not that we can predict bubbles—if we could we would be rich. But we can certainly have a bubble warning system. You can look at things like price-to-earnings ratios, and price-to-rent ratios. These were telling stories, and the story they seemed to be telling was true.
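A warning system of the kind Thaler describes needn’t be sophisticated. A minimal sketch — the valuation series and the two-sigma threshold are invented for illustration:

```python
# Crude bubble-warning flag: is the current ratio far above its own history?
# The series and the two-sigma threshold are invented for illustration.
historical_pe = [14, 16, 12, 15, 18, 13, 17, 15, 14, 16]
current_pe = 28

mean = sum(historical_pe) / len(historical_pe)
variance = sum((x - mean) ** 2 for x in historical_pe) / len(historical_pe)
threshold = mean + 2 * variance ** 0.5   # two standard deviations above mean

if current_pe > threshold:
    print(f"Warning: P/E of {current_pe} vs a historical mean of {mean:.1f}")
```

It wouldn’t tell you when the bubble pops — Thaler’s point exactly — but a flag like this would have been flashing through both of the last decade’s bubbles.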

And I love this line in relation to the impact of the recent crisis on behavioral economics:

I think it is seen as a watershed, but we have had a lot of watersheds. October 1987 was a watershed. The Internet stock bubble was a watershed. Now we have had another one. What is the old line—that science progresses funeral by funeral? Nobody changes their mind.

Science progresses funeral by funeral. Nobody changes their mind. It seems to me economics is not the only discipline that progresses funeral by funeral.

