
Archive for the ‘Quantitative investment’ Category

We wrote an article for the April issue of Value Investing Letter giving an overview of Quantitative Value, discussing the quantitative value model outlined in the book, and applying it to Apple Inc. (AAPL). The stock has been smashed up since then, and there was also some big news yesterday: AAPL is going to return $100 billion to its shareholders by the end of 2015. So I’m highlighting the article here. To put that $100 billion capital return in context, AAPL closed Tuesday with a market capitalization of $380 billion. Incredibly, its $145 billion cash pile won’t shrink, because the new buyback brings its return of capital up to about the level of its current free cash flow. Weirdly, it’s now regarded as the “animal investors like least: a slow-growing tech stock.” From our earlier article:

We ran our model on March 13, 2013, finding Apple Inc. (AAPL) to be one of the highest quality stocks in the bargain bin. AAPL designs, manufactures and markets a variety of mobile devices, including the iPhone, iPad, and iPod, along with Mac products, operating systems, cloud products, related software and services, and many other products. Its devices are ubiquitous, and are catnip to consumers, driving one of the most valuable brands in the world. Why has the company shed over a third of its market capitalization since peaking near $700 per share in September of 2012?

In short, this former hedge fund darling has become the company that everyone loves to hate. iPod and Mac sales are down from last year. The media has pounced on reports of weakness in the sale of the iPhone 5 and now questions whether AAPL will be competitive with the newest smartphones. The market did not react well to AAPL’s latest earnings announcement, and dozens of analysts have reduced their price targets over the past few months. So what’s going on here? Is AAPL again headed for the technology dustbin of history? Or might this be a manifestation of investors’ behavioral bias?

Our model leads us to believe that AAPL offers exceptional franchise characteristics and is statistically cheap, with an EBIT/TEV yield of nearly 21 percent, which is among the very cheapest within the cheapest decile of stocks in the market. Below are some additional highlights from the quantitative output of our screens, which will give the reader a high-level view of the company’s profile, and then we will dig deeper on some details. Clearly, the fact that Mr. Market is offering us a company of this quality at this price should raise some questions.
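For readers who want to see how the headline number is built, the EBIT/TEV yield can be computed directly from the financial statements. The sketch below is illustrative only — the inputs are made up, not AAPL’s actual filings, and the exact TEV definition varies slightly by data provider:

```python
def ebit_tev_yield(ebit, market_cap, total_debt, cash, preferred=0.0, minority_interest=0.0):
    """EBIT divided by total enterprise value (TEV).

    TEV = market capitalization + total debt + preferred stock
          + minority interest - cash and equivalents.
    """
    tev = market_cap + total_debt + preferred + minority_interest - cash
    return ebit / tev

# Illustrative inputs only: a firm with a $380B market cap, no debt, a $145B
# cash pile, and $50B of EBIT has a TEV of $235B and an EBIT/TEV yield of ~21%.
print(round(ebit_tev_yield(ebit=50e9, market_cap=380e9, total_debt=0.0, cash=145e9), 3))  # 0.213
```

Note that a large cash balance raises the yield because it reduces TEV — part of why a cash-rich firm like AAPL can look so cheap on this metric.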

AAPL Summary Statistics (As of March 13, 2013)

To continue reading the article, please click here.

Order Quantitative Value from Wiley Finance, Amazon, or Barnes and Noble.

Click here if you’d like to read more on Quantitative Value, or connect with me on LinkedIn.

No position.

Read Full Post »

In November last year, AQR’s Cliff Asness released a great piece called “An Old Friend: The Stock Market’s Shiller P/E (.pdf)” dealing with some of the “current controversy” around the Shiller PE, most notably the claim that the real earnings used in the Shiller PE are lower than they would otherwise be because of two serious earnings recessions: the tail end of the 2000-2002 recession, and the monster 2008 financial crisis.

The Shiller P/E represents what an investor pays for the last 10 years’ average real S&P 500 earnings. The ten-year average is believed to be a more stable measure than a P/E based on a single year of earnings, and therefore more predictive of long-term future stock returns and earnings. Asness notes that the selection of a ten-year average is arbitrary (“You would be hard-pressed to find a theoretical argument favoring it over, say, nine or 12 years”), but believes that it is “reasonable and intuitive.”
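The construction is simple enough to sketch in a few lines. This is a hedged illustration, not Shiller’s actual code, and the earnings series below is invented:

```python
import statistics

def shiller_pe(price, real_earnings_history, years=10):
    # Price over the trailing N-year average of real (inflation-adjusted)
    # earnings. years=10 follows Shiller's convention; as Asness notes,
    # the choice of ten is essentially arbitrary.
    trailing = real_earnings_history[-years:]
    return price / statistics.mean(trailing)

# Hypothetical real S&P 500 earnings for the trailing ten years:
earnings = [60, 65, 70, 50, 30, 55, 75, 85, 90, 95]
print(round(shiller_pe(1560.0, earnings), 1))  # 23.1
```

With the two recession troughs (50 and 30) averaged in, the ten-year mean (67.5 here) sits well below recent peak earnings — exactly the smoothing the critics object to and Asness defends.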

Asness asks, “[W]hy do some people dismiss today’s high Shiller P/E, saying it’s not a problem? Why do they forecast much higher long-term real stock returns than implied by the Shiller P/E?”:

They point out that we had two serious earnings recessions recently (though only the tail end of the 2000-2002 event makes it into today’s Shiller P/E), including one that was a doozy following the 2008 financial crisis.

So we have to ask ourselves, is the argument against using the Shiller P/E today right? Are the past 10 years of real earnings too low to be meaningful going forward (meaning the current Shiller P/E is biased too high)?

Asness shows the following chart of a rolling average of 10-year real S&P 500 earnings (a backwards looking 10-year average):

Asness 10 Year Rolling Average

The chart demonstrates that 10-year real earnings used in the Shiller P/E are currently slightly above their long-term trend. At their low after the financial crisis, they fell back to approximately long-term trend. Asness comments:

It has not, in fact, been a bad prior decade for real earnings! The core argument of today’s Shiller P/E critics is just wrong.

While the graph speaks for itself, there is some logic to go with the picture. Critics of the Shiller P/E point to the earnings destruction right after 2008 and ask how we can average in that period and think we have a meaningful number? After all, aren’t we averaging in a once-in-a-hundred-year event? But they usually do not object at all to the very high earnings, for several years, right before the bubble popped in 2008. One view of earnings is that the 2008 event stands alone. It didn’t have to happen, and doesn’t have relevance to the future and should be excluded from our calculations lest it bias us to be sour pusses. That is not my view (granted I’m a bit biased to sour puss in general). Another very different view is that the earnings destruction post 2008 was making up for some earnings that, for several years prior, were “too high”, essentially borrowed from the future. In this case, the post 2008 destruction is valid for inclusion as it’s simply correcting a past wrong. Rather than invalidate the Shiller method, the 2008 earnings destruction following the prior earnings boom is precisely why the CAPE was created! Not surprisingly I fall into this latter camp.

I think the above graph is a TKO. Those who say the Shiller P/E is currently “broken” have been knocked out.

So, according to Cliff Asness, despite the recessions in 2000-2002 and 2008, the real ten-year average of earnings used in the Shiller PE is slightly above its long-term trend. Note that the current Shiller PE multiple of 23.5 is also about 42 percent above its long-term average of 16.5. Together, these two observations make the market look very expensive indeed.

Read An Old Friend: The Stock Market’s Shiller P/E (.pdf).

Order Quantitative Value from Wiley Finance, Amazon, or Barnes and Noble.

Click here if you’d like to read more on Quantitative Value, or connect with me on LinkedIn.

Read Full Post »

In the How to beat The Little Book That Beats The Market (Parts 1, 2, and 3) series of posts I showed how in Quantitative Value we tested Joel Greenblatt’s Magic Formula (outlined in The Little Book That (Still) Beats the Market) and found that it had consistently outperformed the market, with lower relative risk than the market.

We sought to improve on it by creating a generic, academic alternative that we called “Quality and Price.” Quality and Price is the academic alternative to the Magic Formula because it draws its inspiration from academic research papers. We found the idea for the quality metric in an academic paper by Robert Novy-Marx called The Other Side of Value: Good Growth and the Gross Profitability Premium. Quality and Price substitutes for the Magic Formula’s ROIC a quality measure called gross profitability to total assets (GPA), defined as follows:

GPA = (Revenue − Cost of Goods Sold) / Total Assets

In Quality and Price, the higher a stock’s GPA, the higher the quality of the stock.

The price ratio, drawn from the early research into value investment by Eugene Fama and Ken French, is book value-to-market capitalization (BM), defined as follows:

BM = Book Value / Market Price

The Quality and Price strategy, like the Magic Formula, seeks to differentiate between stocks by equally weighting the quality and price metrics. Can we improve performance by seeking higher quality stocks in the value decile, rather than equal weighting the two factors?
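Before turning to that question, the equal-weighting scheme itself can be sketched in a few lines. This is a hypothetical illustration of the ranking mechanics, not the book’s actual implementation, and the fundamentals below are invented:

```python
def gpa(revenue, cogs, total_assets):
    # Gross profitability to total assets (Novy-Marx's quality signal).
    return (revenue - cogs) / total_assets

def bm(book_value, market_cap):
    # Book-to-market (the Fama-French value signal).
    return book_value / market_cap

def quality_and_price_rank(universe):
    # Equal-weight the two signals by summing each stock's normalized rank
    # on GPA and on BM; highest combined score (highest quality and
    # cheapest) comes first.
    n = len(universe)
    gpa_sorted = sorted(universe, key=lambda s: gpa(s["revenue"], s["cogs"], s["total_assets"]))
    bm_sorted = sorted(universe, key=lambda s: bm(s["book_value"], s["market_cap"]))
    gpa_rank = {s["ticker"]: i / (n - 1) for i, s in enumerate(gpa_sorted)}
    bm_rank = {s["ticker"]: i / (n - 1) for i, s in enumerate(bm_sorted)}
    scored = sorted(universe, key=lambda s: gpa_rank[s["ticker"]] + bm_rank[s["ticker"]], reverse=True)
    return [s["ticker"] for s in scored]

universe = [  # hypothetical firms
    {"ticker": "AAA", "revenue": 100, "cogs": 60, "total_assets": 80, "book_value": 50, "market_cap": 60},
    {"ticker": "BBB", "revenue": 200, "cogs": 150, "total_assets": 250, "book_value": 40, "market_cap": 200},
    {"ticker": "CCC", "revenue": 150, "cogs": 90, "total_assets": 100, "book_value": 90, "market_cap": 100},
]
print(quality_and_price_rank(universe))  # ['CCC', 'AAA', 'BBB']
```

Here CCC tops the list because it is both the most gross-profitable and the cheapest on book-to-market, so the equal-weighted score never has to trade one signal off against the other.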

In his paper The Quality Dimension of Value Investing, Novy-Marx considered this question. Novy-Marx’s rationale:

Value investors can also improve their performance by controlling for quality when investing in value stocks. Traditional value strategies formed on price signals alone tend to be short quality, because cheap firms are on average of lower quality than similar firms trading at higher prices. Because high quality firms on average outperform low quality firms, this quality deficit drags down the returns to traditional value strategies. The performance of value strategies can thus be significantly improved by explicitly controlling for quality when selecting stocks on the basis of price. Value strategies that buy (sell) cheap (expensive) firms from groups matched on the quality dimension significantly outperform value strategies formed solely on the basis of valuations.

His backtest method:

The value strategy that controls for quality is formed by first sorting the 500 largest non-financial firms each June into 10 groups of 50 on the basis of the quality signal. Within each of these deciles, which contain stocks of similar quality, the 15 with the highest value signals are assigned to the high portfolio, while the 15 with the lowest value signals are assigned to the low portfolio. This procedure ensures that the value and growth portfolios, which each hold 150 stocks, contain stocks of similar average quality.
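The procedure Novy-Marx describes can be sketched as follows. This is a simplified illustration assuming Python, generic `quality` and `value` scoring functions, and an evenly divisible universe — not the paper’s code:

```python
def quality_controlled_value(stocks, quality, value, n_deciles=10, n_per_decile=15):
    # Sort the universe into quality deciles, then, within each decile,
    # assign the cheapest names to the "high" (value) portfolio and the
    # most expensive to the "low" (growth) portfolio. A higher value
    # signal means a cheaper stock.
    by_quality = sorted(stocks, key=quality, reverse=True)
    decile_size = len(by_quality) // n_deciles
    high, low = [], []
    for d in range(n_deciles):
        decile = by_quality[d * decile_size:(d + 1) * decile_size]
        by_value = sorted(decile, key=value, reverse=True)
        high.extend(by_value[:n_per_decile])
        low.extend(by_value[-n_per_decile:])
    return high, low

# A 500-stock universe of hypothetical (quality, value) pairs yields two
# 150-stock portfolios drawn evenly from every quality decile:
universe = [(i % 10, 500 - i) for i in range(500)]
high, low = quality_controlled_value(universe, quality=lambda s: s[0], value=lambda s: s[1])
print(len(high), len(low))  # 150 150
```

Because every quality decile contributes exactly 15 names to each side, the value and growth portfolios end up with similar average quality by construction.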

Novy-Marx finds that the strategy “dramatically outperform[s]” portfolios formed on the basis of quality or value alone, but underperforms the Greenblatt-style joint strategy. From the paper:

The long/short strategy generated excess returns of 45 basis points per month, 50% higher than the 31 basis points per month generated by the unconditional quality strategy, despite running at lower volatility (10.4% as opposed to 12.2%). The long side outperformed the market by 32 basis points per month, 9 basis points per month more than the long-only strategy formed without regard for price. It managed this active return with a market tracking error volatility of only 5.9%, realizing an information ratio of 0.63, much higher than the information ratio of 0.42 realized on the tracking error of the unconditional long-only value strategy.

For comparison, Novy-Marx finds the Greenblatt-style joint 50/50 weighting generates higher returns:

The long/short strategy based on the joint quality and value signal generated excess returns of 61 basis points per month, twice that generated by the quality or value signals alone and a third higher than the market, despite running at a volatility of only 9.7%. The strategy realized a Sharpe ratio 0.75 over the sample, almost two and a half times that on the market over the same period, despite trading exclusively in the largest, most liquid stocks.

The long side outperformed the market by 35 basis points per month, with a tracking error volatility of only 5.7 percent, for a realized information ratio of 0.75. This information ratio is 15% higher than the 0.65 achieved running quality and value side by side. Just as importantly, it allows long-only investors to achieve a greater exposure to the high information ratio opportunities provided by quality and value. While the strategy’s 5.7% tracking error still provides a suboptimally small exposure to value and quality, this exposure is significantly larger than the long-only investor can obtain running quality alongside value.

And a pretty chart from the paper:

Novy-Marx 2.1

We tested the decile approach and the joint approach in Quantitative Value, substituting better performing value metrics and found different results. I’ll cover those next.

Read Full Post »

In March 1976 a Mr. Hartman L. Butler, Jr., C.F.A., sat down for an hour-long interview with Benjamin Graham at Graham’s home in La Jolla, California. Butler recorded the discussion on his cassette tape recorder and transcribed it into the following document. There are many great insights from Graham. Here are several of the best parts:

On the GEICO disaster unfolding at the time: 

It makes me shudder to think of the amounts of money they were able to lose in one year. Incredible! It is surprising how many of the large companies have managed to turn in losses of $50 million or $100 million in one year, in these last few years. Something unheard of in the old days. You have to be a genius to lose that much money.

On changes in his investment methodology, a subject we cover in detail in Quantitative Value:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years. I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession. I think we can do it successfully with a few techniques and simple principles. The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches. The thing that I have been emphasizing in my own work for the last few years has been the group approach. To try to buy groups of stocks that meet some simple criterion for being undervalued, regardless of the industry and with very little attention to the individual company. My recent article on three simple methods applied to common stocks was published in one of your Seminar Proceedings.

I am just finishing a 50-year study, the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group. I found the results were very good for 50 years. They certainly did twice as well as the Dow Jones. And so my enthusiasm has been transferred from the selective to the group approach. What I want is an earnings ratio twice as good as the bond interest ratio typically for most years. One can also apply a dividend criterion or an asset value criterion and get good results. My research indicates the best results come from simple earnings criterions.

We looked at the performance of Graham’s simple strategy in Quantitative Value. For more see my overview piece, Examining Benjamin Graham’s Record: Skill Or Luck?

In Quantitative Value we identify several problems with Graham’s simple strategy and examine several other strategies that outperform Graham’s simple strategy.

Hat tip to Tim Melvin @timmelvin

Read Full Post »

Abnormal Returns’ Tadas Viskanta has posted a great interview with my Quantitative Value co-author, the Turnkey Analyst Wes Gray:

AR: You write in the book that there are two arguments for value investing: “logical and empirical.” It seems like the value investing community heavily emphasizes the former as opposed to the latter. Why do you think that is?

WG: Human beings tend to favor good stories over evidence, but this can lead to problems. As Mark Twain says, “All you need is ignorance and confidence and the success is sure.”

This tendency to embrace stories might help explain why being “logical” is more heavily relied upon by investors: good logic makes a good story. Relying on the evidence, or being “empirical,” is underappreciated because it is sometimes counterintuitive. I’m actually a big fan of a logical story backed by empirical data. This is the essence of our book Quantitative Value. We present a compelling story on the value investment philosophy, but at each step along our journey we pepper our analysis with empirical analysis and academic rigor.

AR: You note in the book the importance of Ben Graham and how a continued application of his “simple value strategy” would still generate profits today. Have you seen the recent video about him? He seems to have been as interesting a guy as he was investor/teacher.

WG: As Toby and I conducted background research for the book, we became more and more convinced that Ben Graham was the original systematic value investor. In Quantitative Value we backtest a strategy Graham suggested in the 1976 Medical Economics Journal titled “The Simplest Way to Select Bargain Stocks.” We show that Graham’s strategy performed just as well over the past 40 years as it did in the 50 years prior to 1976. This is a remarkable “out-of-sample” test and highlights the robustness of a systematic value investment approach.

With respect to your question on the video: the recent video circulating the web reinforces our belief that Graham was an empiricist by nature and relied heavily on the scientific method to make his decisions. I also find it interesting that his discussions are so focused on the fallibility of human decision-making ability. Many of the ideas and concepts Graham mentioned regarding human behavior have been backed by behavioral finance studies written in the past 20 years. He was well ahead of his time.

AR: The value community loves to continue to claim Warren Buffett as a disciple. However today he would be best described as a “quality and price” investor more than anything. What is the relevance of how Warren Buffett’s approach to investing has changed over time?

WG: The irony here is that, on average, Warren Buffett’s “new” approach to value investing is inferior to the approach originally described by Ben Graham. Buffett describes an approach that is broader in perspective and allows an investor to move along the cheapness axis to capture high quality firms. Graham, who studied the actual data, was much more focused on absolute cheapness. This concept is highlighted in many of his recommended investment approaches, where the foundation of the strategy prescribed is to simply purchase stocks under a specific price point (e.g., P/E <10).

After studying data from the post-Graham era, we have come to the same conclusion as Graham: cheapness is everything; quality is a nice-to-have. For example, the risk-adjusted returns on the higher-priced, but very high quality firms (i.e., Buffett firms) are much worse on a risk-adjusted basis than the returns on a basket of the cheapest firms that are of extreme low quality (i.e., Graham cigar butts). In the end, if you aren’t exclusively digging in the bargain bin, you’re missing out on potential outperformance.

Read the rest of the interview here. As Tadas says, the answers are illuminating.

For more on Quantitative Value, read my overview here.

Read Full Post »

Last week I wrote about the performance of one of Benjamin Graham’s simple quantitative strategies over the 37 years since he described it (Examining Benjamin Graham’s Record: Skill Or Luck?). In the original article Graham proposed two broad approaches, the second of which we examine in Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors. The first approach Graham detailed in the original 1934 edition of Security Analysis (my favorite edition)—“net current asset value”:

My first, more limited, technique confines itself to the purchase of common stocks at less than their working-capital value, or net-current asset value, giving no weight to the plant and other fixed assets, and deducting all liabilities in full from the current assets. We used this approach extensively in managing investment funds, and over a 30-odd year period we must have earned an average of some 20 per cent per year from this source. For a while, however, after the mid-1950’s, this brand of buying opportunity became very scarce because of the pervasive bull market. But it has returned in quantity since the 1973–74 decline. In January 1976 we counted over 300 such issues in the Standard & Poor’s Stock Guide—about 10 per cent of the total. I consider it a foolproof method of systematic investment—once again, not on the basis of individual results but in terms of the expectable group outcome.

In 2010 I examined the performance of Graham’s net current asset value strategy with Sunil Mohanty and Jeffrey Oxman of the University of St. Thomas. The resulting paper is embedded below:

While Graham found this strategy was “almost unfailingly dependable and satisfactory,” it was “severely limited in its application” because the stocks were too small and infrequently available. This is still the case today. There are several other problems with both of Graham’s strategies. In Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors Wes and I discuss in detail industry and academic research into a variety of improved fundamental value investing methods, and simple quantitative value investment strategies. We independently backtest each method, and strategy, and combine the best into a sample quantitative value investment model.

The book can be ordered from Wiley Finance, Amazon, or Barnes and Noble.

[I am an Amazon Affiliate and receive a small commission for the sale of any book purchased through this site.]

Read Full Post »

Two recent articles, Was Benjamin Graham Skillful or Lucky? (WSJ), and Ben Graham’s 60-Year-Old Strategy Still Winning Big (Forbes), have thrown the spotlight back on Benjamin Graham’s investment strategy and his record. In the context of Michael Mauboussin’s new book The Success Equation, Jason Zweig asks in his WSJ Total Return column whether Graham was lucky or skillful, noting that Graham admitted he had his fair share of luck:

We tend to think of the greatest investors – say, Peter Lynch, George Soros, John Templeton, Warren Buffett, Benjamin Graham – as being mostly or entirely skillful.

Graham, of course, was the founder of security analysis as a profession, Buffett’s professor and first boss, and the author of the classic book The Intelligent Investor. He is universally regarded as one of the best investors of the 20th century.

But Graham, who outperformed the stock market by an annual average of at least 2.5 percentage points for more than two decades, coyly admitted that much of his remarkable track record may have been due to luck.

John Reese, in his Forbes’ Intelligent Investing column, notes that Graham’s Defensive Investor strategy has continued to outpace the market over the last decade:

Known as the “Father of Value Investing”—and the mentor of Warren Buffett—Graham’s investment firm posted annualized returns of about 20% from 1936 to 1956, far outpacing the 12.2% average return for the broader market over that time.

But the success of Graham’s approach goes far beyond even that lengthy period. For nearly a decade, I have been tracking a portfolio of stocks picked using my Graham-inspired Guru Strategy, which is based on the “Defensive Investor” criteria that Graham laid out in his 1949 classic, The Intelligent Investor. And, since its inception, the portfolio has returned 224.3% (13.3% annualized) vs. 43.0% (3.9% annualized) for the S&P 500.

Even with all of the fiscal cliff and European debt drama in 2012, the Graham-based portfolio has had a particularly good year. While the S&P 500 has notched a solid 13.7% gain (all performance figures through Dec. 17), the Graham portfolio is up more than twice that, gaining 28.5%.

Reese’s experiment might suggest that Graham is more skillful than lucky.

In our recently released book, Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors, Wes and I examine one of Graham’s simple strategies in the period after he described it to the present day. Graham gave an interview to the Financial Analysts Journal in 1976, some 40 years after the publication of Security Analysis. He was asked whether he still selected stocks by carefully studying individual issues, and responded:

I am no longer an advocate of elaborate techniques of security analysis in order to find superior value opportunities. This was a rewarding activity, say, 40 years ago, when our textbook “Graham and Dodd” was first published; but the situation has changed a great deal since then. In the old days any well-trained security analyst could do a good professional job of selecting undervalued issues through detailed studies; but in the light of the enormous amount of research now being carried on, I doubt whether in most cases such extensive efforts will generate sufficiently superior selections to justify their cost. To that very limited extent I’m on the side of the “efficient market” school of thought now generally accepted by the professors.

Instead, Graham proposed a highly simplified approach that relied for its results on the performance of the portfolio as a whole rather than on the selection of individual issues. Graham believed that such an approach “[combined] the three virtues of sound logic, simplicity of application, and an extraordinarily good performance record.”

Graham said of his simplified value investment strategy:

What’s needed is, first, a definite rule for purchasing which indicates a priori that you’re acquiring stocks for less than they’re worth. Second, you have to operate with a large enough number of stocks to make the approach effective. And finally you need a very definite guideline for selling.

What did Graham believe was the simplest way to select value stocks? He recommended that an investor create a portfolio of a minimum of 30 stocks meeting specific price-to-earnings criteria (below 10) and specific debt-to-equity criteria (below 50 percent) to give the “best odds statistically,” and then hold those stocks until they had returned 50 percent, or, if a stock hadn’t met that return objective by the “end of the second calendar year from the time of purchase, sell it regardless of price.”

Graham said that his research suggested that this formula returned approximately 15 percent per year over the preceding 50 years. He cautioned, however, that an investor should not expect 15 percent every year. The minimum period of time to determine the likely performance of the strategy was five years.

Graham’s simple strategy sounds almost too good to be true. Sure, this approach worked in the 50 years prior to 1976, but how has it performed in the age of the personal computer and the Internet, where computing power is a commodity, and access to comprehensive financial information is as close as the browser? We decided to find out. Like Graham, Wes and I used a price-to-earnings ratio cutoff of 10, and we included only stocks with a debt-to-equity ratio of less than 50 percent. We also applied his trading rules, selling a stock if it returned 50 percent or had been held in the portfolio for two years.
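The whole strategy reduces to two small rules, which makes it easy to express in code. A minimal sketch, assuming Python and hypothetical fundamentals — this is not the backtest code from the book:

```python
def graham_screen(stocks, max_pe=10.0, max_debt_to_equity=0.5):
    # Buy rule: positive earnings with a P/E below 10, and
    # debt-to-equity below 50 percent.
    return [ticker for ticker, (pe, de) in stocks.items()
            if 0 < pe < max_pe and de < max_debt_to_equity]

def should_sell(total_return, months_held, target=0.5, max_months=24):
    # Sell rule: exit at a 50 percent gain, or after roughly two years
    # regardless of price. (Graham's rule uses the end of the second
    # calendar year; a flat 24 months is a simplification.)
    return total_return >= target or months_held >= max_months

# Hypothetical screen inputs: (P/E, debt-to-equity) by ticker.
stocks = {"AAA": (8.2, 0.30), "BBB": (14.0, 0.10), "CCC": (6.5, 0.80), "DDD": (9.1, 0.45)}
print(graham_screen(stocks))  # ['AAA', 'DDD']
print(should_sell(0.55, 6))   # True: the 50 percent target was hit
print(should_sell(0.10, 24))  # True: two years have elapsed
```

Note that both rules are mechanical — there is no judgment call anywhere, which is what lets the strategy be applied consistently through discouraging stretches.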

Figure 1.2 below taken from our book shows the cumulative performance of Graham’s simple value strategy plotted against the performance of the S&P 500 for the period 1976 to 2011:

Graham Strategy

Amazingly, Graham’s simple value strategy has continued to outperform.

Table 1.2 presents the results from our study of the simple Graham value strategy:

Graham Chart

Graham’s strategy turns $100 invested on January 1, 1976, into $36,354 by December 31, 2011, which represents an average yearly compound rate of return of 17.80 percent—outperforming even Graham’s estimate of approximately 15 percent per year. This compares favorably with the performance of the S&P 500 over the same period, which would have turned $100 invested on January 1, 1976, into $4,351 by December 31, 2011, an average yearly compound rate of return of 11.05 percent. The performance of the Graham strategy is attended by very high volatility, 23.92 percent versus 15.40 percent for the total return on the S&P 500.
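Those compound rates follow directly from the dollar figures in the table; a quick sanity check in Python:

```python
def cagr(start_value, end_value, years):
    # Average yearly compound rate of return.
    return (end_value / start_value) ** (1.0 / years) - 1.0

years = 36  # January 1, 1976 through December 31, 2011
print(round(100 * cagr(100, 36354, years), 2))  # Graham strategy, roughly 17.8 percent
print(round(100 * cagr(100, 4351, years), 2))   # S&P 500, roughly 11.05 percent
```

The backtested compound rate of roughly 17.8 percent comfortably exceeds even Graham’s own estimate of about 15 percent per year.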

The evidence suggests that Graham’s simplified approach to value investment continues to outperform the market. I think it’s a reasonable argument for skill on the part of Graham.

It’s useful to consider why Graham’s simple strategy continues to outperform. At a superficial level, it’s clear that some proxy for price—like a P/E ratio below 10—combined with some proxy for quality—like a debt-to-equity ratio below 50 percent—is predictive of future returns. But is something else at work here that might provide us with a deeper understanding of the reasons for the strategy’s success? Is there some other reason for its outperformance beyond simple awareness of the strategy? We think so.

Graham’s simple value strategy has concrete rules that have been applied consistently in our study. Even through the years when the strategy underperformed the market, our study assumed that we continued to apply it, regardless of how discouraged or scared we might have felt had we actually used it during those periods. Is it possible that the very consistency of the strategy is an important reason for its success? We believe so. A value investment strategy might provide an edge, but some other element is required to fully exploit that advantage.

Warren Buffett and Charlie Munger believe that the missing ingredient is temperament. Says Buffett, “Success in investing doesn’t correlate with IQ once you’re above the level of 125. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing.”

Was Graham skillful or lucky? Yes. Does the fact that he was lucky detract from his extraordinary skill? No, because he purposefully concentrated on the undervalued tranche of stocks that provides asymmetric outcomes: good luck in the fortunes of his holdings helped his portfolio disproportionately on the upside, and bad luck didn’t hurt his portfolio much on the downside. That, in my opinion, is strong evidence of skill.

For more like this, please see our new book, Quantitative Value: A Practitioner’s Guide to Automating Intelligent Investment and Eliminating Behavioral Errors.

Read Full Post »

On Monday I presented an expanded version of my white paper “Simple But Not Easy: The Case For Quantitative Value” to the UC Davis MBA value investing class.

Click the link to be taken to the UC Davis video:

Presentation to UC Davis Value Investing Class

A special thank you to the instructors, Jacob Taylor and Lonnie Rush, and to the UCD value investing class. Go Aggies!

Read Full Post »

Research Affiliates’ Jason Hsu has a new article Selling Hope (.pdf) in which he discusses the reason investors persist in the seemingly irrational behavior of paying high fees for active management despite the numerous studies that show most active managers fail to deliver “alpha” over time net of fees:

The empirical evidence that the average fund manager underperforms and the recent top-performing funds do not outperform subsequently are irrefutable. Why, then, do investors insist on paying for investment management expertise, which isn’t all that useful? Perhaps investors are not really that interested in holding their investment managers accountable for outperformance. The Economist’s Buttonwood column argues that investors might only be interested in securing advice that confirms their own investment beliefs. The false sense of security that comes from hearing a “professional” concurring with one’s own opinions on unpredictable affairs makes the randomness that is inherent in investing almost tolerable. Clearly, not all aspects of investment management are related to generating outperformance; many managers and advisors are really in the business of preventing their clients from making bad financial decisions, such as overconcentrating the portfolio, trading excessively or making decisions under emotional distress. Barber and Odean, in their 2000 Journal of Finance paper, found that aggressive self-directed investors underperform the market by an average of 6.5% per annum. These investors simply own too few stocks and trade too much due to overconfidence in their own stock-picking and market-timing skills. Jason Zweig, in his 2002 investigative report, documented that retail mutual fund investors underperformed the average mutual fund by 4.7% per annum. Again, this poor result is driven by investors actively switching between funds and market-timing their investment contributions.

Read the article here (.pdf).

See an earlier post on fundamental indexation.

H/T Tom Brakke’s @researchpuzzler

Read Full Post »

Greenbackd has been quiet over the last few days while I finished “Simple But Not Easy,” my latest white paper for Eyquem (embedded below). If you want to receive similar future missives, shoot me an email at greenbackd at gmail dot com. Thoughts, criticisms, and questions are all welcome too.

Read Full Post »
