Further to my point that if your valuation models use forward estimates rather than twelve-month trailing data, you’re doing it wrong, here are the results of our Quantitative Value backtest on using consensus Institutional Brokers’ Estimate System (I/B/E/S) forecasts of EPS for the fiscal year (available 1982 through 2010) for individual stock selection:
We analyze the compound annual growth rates of each price ratio over the 1964 to 2011 period for market capitalization–weighted decile portfolios.
…
The forward earnings estimate is the worst-performing metric by a wide margin. The performance of the forward earnings estimate is uniformly poor, earning a compound annual growth rate of just 8.63 percent on average and underperforming the Standard & Poor’s (S&P) 500 by almost 1 percent per year. Investors are wise to shy away from analyst forward earnings estimates when making investment decisions.
We focus our analysis on historical valuation metrics in Quantitative Value and leave the forward earnings estimates to the promoters on Wall Street.
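The portfolio-level arithmetic behind the quoted compound annual growth rates is straightforward. As a minimal sketch (the annual return series below are invented purely for illustration, not taken from the backtest):

```python
def cagr(annual_returns):
    """Compound annual growth rate implied by a series of annual returns."""
    growth = 1.0
    for r in annual_returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(annual_returns)) - 1.0

# Hypothetical annual returns for a cheapest-decile portfolio vs a benchmark
cheap_decile = [0.15, -0.05, 0.20, 0.10]
benchmark = [0.10, -0.02, 0.12, 0.08]
print(f"decile CAGR:    {cagr(cheap_decile):.2%}")
print(f"benchmark CAGR: {cagr(benchmark):.2%}")
```

Repeating this for each decile portfolio formed on each price ratio is all the headline comparison requires.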
[…] lets you know, for example, that using forward earnings estimates is a bad idea. Research also shows that screening for good five-year earnings growth leads to below average […]
[…] translating it into a forecast because Value Line’s analysts — like most of Wall Street (see my post on forward earnings) — are on average too optimistic. Note that the 2009 projection has turned out to be roughly […]
Thanks Russell and Tobias for the conversation. This is an issue I’ve pondered for quite some time.
I think the core of the issue stems from contrasting concepts of value – fundamental vs relative. That is, buying a company for a reasonable discount from what you think it is worth, vs buying what appears statistically cheap relative to other options.
I consider using analyst estimates of forward earnings a hybrid of the two: you are still relying on a market-based measure to determine a company’s value. Like using a CAPM-derived discount rate, the final number ends up depending on the mood of other market participants, and is therefore liable to the various biases and inefficiencies of human behaviour. So one solution would be to demand a greater margin of safety (that is, a wider gap between price and ‘value’) or increase your discount factor, as Russell suggests, to counteract the overconfidence bias. Alternatively, you could use it as the basis of a more sophisticated measure of price/value (which I think is what Russell actually does) in a relative approach.
The only problem I see with this approach is that you are necessarily restricted to a certain (more efficient) subset of the market. If you look outside the top 600-odd companies on the ASX, prior results and traditional analysis are all you can go on.
If you don’t have estimates, I think there’s a great deal more ‘value’ to be gained by focusing higher up the income statement than NPAT when looking at prior results. I’ve not back-tested this, but I would expect things like the accruals anomaly to work particularly well when there are no estimates of future earnings. If you add in some quality metrics (e.g., to filter out miners over-investing), this tends to throw up situations where metrics like ROE may have been impeded by some temporary setback (which might affect your valuation models negatively) but where the underlying cash flow and quality of earnings remain strong, or small growing companies where cash flow is improving at a faster rate than earnings and it’s just a matter of time before earnings (and therefore valuation) catch up. I find this sort of situation doesn’t show up all that often, but doing nothing seems to be a pretty important skill in this game!
Rob S, really enjoyed your comments. Thank you.
Although the 600-odd covered companies are of course more efficiently priced, there are still numerous opportunities for an outright value portfolio and even more for a relative portfolio approach. Both yield significant alpha over the long term and lend themselves more to the institutional style of investing, which is where I’m coming from.
Outside the 600 on the ASX are generally very small and micro-cap companies with poor liquidity. So whilst that part of the market is very inefficient, scaling a portfolio there is not possible.
As a personal investor, I think it’s more than fine to focus on the opportunity set as you do. Indeed it’s what I do.
But if we are discussing institutional, scalable strategies which still put most managers to shame, then I stand by my comments below and the quant-quality-value style filtering process.
It just makes sense to invest in quality issues when they are cheap. If you can quantify that, sit back and let the market do the rest.
You’ll find no disagreements with me there Russell, I don’t envy the job of institutional investors… (well, maybe a little bit!)
Thanks to people like Montier and sites like Greenbackd and Turnkey, I’m much less evangelical about ‘strong’ fundamental valuation methodologies, and much more open to systematic quant approaches that do their best to eliminate the various behavioural issues that come hardwired in the human brain.
Regardless, it’s a fascinating discipline, and I’m glad practitioners (and academics) pipe up occasionally and talk with the rest of us!
I have never undertaken tests of trailing earnings. Investing is about the future, and importantly future earnings, so I have never seen the point in analysing yesterday.
Why use IBES? Simple. It is impossible to cover 600 companies (that’s the ASX coverage; it’s much higher in the US) in the same detail as IBES can provide.
I don’t see how I can add any value to the 10-20 analysts covering a company – using their consensus numbers gives an “approximately right, not precisely wrong” base to work from.
Large investment banks have staff who focus on just a handful of companies, or one sector, each and every year, and know them intimately. As a jack of all trades, I’d be arrogant to even suggest I would have the time to model them as well.
Using their numbers to come up with a valuation (and using this as a filtering process) allows me to focus my research efforts on the cheapest companies. Very powerful.
Indeed, in the backtest, just taking the top 20-40 stocks on a monthly roll basis significantly outperforms my own biased selections and any benchmark you’d want to match up against the returns. Hence why I’m such a fan of quant-quality processes.
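A quant-quality filter of the kind described here can be sketched as a simple combined rank. The tickers, quality scores, and forward P/E figures below are hypothetical; the real process would feed in IBES consensus numbers:

```python
# Hypothetical universe: (ticker, quality score, price-to-forward-earnings)
universe = [
    ("AAA", 0.9, 8.0), ("BBB", 0.2, 30.0), ("CCC", 0.7, 12.0),
    ("DDD", 0.4, 25.0), ("EEE", 0.8, 10.0), ("FFF", 0.3, 18.0),
]

def screen(stocks, keep=3):
    """Rank by quality (higher is better) and cheapness (lower forward P/E
    is better), sum the two ranks, and keep the best `keep` names."""
    by_quality = sorted(stocks, key=lambda s: s[1], reverse=True)
    by_value = sorted(stocks, key=lambda s: s[2])
    q_rank = {s[0]: i for i, s in enumerate(by_quality)}
    v_rank = {s[0]: i for i, s in enumerate(by_value)}
    combined = sorted(stocks, key=lambda s: q_rank[s[0]] + v_rank[s[0]])
    return [s[0] for s in combined[:keep]]

print(screen(universe))  # the cheapest, highest-quality names survive
```

Rolling the surviving names into a portfolio each month is then a mechanical step.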
How do you account in your model for the findings of multiple researchers that, despite all the work undertaken by those forecasters, their forecasts are too optimistic (see, for example, Roy Batchelor’s “Bias in macroeconomic forecasts,” McKinsey’s “Equity Analysts Are Still Too Bullish” – be sure to check out Exhibit 2, which is an absolute shocker – and, more recently, the March 2013 JP Morgan Asset Management chart in my post)?
Have you ever tested the performance of your model against your own selections from the model’s output? There’s a growing body of research that suggests that simple statistical models outperform expert judgements even when the experts are provided with the output of the model. See my white paper, “Simple But Not Easy: The Case For Quantitative Value” for the sources, most notably Greenblatt’s study.
I’m aware of the work undertaken by McKinsey et al. and the general over-optimism. That said, I’m a practitioner, not a researcher, and again, given valuations are very rubbery numbers, the goal is to be ‘approximately right, not precisely wrong’.
Flip the argument away from trying to be precise (you’ll never get it right all the time): think in general terms, and treat avoiding the big losers as the ultimate big-picture route to material outperformance.
Take any index, strip out the 90-95% of lowest-quality, most expensive issues, and invest in the rest (say the top 40 issues), and the alpha/outperformance over time can be stunning.
And if you want to be conservative, given said over-optimism, you can increase the discount rate you apply to these forward IBES estimates – in the end the result we aim for is the same: a portfolio of materially higher-quality issues that are relatively better value / cheaper than the index we seek to outperform.
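The discount-rate haircut can be illustrated with a simple no-growth perpetuity valuation. The EPS figures and the roughly-10%-too-optimistic assumption are illustrative only, not taken from either commenter’s actual model:

```python
def perpetuity_value(forward_eps, discount_rate):
    """Value per share as a simple no-growth perpetuity of forward EPS."""
    return forward_eps / discount_rate

# Consensus forward EPS, assumed ~10% too optimistic vs a 'true' EPS of 1.00
consensus_eps = 1.10
naive = perpetuity_value(consensus_eps, 0.10)     # takes the estimate at face value
haircut = perpetuity_value(consensus_eps, 0.12)   # higher rate roughly offsets the bias
unbiased = perpetuity_value(1.00, 0.10)           # what the true EPS would imply
print(naive, haircut, unbiased)
```

Raising the discount rate from 10% to 12% pulls the valuation back below what the unbiased earnings figure would have produced, so the optimism in the consensus number no longer flatters the output.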
I run one portfolio where the model is left virtually untouched; I only strip out companies with known short-term issues that the quant process throws up.
I run another in which I also conduct a significant amount of qualitative research on top of the quality-quant filter before a company passes and makes it into the portfolio. It’s definitely very hard to keep up with the former approach. Though there is some ‘comfort’ in having the human overlay, I’d agree with your note that statistical models outperform expert judgements even when the experts are provided with the output of the model, though not by a wide margin to date.
No need to post again… It would be great one day to have US stocks run through the models and backtested using IBES, since it’s only being applied to Australian companies at present.
The issue is not with forward estimates; I find them very useful and much, much more so than trailing earnings. “If librarians were investors they’d be the richest.”
The issue is that you’re going about your tests the wrong way.
You can’t use a technique that has price as an input (P/E) and expect to get a rational valuation at the other end. Therefore your test is moot from day one.
I’ve got some very simple models (which use IBES). If the results are anywhere near what they have derived on Australian stocks, you’d look at the same forward-looking estimates in a much different light.
Russell, have you conducted backtests on your models using IBES data versus trailing twelve-month data? Why would you include someone else’s estimate of earnings in your model? Why wouldn’t you do that work yourself?