
Posts Tagged ‘Net Net Stock’

Wes Gray’s Turnkey Analyst has a guest post from Paul Sepulveda in which Paul asks whether it’s possible to improve net net returns by removing the stocks with the highest risk of going to zero (the real losers).

Paul has an interesting approach:

My goal was to chop off the left tail of the distribution of returns. Piotroski uses his F-Score to achieve a similar goal among a universe of firms with low P/B (i.e., “value” firms). After collecting the data on recent net-net “cigar-butts”, I quickly realized something: about half of my list consisted of Chinese reverse-merger companies! These firms definitely had a decent shot of going to zero after shareholders realized Bernie Madoff was the CEO and Arthur Anderson was performing the audit work. I separated these companies from the remaining universe. For completeness, I also recorded market caps and Piotroski scores to create alternative net-net universes I could study.

Here are his results:

Paul has only six months of data, but the experiment is ongoing. He has some other interesting observations. See the rest of the post here.



I’m considering launching a subscription-only service aimed at identifying stocks similar to those in the old Wall Street’s Endangered Species reports. Like those reports, I’ll be seeking undervalued industrial companies where a catalyst in the form of a buy-out, strategic acquisition, liquidation or activist campaign might emerge to close the gap between price and value. The main point of difference between the old Piper Jaffray reports and the Greenbackd version will be that I will also include traditional Greenbackd-type stocks (net nets, sub-liquidation values etc.) to the extent that those types of opportunities are available. The cost will be between $500 and $1,000 per annum for 48 weekly emails with a list of around 30 to 50 stocks and some limited commentary.

If you would like to receive a free trial copy of the report, if and when it is produced, in exchange for feedback on its utility (or lack thereof), please send an email to greenbackd [at] gmail [dot] com. If there is sufficient interest in the report I’ll go ahead and produce the trial copy.


I’m a huge fan of James Montier’s work on the rationale for a quantitative investment strategy and global Graham net net investing. Miguel Barbosa of Simoleon Sense has a wonderful interview with Montier, covering his views on behavioral investing and value investment. Particularly interesting is Montier’s concept of “seductive details” and the implications for investors:

Miguel: Let’s talk about the concept of seductive details…can you give us an example of how investors are trapped by irrelevant information?

James Montier: The sheer amount of irrelevant information faced by investors is truly staggering. Today we find ourselves captives of the information age; anything you could possibly need to know seems to appear at the touch of a keypad. However, rarely, if ever, do we stop and ask ourselves exactly what we need to know in order to make a good decision.

Seductive details are the kind of information that seems important, but really isn’t. Let me give you an example. Today investors are surrounded by analysts who are experts in their fields. I once worked with an IT analyst who could take a PC apart in front of you and tell you what every little bit did. Fascinating stuff, to be sure, but did it help make better investment decisions? Clearly not. Did the analyst know anything at all about valuing a company or a stock? I’m afraid not. Yet he was immensely popular because he provided seductive details.

Montier’s “seductive details” is reminiscent of the discussion in Nassim Nicholas Taleb’s Fooled by Randomness on the relationship between the amount of information available to experts, the accuracy of judgments they make based on this information, and the experts’ confidence in the accuracy of those judgments. Intuition suggests that having more information should increase the accuracy of predictions about uncertain outcomes. In reality, more information decreases the accuracy of predictions while simultaneously increasing the confidence that the prediction is correct. One such example is given in the paper The illusion of knowledge: When more information reduces accuracy and increases confidence (.pdf) by Crystal C. Hall, Lynn Ariss, and Alexander Todorov. In that study, participants were asked to predict basketball games sampled from a National Basketball Association season:

All participants were provided with statistics (win record, halftime score), while half were additionally given the team names. Knowledge of names increased the confidence of basketball fans consistent with their belief that this knowledge improved their predictions. Contrary to this belief, it decreased the participants’ accuracy by reducing their reliance on statistical cues. One of the factors contributing to this underweighting of statistical cues was a bias to bet on more familiar teams against the statistical odds. Finally, in a real betting experiment, fans earned less money if they knew the team names while persisting in their belief that this knowledge improved their predictions.

This is not an isolated example. In Effects of amount of information on judgment accuracy and confidence, by Claire I. Tsai, Joshua Klayman, and Reid Hastie, the authors examined two further studies demonstrating that when decision makers receive more information, their confidence increases more than their accuracy, producing “substantial confidence–accuracy discrepancies.” The CIA has also examined the phenomenon. In Chapter 5 of Psychology of Intelligence Analysis, Do you really need more information?, the author argues against “the often-implicit assumption that lack of information is the principal obstacle to accurate intelligence judgments:”

Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.

Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.

Click here to see the Simoleon Sense interview.


Jon Heller of the superb Cheap Stocks, one of the inspirations for this site, has published the results of his two year net net index experiment in Winding Down The Cheap Stocks 21 Net Net Index; Outperforms Russell Microcap by 1371 bps, S&P 500 by 2537 bps.

The “CS 21 Net/Net Index” was “the first index designed to track net/net performance.” It was a simply constructed, capitalization-weighted index comprising the 21 largest net nets by market capitalization at inception on February 15, 2008. Jon had a few other restrictions on inclusion in the index, described in his introductory post:

  • Market Cap is below net current asset value, defined as: Current Assets – Current Liabilities – all other long term liabilities (including preferred stock, and minority interest where applicable)
  • Stock Price above $1.00 per share
  • Companies have an operating business; acquisition companies were excluded
  • Minimum average 100 day volume of at least 5000 shares (light we know, but welcome to the wonderful world of net/nets)
  • Index constituents were selected by market cap. The index is comprised of the “largest” companies meeting the above criteria.

The Index is naïve in construction in that:

  • It will be rebalanced annually, and companies no longer meeting the net/net criteria will remain in the index until annual rebalancing.
  • Only bankruptcies, de-listings, or acquisitions will result in replacement
  • Does not discriminate by industry weighting—some industries may have heavy weights.

If a company was acquired, it was not replaced and the proceeds were simply held in cash. Further, stocks were not replaced if they ceased being net nets.
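The construction and maintenance rules above can be sketched in a few lines of Python. This is a minimal illustration of the mechanics, not Jon's actual code; the tickers, market caps and returns are hypothetical.

```python
# Minimal sketch of a cap-weighted, buy-and-hold net net index along the
# lines described above. Tickers, market caps and returns are hypothetical.

def build_index(candidates, n=21):
    """Select the n largest candidates by market cap and weight by cap."""
    # candidates: (ticker, market_cap) pairs that already pass the screens
    # (price > $1, operating business, minimum volume, cap < NCAV)
    selected = sorted(candidates, key=lambda c: c[1], reverse=True)[:n]
    total_cap = sum(cap for _, cap in selected)
    return {ticker: cap / total_cap for ticker, cap in selected}

def index_return(weights, stock_returns):
    """Cap-weighted return; acquired positions held as cash return 0%."""
    return sum(w * stock_returns.get(t, 0.0) for t, w in weights.items())

weights = build_index(
    [("AAA", 500), ("BBB", 300), ("CCC", 200), ("DDD", 50)], n=3)
print(weights)  # DDD, the smallest, is excluded
print(index_return(weights, {"AAA": 0.10, "BBB": -0.20, "CCC": 0.05}))
```

Because positions are never replaced, an acquisition simply drops out of the returns dictionary and contributes zero, mirroring the cash treatment described above.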

Says Jon of the CS 21 Net/Net Index performance:

This was simply an experiment in order to see how net/nets at a given time would perform over the subsequent two years.

The results are in, and while it was not what we’d originally hoped for, it does lend credence to the long-held notion that net/nets can outperform the broader markets.

The Cheap Stocks 21 Net Net Index finished the two-year period relatively flat, gaining 5.1%. During the same period, the Russell Microcap Index was down 8.61% and the S&P 500 was down 20.27%.
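The basis-point figures in the post's title follow directly from those returns:

```python
# Outperformance in basis points: index return minus benchmark return,
# times 100 bps per percentage point. Return figures are from the post.
net_net_index = 5.1
russell_microcap = -8.61
sp500 = -20.27

bps_vs_russell = round((net_net_index - russell_microcap) * 100)  # 1371
bps_vs_sp500 = round((net_net_index - sp500) * 100)               # 2537
print(bps_vs_russell, bps_vs_sp500)
```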

Here are the components, including the weightings and returns of each:

  • Adaptec Inc (ADPT), Computer Systems, weight 18.72%: +7.86%
  • Audiovox Corp (VOXX), Electronics, weight 12.20%: -29.28%
  • Trans World Entertainment (TWMC), Retail-Music and Video, weight 7.58%: -69.55%
  • Finish Line Inc (FINL), Retail-Apparel, weight 6.30%: +350.83%
  • Nu Horizons Electronics (NUHC), Electronics Wholesale, weight 5.76%: -25.09%
  • Richardson Electronics (RELL), Electronics Wholesale, weight 5.09%: +43.27%
  • Pomeroy IT Solutions (PMRY), acquired, weight 4.61%: -3.8%
  • Ditech Networks (DITC), Communication Equip, weight 4.31%: -56.67%
  • Parlux Fragrances (PARL), Personal Products, weight 3.92%: -51.39%
  • InFocus Corp (INFS), Computer Peripherals, weight 3.81%: acquired
  • Renovis Inc (RNVS), Biotech, weight 3.80%: acquired
  • Leadis Technology Inc (LDIS), Semiconductor-Integrated Circuits, weight 3.47%: -92.05%
  • Replidyne Inc (RDYN), became Cardiovascular Systems (CSII), Biotech, weight 3.31%: [Edit: +126.36%]
  • Tandy Brands Accessories Inc (TBAC), Apparel/Footwear/Accessories, weight 2.94%: -57.79%
  • FSI International Inc (FSII), Semiconductor Equip, weight 2.87%: +66.47%
  • Anadys Pharmaceuticals Inc (ANDS), Biotech, weight 2.49%: +43.75%
  • MediciNova Inc (MNOV), Biotech, weight 2.33%: +100%
  • Emerson Radio Corp (MSN), Electronics, weight 1.71%: +118.19%
  • Handleman Co (HDL), Music-Wholesale, weight 1.66%: -88.67%
  • Chromcraft Revington Inc (CRC), Furniture, weight 1.62%: -54.58%
  • Charles & Colvard Ltd (CTHR), Jewel Wholesale, weight 1.50%: -7.41%
  • Cash, weight 8.58%

Jon is putting together a new net net index, which I’ll follow if he releases it into the wild.


Jae Jun at Old School Value has updated his great post back-testing the performance of net current asset value (NCAV) against “net net working capital” (NNWC) by refining the back-test (see NCAV NNWC Backtest Refined). His new back-test increases the rebalancing period to 6 months from 4 weeks, excludes companies with daily volume below 30,000 shares, and introduces the 66% margin of safety to the NCAV stocks (I wasn’t aware that this was missing from yesterday’s back-test; its absence would explain why the performance of the NCAV stocks was so poor).

Jae Jun’s original back-test compared the performance of NCAV and NNWC stocks over the last three years. He calculated NNWC by discounting the current asset value of stocks in line with Graham’s liquidation value discounts, but excludes the “Fixed and miscellaneous assets” included by Graham. Here’s Jae Jun’s NNWC formula:

NNWC = Cash + (0.75 x Accounts receivable) + (0.5 x Inventory)

Here’s Graham’s suggested discounts (extracted from Chapter XLIII of Security Analysis: The Classic 1934 Edition “Significance of the Current Asset Value”):

As I noted yesterday, excluding the “Fixed and miscellaneous assets” from the liquidating value calculation makes for an exceptionally austere valuation.
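For concreteness, here is a sketch of the NNWC calculation alongside plain NCAV. Subtracting total liabilities is my assumption (consistent with the “net” in NNWC and with Graham’s liquidation approach); the balance-sheet figures are hypothetical.

```python
# Hedged sketch of NCAV and Jae Jun's NNWC, per share. Subtracting total
# liabilities is an assumption; all figures below are hypothetical.

def ncav(current_assets, total_liabilities):
    """Graham net current asset value: current assets less all liabilities."""
    return current_assets - total_liabilities

def nnwc(cash, receivables, inventory, total_liabilities):
    """NNWC: Graham's liquidation discounts on current assets, excluding
    fixed and miscellaneous assets, less all liabilities (assumed)."""
    return cash + 0.75 * receivables + 0.5 * inventory - total_liabilities

# Hypothetical company, figures in $m
cash, recv, inv, liab, shares, price = 20.0, 10.0, 8.0, 15.0, 10.0, 1.00
ncav_ps = ncav(cash + recv + inv, liab) / shares   # 2.30 per share
nnwc_ps = nnwc(cash, recv, inv, liab) / shares     # 1.65 per share
print(price < (2 / 3) * nnwc_ps)  # passes a 66% margin-of-safety test?
```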

Jae Jun has refined his screening criteria as follows:

  • Volume is greater than 30k
  • NCAV margin of safety included
  • Slippage increased to 1%
  • Rebalance frequency changed to 6 months
  • Test period remains at 3 years
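Those criteria slot into a simple rebalancing loop. The sketch below is my own illustration of the mechanics, not Jae Jun's code; equal weighting is assumed, and the data-access functions and returns are hypothetical stand-ins.

```python
# Minimal periodic-rebalance backtest with slippage, per the refined
# criteria above. Equal weighting is assumed; the data functions are
# hypothetical stand-ins for a real screen and price history.

SLIPPAGE = 0.01  # 1% per rebalance, as in the refined test

def run_backtest(periods, get_candidates, get_period_return, start=100.0):
    """Rebalance into the screen's stocks each period, paying slippage."""
    value = start
    for p in range(periods):
        stocks = get_candidates(p)  # e.g. volume > 30k, price < 2/3 NCAV
        if not stocks:
            continue  # sit in cash when the screen is empty
        gross = sum(get_period_return(s, p) for s in stocks) / len(stocks)
        value *= (1 + gross) * (1 - SLIPPAGE)
    return value

# Toy example: two 6-month periods with fixed returns
final = run_backtest(2, lambda p: ["AAA", "BBB"],
                     lambda s, p: {"AAA": 0.10, "BBB": 0.30}[s])
print(round(final, 2))  # 100 * (1.20 * 0.99)^2 = 141.13
```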

Here are Jae Jun’s back-test results with the new criteria:

For the period 2001 to 2004

For the period 2004 to 2007

For the period 2007 to 2010


It’s an impressive analysis by Jae Jun. Dividing the returns into three periods is very helpful. While the returns overall are excellent, there were some serious smash-ups along the way, particularly over the February 2007 to March 2009 period. As Klarman and Taleb have both discussed, it demonstrates that your starting date as an investor makes a big difference to your impression of the markets, or of whatever theory you use to invest. Compare, for example, the experiences of two different NCAV investors, one starting in February 2003 and the second starting in February 2007. The 2003 investor was up 500% in the first year, and had a good claim to possessing some investment genius. The 2007 investor was feeling very ill in March 2009, down around 75% and considering a career in truck driving. Both were following the same strategy, and so neither really had any basis for his conclusion. I doubt that thought consoles the trucker.

Jae Jun’s Old School Value NNWC NCAV Screen is available here (it’s free).


Jae Jun at Old School Value has a great post, NCAV NNWC Screen Strategy Backtest, comparing the performance of net current asset value stocks (NCAV) and “net net working capital” (NNWC) stocks over the last three years. To arrive at NNWC, Jae Jun discounts the current asset value of stocks in line with Graham’s liquidation value discounts, but excludes the “Fixed and miscellaneous assets” included by Graham. Here’s Jae Jun’s NNWC formula:

NNWC = Cash + (0.75 x Accounts receivable) + (0.5 x Inventory)

Here’s Graham’s suggested discounts (extracted from Chapter XLIII of Security Analysis: The Classic 1934 Edition “Significance of the Current Asset Value”):

Excluding the “Fixed and miscellaneous assets” from the NNWC calculation provides an austere valuation indeed (it makes Graham look like a pie-eyed optimist, which is saying something). The good news is that Jae Jun’s NNWC methodology seems to have performed exceptionally well over the period analyzed.

Jae Jun’s back-test methodology was to create two concentrated portfolios, one of 15 stocks and the other of 10 stocks. He rolled the positions on a four-weekly basis, which may be difficult to do in practice (as Aswath Damodaran pointed out yesterday, many a slip ’twixt the cup and the lip renders a promising back-tested strategy useless in the real world). Here’s the performance of the 15 stock portfolio:

“NNWC Incr.” is “NNWC Increasing,” which Jae Jun describes as follows:

NNWC is positive and the latest NNWC has increased compared to the previous quarter. In this screen, NNWC doesn’t have to be less than current market price. Since the requirement is that NNWC is greater than 0, most large caps automatically fail to make the cut due to the large quantity of intangibles, goodwill and total debt.
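As a rough sketch, that screen's logic might look like the following (the data structure is a hypothetical stand-in, not Jae Jun's implementation):

```python
# "NNWC Increasing" as described: NNWC positive and above the prior
# quarter's figure; no requirement that price sit below NNWC.

def nnwc_increasing(stocks):
    """stocks maps ticker -> (latest_nnwc, prior_quarter_nnwc)."""
    return [t for t, (latest, prior) in stocks.items()
            if latest > 0 and latest > prior]

universe = {
    "AAA": (12.0, 10.0),  # positive and rising: passes
    "BBB": (8.0, 9.0),    # positive but falling: fails
    "CCC": (-2.0, -5.0),  # rising but negative: fails
}
print(nnwc_increasing(universe))  # ['AAA']
```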

Both the NNWC and NNWC Increasing portfolios delivered exceptional returns, up 228% and 183% respectively, while the S&P 500 was off 26%. The performance of the NCAV portfolio was a surprise, eking out just a 5% gain over the period, which is nothing to write home about, but still significantly better than the S&P 500.

The 10 stock portfolio’s returns are simply astonishing:

Jae Jun writes:

An original $100 would have become

  • NCAV: $103
  • NNWC: $544
  • NNWC Incr: $503
  • S&P500: $74

That’s a gain of over 400% for NNWC stocks!
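Those ending values imply striking annualized rates over the three-year window:

```python
# CAGR implied by turning $100 into each ending value over three years.
# Ending values are the figures from the post above.
ending = {"NCAV": 103, "NNWC": 544, "NNWC Incr": 503, "S&P500": 74}
years = 3
cagr = {name: (v / 100) ** (1 / years) - 1 for name, v in ending.items()}
print({name: f"{r:.1%}" for name, r in cagr.items()})
```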

Amazing stuff. It would be interesting to see a full academic study on the performance of NNWC stocks, perhaps with holding periods in line with Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update for comparison. You can see Jae Jun’s Old School Value NNWC NCAV Screen here (it’s free). He’s also provided a list of the top 10 NNWC stocks and top 10 stocks with increasing NNWC in the NCAV NNWC Screen Strategy Backtest post.

Buy my book The Acquirer’s Multiple: How the Billionaire Contrarians of Deep Value Beat the Market on Kindle, paperback, and Audible.

Here’s your book for the fall if you’re on global Wall Street. Tobias Carlisle has hit a home run deep over left field. It’s an incredibly smart, dense, 213 pages on how to not lose money in the market. It’s your Autumn smart read. –Tom Keene, Bloomberg’s Editor-At-Large, Bloomberg Surveillance, September 9, 2014.

Click here if you’d like to read more on The Acquirer’s Multiple, or connect with me on Twitter, LinkedIn or Facebook. Check out the best deep value stocks in the largest 1000 names for free on the deep value stock screener at The Acquirer’s Multiple®.

 


Aswath Damodaran, a Professor of Finance at the Stern School of Business, has an interesting post on his blog Musings on Markets, Transaction costs and beating the market. Damodaran’s thesis is that transaction costs – broadly defined to include brokerage commissions, spread and the “price impact” of trading (which I believe is an important issue for some strategies) – foil in the real world investment strategies that beat the market in back-tests. He argues that transaction costs are also the reason why the “average active portfolio manager” underperforms the index by about 1% to 1.5%.

I agree with Damodaran. The long-term, successful practical application of any investment strategy is difficult, and is made more so by all of the frictional costs that the investor encounters. That said, I see no reason why a systematic application of some value-based investment strategies should not outperform the market even after taking into account those transaction costs and taxes. That’s a bold claim, and it demands equally extraordinary evidence in support, which I do not possess. Regardless, here’s my take on Damodaran’s article.

First, Damodaran makes the point that even well-researched, back-tested, market-beating strategies underperform in practice:

Most of these beat-the-market approaches, and especially the well researched ones, are backed up by evidence from back testing, where the approach is tried on historical data and found to deliver “excess returns”. Ergo, a money making strategy is born.. books are written.. mutual funds are created.

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies, does not beat the market and delivers about 1-1.5% less than the index. That number has remained surprisingly stable over the last four decades and has persisted through bull and bear markets. Worse, this underperformance cannot be attributed to “bad” portfolio managers who drag the average down, since there is very little consistency in performance. Winners this year are just as likely to be losers next year…

Then he explains why he believes market-beating strategies that work on paper fail in the real world. The answer? Transaction costs:

So, why do portfolios that perform so well in back testing not deliver results in real time? The biggest culprit, in my view, is transactions costs, defined to include not only the commission and brokerage costs but two more significant costs – the spread between the bid price and the ask price and the price impact you have when you trade. The strategies that seem to do best on paper also expose you the most to these costs. Consider one simple example: Stocks that have lost the most of the previous year seem to generate much better returns over the following five years than stocks that have done the best. This “loser” stock strategy was first listed in the academic literature in the mid-1980s and greeted as vindication by contrarians. Later analysis showed, though, that almost all of the excess returns from this strategy come from stocks that have dropped to below a dollar (the biggest losing stocks are often susceptible to this problem). The bid-ask spread on these stocks, as a percentage of the stock price, is huge (20-25%) and the illiquidity can also cause large price changes on trading – you push the price up as you buy and the price down as you sell. Removing these stocks from your portfolio eliminated almost all of the excess returns.
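To see how a 20-25% spread eats a strategy's paper returns, consider a round trip that buys at the ask and sells at the bid. This is my own illustration with hypothetical numbers, not Damodaran's calculation:

```python
# Round-trip effect of a proportional bid-ask spread. With spread s quoted
# as a fraction of the mid price, you buy at mid*(1 + s/2) and sell at
# mid*(1 + gross)*(1 - s/2).

def round_trip_return(gross_return, spread):
    buy = 1 + spread / 2
    sell = (1 + gross_return) * (1 - spread / 2)
    return sell / buy - 1

# A 30% paper gain on a stock with a 25% spread nets barely 1%:
print(f"{round_trip_return(0.30, 0.25):.2%}")
# And with no price move at all, the round trip loses over a fifth:
print(f"{round_trip_return(0.00, 0.25):.2%}")
```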

In support of his thesis, Damodaran gives the example of Value Line and its mutual funds:

In perhaps the most telling example of slips between the cup and lip, Value Line, the data and investment services firm, got great press when Fischer Black, noted academic and believer in efficient markets, did a study where he indicated that buying stocks ranked 1 in the Value Line timeliness indicator would beat the market. Value Line, believing its own hype, decided to start mutual funds that would invest in its best ranking stocks. During the years that the funds have been in existence, the actual funds have underperformed the Value Line hypothetical fund (which is what it uses for its graphs) significantly.

Damodaran’s argument is particularly interesting to me in the context of my recent series of posts on quantitative value investing. For those new to the site, my argument is that a systematic application of the deep value methodologies like Benjamin Graham’s liquidation strategy (for example, as applied in Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update) or a low price-to-book strategy (as described in Lakonishok, Shleifer, and Vishny’s Contrarian Investment, Extrapolation and Risk) can lead to exceptional long-term investment returns in a fund.

When Damodaran refers to “the price impact you have when you trade” he highlights a very important reason why a strategy in practice will underperform its theoretical results. As I noted in my conclusion to Intuition and the quantitative value investor:

The challenge is making the sample mean (the portfolio return) match the population mean (the screen). As we will see, the real world application of the quantitative approach is not as straight-forward as we might initially expect because the act of buying (selling) interferes with the model.

A strategy in practice will underperform its theoretical results for two reasons:

  1. The strategy in back-test doesn’t have to deal with what I call the “friction” it encounters in the real world. I define “friction” as brokerage, spread and tax, all of which take a mighty bite out of performance. The first two are Damodaran’s transaction costs; tax is an addition. Arguably spread is the most difficult to factor into a model prospectively. One can account for brokerage and tax in the model, but spread is always going to be unknowable before the event.
  2. The act of buying or selling interferes with the market (I think it’s a Schrodinger’s cat-like paradox, but then I don’t understand quantum superpositions). This is best illustrated at the micro end of the market. Those of us who traffic in the Graham sub-liquidation value boat trash learn to live with wide spreads and a lack of liquidity. We use limit orders and sit on the bid (ask) until we get filled. No-one is buying (selling) “at the market,” because, for the most part, there ain’t no market until we get on the bid (ask). When we do manage to consummate a transaction, we’re affecting the price. We’re doing our little part to return it to its underlying value, such is the wonderful phenomenon of value investing mean reversion in action. The back-test / paper-traded strategy doesn’t have to account for the effect its own buying or selling has on the market, and so should perform better in theory than it does in practice.
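The frictions in point 1 can be folded into a crude net-return estimate. The rates below are hypothetical illustrations, not estimates for any real strategy:

```python
# Crude sketch: gross return less per-trade frictions (scaled by annual
# turnover), then tax on any remaining gain. All rates are hypothetical.

def net_return(gross, brokerage=0.002, spread=0.03, tax_rate=0.25,
               turnover=1.0):
    after_costs = gross - turnover * (brokerage + spread)
    return after_costs * (1 - tax_rate) if after_costs > 0 else after_costs

# A 20% gross return with full annual turnover nets roughly 12.6%:
print(f"{net_return(0.20):.1%}")
```

The wide spreads typical of sub-liquidation stocks dominate the other terms here, which is why they are the hardest cost to model in advance.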

If ever the real-world application of an investment strategy should underperform its theoretical results, Graham liquidation value is where I would expect it to happen. The wide spreads and lack of liquidity mean that even a small, individual investor will likely underperform the back-test results. Note, however, that it does not necessarily follow that the Graham liquidation value strategy will underperform the market, just the model. I continue to believe that a systematic application of Graham’s strategy will beat the market in practice.

I have one small quibble with Damodaran’s otherwise well-argued piece. He writes:

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies, does not beat the market and delivers about 1-1.5% less than the index.

There’s a little rhetorical sleight of hand in this statement (which I’m guilty of on occasion in my haste to get a post finished). Evidence that the “average active portfolio manager” does not beat the market is not evidence that these strategies don’t beat the market in practice. I’d argue that the “average active portfolio manager” is not using these strategies. I don’t really know what they’re doing, but I’d guess the institutional imperative calls for them to hug the index and over- or under-weight particular industries, sectors or companies on the basis of a story (“Green is the new black,” “China will consume us back to the boom,” “house prices never go down,” “the new dot com economy will destroy the old bricks-and-mortar economy” etc.). Yes, most portfolio managers underperform the index on the order of 1% to 1.5%, but I think they do so because they are, in essence, buying the index and extracting from the index’s performance their own fees and other transaction costs. They are not using the various strategies identified in the academic or popular literature. That small point aside, I think the remainder of the article is excellent.

In conclusion, I agree with Damodaran’s thesis that transaction costs in the form of brokerage commissions, spread and the “price impact” of trading make many apparently successful back-tested strategies unusable in the real world. I believe that the results of any strategy’s application in practice will underperform its theoretical results because of friction and the paradox of Schrodinger’s cat’s brokerage account. That said, I still see no reason why a systematic application of Graham’s liquidation value strategy or LSV’s low price-to-book value strategy can’t outperform the market even after taking into account these frictional costs and, in particular, wide spreads.

Hat tip to the Ox.

