Archive for February, 2010

Jon Heller of the superb Cheap Stocks, one of the inspirations for this site, has published the results of his two year net net index experiment in Winding Down The Cheap Stocks 21 Net Net Index; Outperforms Russell Microcap by 1371 bps, S&P 500 by 2537 bps.

The “CS 21 Net/Net Index” was “the first index designed to track net/net performance.” It was a simply constructed, capitalization-weighted index comprising the 21 largest net nets by market capitalization at inception on February 15, 2008. Jon had a few other restrictions on inclusion in the index, described in his introductory post:

  • Market Cap is below net current asset value, defined as: Current Assets – Current Liabilities – all other long term liabilities (including preferred stock, and minority interest where applicable)
  • Stock Price above $1.00 per share
  • Companies have an operating business; acquisition companies were excluded
  • Minimum average 100 day volume of at least 5000 shares (light we know, but welcome to the wonderful world of net/nets)
  • Index constituents were selected by market cap. The index is comprised of the “largest” companies meeting the above criteria.
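The inclusion criteria above can be sketched as a simple screen. A minimal sketch, assuming hypothetical balance-sheet figures; the field names and sample numbers are illustrative, not data from the actual CS 21 constituents:

```python
# Sketch of the CS 21 inclusion test described above. Sample figures
# are illustrative assumptions, not data from the actual index.

def net_current_asset_value(current_assets, current_liabilities,
                            long_term_liabilities, preferred_stock=0.0,
                            minority_interest=0.0):
    """NCAV per the post: current assets minus current liabilities,
    all other long-term liabilities, preferred stock and minority interest."""
    return (current_assets - current_liabilities - long_term_liabilities
            - preferred_stock - minority_interest)

def qualifies(market_cap, ncav, price, avg_100d_volume, is_operating):
    return (market_cap < ncav            # trading below net current asset value
            and price > 1.00             # stock price above $1.00 per share
            and avg_100d_volume >= 5000  # minimum liquidity threshold
            and is_operating)            # operating businesses only, no shells

# Hypothetical company: $120m current assets, $30m current liabilities,
# $10m long-term liabilities, $60m market cap.
ncav = net_current_asset_value(120e6, 30e6, 10e6)
print(ncav)                                      # 80000000.0
print(qualifies(60e6, ncav, 2.50, 20000, True))  # True
```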

The Index is naïve in construction in that:

  • It will be rebalanced annually, and companies no longer meeting the net/net criteria will remain in the index until annual rebalancing.
  • Only bankruptcies, de-listings, or acquisitions will result in replacement
  • Does not discriminate by industry weighting—some industries may have heavy weights.

If a company was acquired, it was not replaced and the proceeds were simply held in cash. Further, stocks were not replaced if they ceased being net nets.

Says Jon of the CS 21 Net/Net Index performance:

This was simply an experiment in order to see how net/nets at a given time would perform over the subsequent two years.

The results are in, and while it was not what we’d originally hoped for, it does lend credence to the long-held notion that net/nets can outperform the broader markets.

The Cheap Stocks 21 Net Net Index finished the two-year period relatively flat, gaining 5.1%. During the same period, the Russell Microcap Index was down 8.61% and the Russell Microcap Value Index was down 9.9%, while the S&P 500 was down 20.27%.

Here are the components, including the weightings and returns of each:

Adaptec Inc (ADPT)
Weight: 18.72%
Computer Systems
Audiovox Corp (VOXX)
Weight: 12.20%
Trans World Entertainment (TWMC)
Retail-Music and Video
Finish Line Inc (FINL)
Nu Horizons Electronics (NUHC)
Electronics Wholesale
Richardson Electronics (RELL)
Electronics Wholesale
Pomeroy IT Solutions (PMRY)
Ditech Networks (DITC)
Communication Equip
Parlux Fragrances (PARL)
Personal Products
InFocus Corp (INFS)
Computer Peripherals
Renovis Inc (RNVS)
Leadis Technology Inc (LDIS)
Semiconductor-Integrated Circuits
Replidyne Inc (RDYN) became Cardiovascular Systems (CSII)
[Edit: +126.36%]
Tandy Brands Accessories Inc (TBAC)
Apparel, Footwear, Accessories
FSI International Inc (FSII)
Semiconductor Equip
Anadys Pharmaceuticals Inc (ANDS)
MediciNova Inc (MNOV)
Emerson Radio Corp (MSN)
Handleman Co (HDL)
Music- Wholesale
Chromcraft Revington Inc (CRC)
Charles & Colvard Ltd (CTHR)
Jewel Wholesale

Cash Weight: 8.58%

Jon is putting together a new net net index, which I’ll follow if he releases it into the wild.


Read Full Post »

One of the most interesting ideas suggested by Ian Ayres’s book Super Crunchers is the role of humans in the implementation of a quantitative investment strategy. As we know from Andrew McAfee’s Harvard Business Review blog post, The Future of Decision Making: Less Intuition, More Evidence, and James Montier’s 2006 research report, Painting By Numbers: An Ode To Quant, in context after context, simple statistical models outperform expert judgements. Further, decision makers who, when provided with the output of the simple statistical model, wave off the model’s predictions tend to make poorer decisions than the model. The reason? We are overconfident in our abilities. We tend to think that restraints are useful for the other guy but not for us. Ayres provides a great example in his article, How computers routed the experts:

To cede complete decision-making power to a statistical algorithm is in many ways unthinkable.

The problem is that discretionary escape hatches have costs too. In 1961, the Mercury astronauts insisted on a literal escape hatch. They balked at the idea of being bolted inside a capsule that could only be opened from the outside. They demanded discretion. However, it was discretion that gave Liberty Bell 7 astronaut Gus Grissom the opportunity to panic upon splashdown. In Tom Wolfe’s memorable account, The Right Stuff, Grissom “screwed the pooch” when he prematurely blew the 70 explosive bolts securing the hatch before the Navy SEALs were able to secure floats. The space capsule sank and Grissom nearly drowned.

The natural question, then, is, “If humans can’t even be trusted with a small amount of discretion, what role do they play in the quantitative investment scenario?”

What does all this mean for human endeavour? If we care about getting the best decisions overall, there are many contexts where we need to relegate experts to supporting roles in the decision-making process. We, like the Mercury astronauts, probably can’t tolerate a system that forgoes any possibility of human override, but at a minimum, we should keep track of how experts fare when they wave off the suggestions of the formulas. And we should try to limit our own discretion to places where we do better than machines.

This is in many ways a depressing story for the role of flesh-and-blood people in making decisions. It looks like a world where human discretion is sharply constrained, where humans and their decisions are controlled by the output of machines. What, if anything, in the process of prediction can we humans do better than the machines?

The answer is that we formulate the factors to be tested. We hypothesise. We dream.

The most important thing left to humans is to use our minds and our intuition to guess at what variables should and should not be included in statistical analysis. A statistical regression can tell us the weights to place upon various factors (and simultaneously tell us how, precisely, it was able to estimate these weights). Humans, however, are crucially needed to generate the hypotheses about what causes what. The regressions can test whether there is a causal effect and estimate the size of the causal impact, but somebody (some body, some human) needs to specify the test itself.
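The division of labour described above can be made concrete with a toy regression: a human hypothesises which factors matter, and least squares estimates the weight on each. A minimal sketch on synthetic data; the factor names and coefficients are invented for illustration:

```python
# Minimal illustration of the division of labour described above:
# a human hypothesises which factors matter; the regression then
# estimates the weights. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# The human's hypothesis: returns are driven by "value" and "momentum".
n = 500
value = rng.normal(size=n)
momentum = rng.normal(size=n)
noise = rng.normal(scale=0.5, size=n)

# True (normally unknown) relationship used to generate the data.
returns = 0.8 * value + 0.3 * momentum + noise

# The machine's job: estimate the factor weights by least squares.
X = np.column_stack([value, momentum])
weights, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(weights)  # close to the true weights [0.8, 0.3]
```

The regression recovers the weights, but it can only weigh the factors a human chose to put into `X` in the first place.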

So the machines still need us. Humans are crucial not only in deciding what to test, but also in collecting and, at times, creating the data. Radiologists provide important assessments of tissue anomalies that are then plugged into the statistical formulas. The same goes for parole officials who judge subjectively the rehabilitative success of particular inmates. In the new world of database decision-making, these assessments are merely inputs for a formula, and it is statistics – and not experts – that determine how much weight is placed on the assessments.

In investment terms, this means honing the strategy. LSV Asset Management, described by James Montier as being a “fairly normal” quantitative fund (as opposed to being “rocket scientist uber-geeks”) and authors of the landmark Contrarian Investment, Extrapolation and Risk paper, describe the ongoing role of the humans in its funds as follows (emphasis mine):

A proprietary investment model is used to rank a universe of stocks based on a variety of factors we believe to be predictive of future stock returns. The process is continuously refined and enhanced by our investment team although the basic philosophy has never changed – a combination of value and momentum factors.

The blasphemy about momentum aside, the refinement and enhancement process sounds like fun to me.

Read Full Post »

Farukh Farooqi, a long-time supporter of Greenbackd and the founder of Marquis Research, a special situations research and advisory firm (for more on Farukh and his methodology, see the article “Scavenger Hunter” in The Deal), provided a guest post on Silicon Storage Technology, Inc (NASDAQ:SSTI) a few weeks back (see the post archive here). At the time of the post, SSTI was trading around $2.70. The stock is up 15.6% to close yesterday at $3.12.

SSTI has announced an amended merger, increasing the bid to $3.00 per share in cash. Here’s the announcement:

Silicon Storage Technology Announces Amended Merger Agreement with Microchip

SST Shareholders to Receive $3.00 Per Share in Cash

SUNNYVALE, Calif., Feb. 23 /PRNewswire-FirstCall/ — SST (Silicon Storage Technology, Inc.) (Nasdaq: SSTI), a leading memory and non-memory products provider for high-volume applications in the digital consumer, networking, wireless communications and Internet computing markets, today announced that it has entered into an amendment to its previously announced merger agreement with Microchip Technology Incorporated (Nasdaq: MCHP) (“Microchip”), a leading provider of microcontroller and analog semiconductors. Pursuant to the amendment, the purchase price for each share of SST common stock has been increased from $2.85 to $3.00 per share in cash. The amended termination fee payable in the circumstances and manner set forth in the merger agreement remains at 3.5% of the total equity consideration.

The amended agreement has been unanimously approved by SST’s Board of Directors acting upon the unanimous recommendation of its independent Strategic Committee. Microchip proposed the revised terms in response to a proposal received by the Strategic Committee from a private equity firm.

As previously announced, the Microchip transaction, which is expected to close in the second calendar quarter of 2010, is conditioned on approval of a majority of the outstanding shares of SST common stock as well as customary closing conditions. The transaction, which will be funded with cash on hand, is not subject to financing.

Houlihan Lokey is serving as the exclusive financial advisor to the Strategic Committee of the SST Board of Directors in connection with the transaction.

Shearman & Sterling LLP is serving as legal advisor to the Strategic Committee of the SST Board of Directors in connection with the transaction.

Cooley Godward Kronish LLP is serving as legal advisor to SST in connection with the transaction.

Wilson Sonsini Goodrich & Rosati, PC is serving as legal advisor to Microchip in connection with the transaction.

Says Farukh:

Subsequent to the MCHP $3/sh cash bid, Cerberus and the SST Full Value Committee (activist group) came up with a competing bid.

This competing bid is very interesting. It offers shareholders a choice: either (1) take $3.00 per share in cash, or (2) take $2.62 in cash (via a special dividend) plus an equity stub, thus giving shareholders the ability to participate in future upside.

I am in the process of valuing the stub but just wanted to make you aware of the development.

This is getting really interesting in that we now have two competing bids and it remains to be seen how, if at all, MCHP counters.

[Full Disclosure: I do not hold SSTI. This is neither a recommendation to buy nor to sell any securities. All information provided is believed to be reliable and is presented for information purposes only. Do your own research before investing in any security.]

Read Full Post »

I’ve just finished Ian Ayres’s book Super Crunchers, which I found via Andrew McAfee’s Harvard Business Review blog post, The Future of Decision Making: Less Intuition, More Evidence (discussed in Intuition and the quantitative value investor). Super Crunchers is a more full version of James Montier’s 2006 research report, Painting By Numbers: An Ode To Quant, providing several more anecdotes in support of Montier’s thesis that simple statistical models outperform the best judgements of experts. McAfee discusses one such example in his blog post:

Princeton economist Orley Ashenfelter predicts Bordeaux wine quality (and hence eventual price) using a model he developed that takes into account winter and harvest rainfall and growing season temperature. Massively influential wine critic Robert Parker has called Ashenfelter an “absolute total sham” and his approach “so absurd as to be laughable.” But as Ian Ayres recounts in his great book Supercrunchers, Ashenfelter was right and Parker wrong about the ‘86 vintage, and the way-out-on-a-limb predictions Ashenfelter made about the sublime quality of the ‘89 and ‘90 wines turned out to be spot on.

Ayres provides a number of stories not covered in Montier’s article, including Don Berwick’s “100,000 lives” campaign, Epagogix’s hit movie predictor, Offermatica’s automated web ad serving software, Continental Airlines’s complaint process, and a statistical algorithm for predicting the outcome of Supreme Court decisions. While seemingly unrelated, all are prediction engines based on a quantitative analysis of subjective or qualitative factors.

The Supreme Court decision prediction algorithm is particularly interesting to me, not because I am an ex-lawyer, but because the law is expressed in language, often far from plain, and seemingly irreducible to quantitative analysis. (I believe this is true also of value investment, although numbers play a larger role in that realm, and therefore it lends itself more readily to quantitative analysis.) According to Andrew Martin and Kevin Quinn, the authors of Competing Approaches to Predicting Supreme Court Decision Making, if they are provided with just a few variables concerning the politics of a case, they can predict how the US Supreme Court justices will vote.

Ayres discussed the operation of Martin and Quinn’s Supreme Court decision prediction algorithm in How computers routed the experts:

Analysing historical data from 628 cases previously decided by the nine Supreme Court justices at the time, and taking into account six factors, including the circuit court of origin and the ideological direction of that lower court’s ruling, Martin and Quinn developed simple flowcharts that best predicted the votes of the individual justices. For example, they predicted that if a lower court decision was considered “liberal”, Justice Sandra Day O’Connor would vote to reverse it. If the decision was deemed “conservative”, on the other hand, and came from the 2nd, 3rd or Washington DC circuit courts or the Federal circuit, she would vote to affirm.
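The O’Connor branch of the flowchart, as quoted, can be sketched as a tiny decision function. This encodes only the two rules described in the excerpt; the real flowchart has more decision points, so everything else is deliberately left open:

```python
# Sketch of the O'Connor branch of Martin and Quinn's flowchart,
# encoding only the two rules quoted above. The real tree has more
# decision points; branches beyond these are left unresolved.

def predict_oconnor(lower_court_direction, circuit):
    """Predict 'reverse' or 'affirm' for Justice O'Connor."""
    if lower_court_direction == "liberal":
        return "reverse"
    if lower_court_direction == "conservative" and circuit in {"2nd", "3rd", "DC", "Federal"}:
        return "affirm"
    return None  # remaining branches not described in the excerpt

print(predict_oconnor("liberal", "9th"))      # reverse
print(predict_oconnor("conservative", "DC"))  # affirm
```

Note that nothing in the function looks at the legal merits of the case, only at where the decision came from and its ideological direction.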

Ted Ruger, a law professor at the University of Pennsylvania, approached Martin and Quinn at a seminar and suggested that they test the performance of the algorithm against a group of legal experts:

As the men talked, they decided to run a horse race, to create “a friendly interdisciplinary competition” to compare the accuracy of two different ways to predict the outcome of Supreme Court cases. In one corner stood the predictions of the political scientists and their flow charts, and in the other, the opinions of 83 legal experts – esteemed law professors, practitioners and pundits who would be called upon to predict the justices’ votes for cases in their areas of expertise. The assignment was to predict in advance the votes of the individual justices for every case that was argued in the Supreme Court’s 2002 term.

The outcome?

The experts lost. For every argued case during the 2002 term, the model predicted 75 per cent of the court’s affirm/reverse results correctly, while the legal experts collectively got only 59.1 per cent right. The computer was particularly effective at predicting the crucial swing votes of Justices O’Connor and Anthony Kennedy. The model predicted O’Connor’s vote correctly 70 per cent of the time while the experts’ success rate was only 61 per cent.

Ayres provides a copy of the flowchart in Super Crunchers. Its simplicity is astonishing: there are only six decision points, and none of them relates to the content of the matter. Ayres poses the obvious question:

How can it be that an incredibly stripped-down statistical model outpredicted legal experts with access to detailed information about the cases? Is this result just some statistical anomaly? Does it have to do with idiosyncrasies or the arrogance of the legal profession? The short answer is that Ruger’s test is representative of a much wider phenomenon. Since the 1950s, social scientists have been comparing the predictive accuracies of number crunchers and traditional experts – and finding that statistical models consistently outpredict experts. But now that revelation has become a revolution in which companies, investors and policymakers use analysis of huge datasets to discover empirical correlations between seemingly unrelated things.

Perhaps I’m naive, but, for me, one of the really surprising implications arising from Martin and Quinn’s model is that the merits of the legal arguments before the court are largely irrelevant to the decision rendered, and it is Ayres’s “seemingly unrelated things” that affect the outcome most. Ayres puts his finger on the point at issue:

The test would implicate some of the most basic questions of what law is. In 1881, Justice Oliver Wendell Holmes created the idea of legal positivism by announcing: “The life of the law has not been logic; it has been experience.” For him, the law was nothing more than “a prediction of what judges in fact will do”. He rejected the view of Harvard’s dean at the time, Christopher Columbus Langdell, who said that “law is a science, and … all the available materials of that science are contained in printed books”.

Martin and Quinn’s model shows Justice Oliver Wendell Holmes to be right: law is nothing more than a prediction of what judges will in fact do. How is this relevant to a deep value investing site? Deep value investing is nothing more than a prediction of what companies and stocks will in fact do. If the relationship holds, seemingly unrelated things will affect the performance of stock prices. Part of the raison d’être of this site is to determine what those things are: to quantify the qualitative factors affecting deep value stock price performance.

Read Full Post »

As I indicated in the first post back for 2010, I’m going to publish some of the more interesting 13D letters filed with the SEC. These are not situations for which I have considered the underlying value proposition, just interesting situations from the perspective of the activist campaign or the Schedule 13D.

Benihana Inc. (NASDAQ:BNHN and BNHNA), the operator of the Benihana teppanyaki restaurants, is the subject of a campaign by founder Rocky Aoki’s widow and eldest children (who control around 32% of the common stock via a trust called “Benihana of Tokyo”) to thwart a proposal by Benihana Inc. to increase the number of its shares of Class A common stock, which would significantly dilute existing shareholders. The campaign seems to have found some support among other large shareholders, notably Blackwell LLC and Coliseum Capital Management LLC, who filed a joint 13D on Wednesday last week. Here is the text of the letter attached to the 13D filed by Blackwell and Coliseum Capital Management:

February 17, 2010

Richard C. Stockinger, Chief Executive Officer & Director
Benihana Inc.
8685 Northwest 53rd Terrace
Miami, Florida 33166

Re: Benihana Inc. – February 22, 2010 Special Meeting of Stockholders

Dear Mr. Stockinger:

On behalf of Coliseum Capital Management, LLC (“Coliseum”), the holder of 9.9% of the Class A Common Stock of Benihana Inc. (the “Company”), I am writing to express concern over the proposed Agreement and Plan of Merger (the “Proposal”) by and between the Company and its wholly-owned subsidiary BHI Mergersub, Inc.

This Proposal (scheduled for shareholder vote on February 22nd) would give the Company the ability to issue 12,500,000 shares of Class A Common Stock under its Form S-3 covering the sale of up to $30,000,000 of securities. Compounded by the anti-dilution provisions contained in the Company’s Convertible Preferred Stock, an equity issuance of this magnitude would be significantly dilutive to existing shareholders.

Thus far, the Company has not provided a compelling rationale to effect such a potentially dilutive fundraising; as a result, Coliseum is not supporting the Proposal.

Specific concerns are outlined below:

1. The Company has not provided a compelling rationale to effect a potentially dilutive fundraising.

The Company appears to be generating positive cash flow with which to invest in the business and/or amortize debt. Conservatively, the Company appears to produce $25-$35 million of run-rate EBITDA, require approximately $9 million in maintenance capital expenditures and have $4-$8 million of taxes, interest and preferred dividends in total, leaving $12-$18 million of positive free cash flow annually with which to further invest in the business and/or amortize debt.

It would appear that there is sufficient liquidity with which to run the business. The Company disclosed availability of over $11 million for borrowing under the terms of the Wachovia line of credit (“LOC”), as of October 11, 2009. The Company’s positive free cash flow should more than offset the $8 million of scheduled reduction in Availability under the LOC before its March 2011 maturity.

The Company does not appear to be over-levered. In the most recent 10-Q, the Company justified the now-proposed action to authorize the issuance of additional equity by highlighting concerns regarding the March 2011 maturity of the LOC. Before taking into account the positive free cash flow generated between October 2009 and March 2011, the Company is levered through its LOC at less than 1.5x its run-rate EBITDA. In our experience, similar restaurant companies are able to access senior debt in this environment at 1.5x (or more).

2. In combination with the anti-dilution provisions contained in the Convertible Preferred Stock (the “BFC Preferred”), an equity issuance at current levels would be significantly dilutive to existing shareholders (even if completed through a Rights Offering).

Depending upon the issue price of new equity, the BFC Preferred could see a reduction to its conversion price of 15%-25%, and thereby gain an additional 300,000-500,000 shares upon conversion.

3. The process through which the company is evaluating alternatives has been opaque.

We find it troubling that the Company has been unwilling to provide further details relating to the Proposal, including rationale, alternatives, process and implications.

4. To date, management has been unwilling to engage in a discussion relating to key questions (below) that any shareholder should have answered before voting in favor of the proposed merger and subsequent fundraising.

What are the Company’s long term plans for each of the concepts?

What assumptions would be reasonable to make over the next several years regarding key revenue and cost drivers, investments in overhead, working capital and capital expenditures, and other sources/uses of cash from operations?

What is the framework for a potential recapitalization?

– What is the rationale for a $30 million capital raise?

– Why raise capital through an equity offering versus other sources?

– What process has been followed (i.e. advisors engaged, alternatives considered, investors/lenders approached)?

– Why change the approach from an amendment of the certificate of incorporation to the current merger proposal?

– What is BFC’s potential role?

– How is the Board dealing with potential conflicts of interest?

It may well be that the Company should undertake a recapitalization, including the issuance of new equity. However, having not been provided information with which to assess the rationale and evaluate the alternatives, Coliseum is not supporting the Proposal.

We look forward to our further discussions.

Very truly yours,
Coliseum Capital Management, LLC
By: Name: Adam L. Gray
Title: Managing Director
BNHN management responded swiftly to the 13D, releasing the following statement on Thursday:

Benihana Inc. Responds to Public Statements of Certain Shareholders Concerning Forthcoming Special Meeting of Shareholders

MIAMI, FLORIDA, February 18, 2010 — Benihana Inc. (NASDAQ: BNHNA; BNHN), operator of the nation’s largest chain of Japanese theme and sushi restaurants, today responded to public statements made by certain shareholders concerning the forthcoming special meeting of shareholders to consider and act upon a proposed merger (the “Merger”) the sole purpose of which is to increase the authorized number of shares of the Company’s Class A Common Stock by 12,500,000.

Richard C. Stockinger, President and Chief Executive Officer, said, “The increase in the authorized shares was one step in a series of actions being taken to ensure that the Company had the flexibility and capability to take advantage of opportunities and or to respond to rapidly changing economic conditions and credit markets. Although the Company’s sales and earnings have been softer than management would have hoped over the past year, we are confident that our recently implemented Renewal Program will help to mitigate or reverse these trends. Still, we remain vulnerable to fluctuations in the larger economy and other risks.”

As previously announced, one result of last year’s sales was the Company’s failure to meet required ratios under its credit agreement with Wachovia Bank, N.A. as at the end of the second quarter of the current fiscal year. That in turn led to amendments to the credit line which will materially reduce the funds available to the Company — what began as a maximum availability of $60 million has been reduced to $40.5 million, will be further reduced to $37.5 million effective July 18, 2010, and further reduced to $32.5 million effective January 2, 2011, with the outstanding balance under the line becoming due and payable in full on March 15, 2011. In addition, the Company expects the judge hearing the Company’s long running litigation with the former minority owners of the Company’s Haru segment to issue a decision in the case shortly which will require the Company to make a payment of at least $3.7 million (the amount offered by the Company) and as much as $10 million (the amount sought by the former minority owners). And while the Company has substantially reduced its capital expenditures allocated to new projects, it continues to have significant capital requirements to maintain its extensive property and equipment and to execute upon its renewal plan.

In the face of these developments, the Board does not believe it would be prudent to do nothing and accordingly has taken a series of steps (all of which have been previously publicly announced) to ensure that the Company is in the strongest possible position to meet any unanticipated challenges it may face.

The Company has detailed in its periodic filings the broad range of operational changes that have been and continue to be made to improve efficiency and increase sales. At the same time, and in support of these operational changes, the Company has taken a series of steps in support of the Company’s financial condition. These included forming a special committee of independent directors (which has retained its own investment bankers and attorneys) to undertake an analysis of the Company’s capital requirements and to evaluate the various alternatives (in the form of both debt and equity) for meeting those requirements. That analysis is ongoing. The Committee has made no recommendation, and the Board has made no decision with respect to the Company’s future capital needs or the best manner of satisfying them. The Company also filed a “generic” registration covering a broad range of alternative financing options (again, both debt and equity) so that, if it determined to do so, it would be in a position to quickly effect a capital raise, and it moved to increase the authorized number of shares of Class A Common Stock for the same reason.

The Board is very much aware of concerns with respect to potential dilution raised by various shareholders and those concerns will certainly be seriously considered in the decision making process. But the Board believes it would be foolhardy not to take the actions it has taken which are designed to give management flexibility in responding to changing circumstances, continue to execute against its renewal plan and have the ability to take advantage of selective growth opportunities as they arise. For these reasons, the Board continues to unanimously urge all shareholders to vote in favor of the proposed Merger.

As to the reasons for the proposed merger (as opposed to a simple amendment to the Certificate of Incorporation): Section 242(b) of the Delaware General Corporation law provides that a class vote is ordinarily required to approve an increase in the authorized number of shares of that class. This would mean that an increase in the Class A stock would require a vote of the holders of Class A stock and an increase in Common stock would require a vote of the holders of Common stock. Delaware law permits a company to “opt out” of this class vote requirement by so providing in its Certificate of Incorporation, and the Company has done just that. Thus, to approve an amendment to increase either the authorized Class A or the authorized Common stock, the Company’s Certificate of Incorporation requires the affirmative vote of a majority of the votes cast by all of the holders of the Company’s common equity. The Certificate of Incorporation was adopted at a time when no other voting securities of the Company were outstanding, and although the Series B Preferred Stock generally votes on an as if converted basis together with the Common Stock, the Certificate of Incorporation does not expressly deal with the voting rights of the Series B Preferred Stock in the context of the “opt out” provision relating to amendments to increase authorized stock. Accordingly, and because the Board believed this was an issue as to which the holders of the Series B Preferred Stock had an interest and as to which they should be entitled to vote, it unanimously elected to proceed under the merger provisions of the Delaware statute rather than the amendment provisions in order to avoid any possible ambiguity.

[Full Disclosure: No holding. This is neither a recommendation to buy nor to sell any securities. All information provided is believed to be reliable and is presented for information purposes only. Do your own research before investing in any security.]

Read Full Post »

Jae Jun at Old School Value has updated his great post back-testing the performance of net current asset value (NCAV) against “net net working capital” (NNWC) by refining the back-test (see NCAV NNWC Backtest Refined). His new back-test increases the rebalancing period to 6 months from 4 weeks, excludes companies with daily volume below 30,000 shares, and introduces the 66% margin of safety to the NCAV stocks (I wasn’t aware that this was missing from yesterday’s back-test; its absence would explain why the performance of the NCAV stocks was so poor).

Jae Jun’s original back-test compared the performance of NCAV and NNWC stocks over the last three years. He calculated NNWC by discounting the current asset value of stocks in line with Graham’s liquidation value discounts, but excludes the “Fixed and miscellaneous assets” included by Graham. Here’s Jae Jun’s NNWC formula:

NNWC = Cash + (0.75 x Accounts receivables) + (0.5 x Inventory)
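Jae Jun’s formula translates directly into code. A minimal sketch comparing NNWC with the plain NCAV calculation it is tested against; the balance-sheet figures are illustrative, not taken from the back-test:

```python
# Direct translation of Jae Jun's NNWC formula above, alongside the
# plain NCAV calculation it is compared with. Figures are illustrative.

def nnwc(cash, receivables, inventory):
    # Graham-style liquidation discounts, excluding fixed and
    # miscellaneous assets, exactly as the formula above is quoted.
    return cash + 0.75 * receivables + 0.5 * inventory

def ncav(current_assets, total_liabilities):
    return current_assets - total_liabilities

# Hypothetical balance sheet ($m): note how much harsher NNWC is.
print(nnwc(cash=10, receivables=20, inventory=30))    # 40.0
print(ncav(current_assets=60, total_liabilities=15))  # 45
```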

Here are Graham’s suggested discounts (extracted from Chapter XLIII of Security Analysis: The Classic 1934 Edition, “Significance of the Current Asset Value”):

As I noted yesterday, excluding the “Fixed and miscellaneous assets” from the liquidating value calculation makes for an exceptionally austere valuation.

Jae Jun has refined his screening criteria as follows:

  • Volume is greater than 30k
  • NCAV margin of safety included
  • Slippage increased to 1%
  • Rebalance frequency changed to 6 months
  • Test period remains at 3 years

Here are Jae Jun’s back-test results with the new criteria:

For the period 2001 to 2004

For the period 2004 to 2007

For the period 2007 to 2010

It’s an impressive analysis by Jae Jun. Dividing the return into three periods is very helpful. While the returns overall are excellent, there were some serious smash-ups along the way, particularly the February 2007 to March 2009 period. As Klarman and Taleb have both discussed, it demonstrates that your starting date as an investor makes a big difference to your impression of the markets and of whatever theory you use to invest. Compare, for example, the experiences of two different NCAV investors, one starting in February 2003 and the second starting in February 2007. The 2003 investor was up 500% in the first year, and had a good claim to possessing some investment genius. The 2007 investor was feeling very ill in March 2009, down around 75% and considering a career in truck driving. Both were following the same strategy, and so really had no basis for either conclusion. I doubt that thought consoles the trucker.
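The start-date effect is easy to see with a toy series. The annual returns below are invented purely to mimic the shape of that 2003–2009 experience; they are not back-test figures:

```python
# Invented annual returns, loosely shaped like the NCAV experience
# described above -- not real back-test data.
returns = {2003: 5.00, 2004: 0.20, 2005: 0.10, 2006: 0.15,
           2007: -0.40, 2008: -0.55, 2009: 1.00}

def growth_of_dollar(start_year, end_year):
    """Cumulative growth of $1 for an investor entering at start_year."""
    growth = 1.0
    for year in range(start_year, end_year + 1):
        growth *= 1.0 + returns[year]
    return growth

# Same strategy, very different records:
print(round(growth_of_dollar(2003, 2009), 2))  # the 2003 "genius"
print(round(growth_of_dollar(2007, 2009), 2))  # the 2007 trucker
```

Under these made-up numbers the 2003 investor roughly quintuples his money over the full period while the 2007 investor is still underwater, despite running the identical strategy throughout.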

Jae Jun’s Old School Value NNWC NCAV Screen is available here (it’s free).

Read Full Post »

Jae Jun at Old School Value has a great post, NCAV NNWC Screen Strategy Backtest, comparing the performance of net current asset value stocks (NCAV) and “net net working capital” (NNWC) stocks over the last three years. To arrive at NNWC, Jae Jun discounts the current asset value of stocks in line with Graham’s liquidation value discounts, but excludes the “Fixed and miscellaneous assets” included by Graham. Here’s Jae Jun’s NNWC formula:

NNWC = Cash + (0.75 x Accounts receivables) + (0.5 x  Inventory)

Here are Graham’s suggested discounts (extracted from Chapter XLIII of Security Analysis: The Classic 1934 Edition, “Significance of the Current Asset Value”):

Excluding the “Fixed and miscellaneous assets” from the NNWC calculation provides an austere valuation indeed (it makes Graham look like a pie-eyed optimist, which is saying something). The good news is that Jae Jun’s NNWC methodology seems to have performed exceptionally well over the period analyzed.

Jae Jun’s back-test methodology was to create two concentrated portfolios, one of 15 stocks and the other of 10 stocks. He rolled the positions on a four-weekly basis, which may be difficult to do in practice (as Aswath Damodaran pointed out yesterday, many a slip twixt cup and the lip renders a promising back-tested strategy useless in the real world). Here’s the performance of the 15 stock portfolio:

“NNWC Incr.” is “NNWC Increasing,” which Jae Jun describes as follows:

NNWC is positive and the latest NNWC has increased compared to the previous quarter. In this screen, NNWC doesn’t have to be less than current market price. Since the requirement is that NNWC is greater than 0, most large caps automatically fail to make the cut due to the large quantity of intangibles, goodwill and total debt.

Both the NNWC and NNWC Increasing portfolios delivered exceptional returns, up 228% and 183% respectively, while the S&P500 was off 26%. The performance of the NCAV portfolio was a surprise, eking out just a 5% gain over the period, which is nothing to write home about, but still significantly better than the S&P500.

The 10 stock portfolio’s returns are simply astonishing:

Jae Jun writes:

An original $100 would have become

  • NCAV: $103
  • NNWC: $544
  • NNWC Incr: $503
  • S&P500: $74

That’s a gain of over 400% for NNWC stocks!
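Those ending values translate to total returns with simple arithmetic; here's a quick check using Jae Jun's figures:

```python
initial = 100.0
ending_values = {"NCAV": 103.0, "NNWC": 544.0, "NNWC Incr": 503.0, "S&P500": 74.0}

for name, end in ending_values.items():
    total_return = (end - initial) / initial  # growth of $100 -> total return
    print(f"{name}: {total_return:+.0%}")
```

NNWC works out to +444%, consistent with the "gain of over 400%" above, against -26% for the S&P500.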

Amazing stuff. It would be interesting to see a full academic study on the performance of NNWC stocks, perhaps with holding periods in line with Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update for comparison. You can see Jae Jun’s Old School Value NNWC NCAV Screen here (it’s free). He’s also provided a list of the top 10 NNWC stocks and top 10 stocks with increasing NNWC in the NCAV NNWC Screen Strategy Backtest post.

Buy my book The Acquirer’s Multiple: How the Billionaire Contrarians of Deep Value Beat the Market on Kindle, paperback, and Audible.

Here’s your book for the fall if you’re on global Wall Street. Tobias Carlisle has hit a home run deep over left field. It’s an incredibly smart, dense, 213 pages on how to not lose money in the market. It’s your Autumn smart read. –Tom Keene, Bloomberg’s Editor-At-Large, Bloomberg Surveillance, September 9, 2014.

Click here if you’d like to read more on The Acquirer’s Multiple, or connect with me on Twitter, LinkedIn or Facebook. Check out the best deep value stocks in the largest 1000 names for free on the deep value stock screener at The Acquirer’s Multiple®.


Read Full Post »

Aswath Damodaran, a Professor of Finance at the Stern School of Business, has an interesting post on his blog Musings on Markets, Transaction costs and beating the market. Damodaran’s thesis is that transaction costs – broadly defined to include brokerage commissions, spread and the “price impact” of trading (which I believe is an important issue for some strategies) – foil, in the real world, investment strategies that beat the market in back-tests. He argues that transaction costs are also the reason why the “average active portfolio manager” underperforms the index by about 1% to 1.5%. I agree with Damodaran. The long-term, successful practical application of any investment strategy is difficult, and is made more so by all of the frictional costs that the investor encounters. That said, I see no reason why a systematic application of some value-based investment strategies should not outperform the market even after taking into account those transaction costs and taxes. That’s a bold statement, and one that calls for equally extraordinary evidence in support, which I do not possess. Regardless, here’s my take on Damodaran’s article.

First, Damodaran makes the point that even well-researched, back-tested, market-beating strategies underperform in practice:

Most of these beat-the-market approaches, and especially the well researched ones, are backed up by evidence from back testing, where the approach is tried on historical data and found to deliver “excess returns”. Ergo, a money making strategy is born.. books are written.. mutual funds are created.

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies does not beat the market and delivers about 1-1.5% less than the index. That number has remained surprisingly stable over the last four decades and has persisted through bull and bear markets. Worse, this under performance cannot be attributed to “bad” portfolio managers who drag the average down, since there is very little consistency in performance. Winners this year are just as likely to be losers next year…

Then he explains why he believes market-beating strategies that work on paper fail in the real world. The answer? Transaction costs:

So, why do portfolios that perform so well in back testing not deliver results in real time? The biggest culprit, in my view, is transactions costs, defined to include not only the commission and brokerage costs but two more significant costs – the spread between the bid price and the ask price and the price impact you have when you trade. The strategies that seem to do best on paper also expose you the most to these costs. Consider one simple example: Stocks that have lost the most of the previous year seem to generate much better returns over the following five years than stocks have done the best. This “loser” stock strategy was first listed in the academic literature in the mid-1980s and greeted as vindication by contrarians. Later analysis showed, though, that almost all of the excess returns from this strategy come from stocks that have dropped to below a dollar (the biggest losing stocks are often susceptible to this problem). The bid-ask spread on these stocks, as a percentage of the stock price, is huge (20-25%) and the illiquidity can also cause large price changes on trading – you push the price up as you buy and the price down as you sell. Removing these stocks from your portfolio eliminated almost all of the excess returns.
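To put a number on the spread problem Damodaran describes, here's a minimal sketch with a hypothetical quote: buying at the ask and selling at the bid, with the quote unchanged, surrenders the full spread:

```python
def round_trip_cost(bid, ask):
    """Fraction of value lost buying at the ask and selling at the bid
    with the quote unchanged: the bid-ask spread, paid once round-trip."""
    return 1.0 - bid / ask

# A sub-$1 "loser" stock with a spread in the 20-25% range Damodaran
# mentions (hypothetical quote: bid $0.80, ask $1.00):
cost = round_trip_cost(bid=0.80, ask=1.00)
print(f"{cost:.0%} of the position is surrendered to the spread")
```

A paper back-test that books trades at the last or mid price never pays this cost, which is consistent with Damodaran's observation that removing the sub-dollar stocks eliminated almost all of the strategy's excess returns.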

In support of his thesis, Damodaran gives the example of Value Line and its mutual funds:

In perhaps the most telling example of slips between the cup and lip, Value Line, the data and investment services firm, got great press when Fischer Black, noted academic and believer in efficient markets, did a study where he indicated that buying stocks ranked 1 in the Value Line timeliness indicator would beat the market. Value Line, believing its own hype, decided to start mutual funds that would invest in its best ranking stocks. During the years that the funds have been in existence, the actual funds have underperformed the Value Line hypothetical fund (which is what it uses for its graphs) significantly.

Damodaran’s argument is particularly interesting to me in the context of my recent series of posts on quantitative value investing. For those new to the site, my argument is that a systematic application of the deep value methodologies like Benjamin Graham’s liquidation strategy (for example, as applied in Oppenheimer’s Ben Graham’s Net Current Asset Values: A Performance Update) or a low price-to-book strategy (as described in Lakonishok, Shleifer, and Vishny’s Contrarian Investment, Extrapolation and Risk) can lead to exceptional long-term investment returns in a fund.

When Damodaran refers to “the price impact you have when you trade” he highlights a very important reason why a strategy in practice will underperform its theoretical results. As I noted in my conclusion to Intuition and the quantitative value investor:

The challenge is making the sample mean (the portfolio return) match the population mean (the screen). As we will see, the real world application of the quantitative approach is not as straight-forward as we might initially expect because the act of buying (selling) interferes with the model.

A strategy in practice will underperform its theoretical results for two reasons:

  1. The strategy in back-test doesn’t have to deal with what I call the “friction” it encounters in the real world. I define “friction” as brokerage, spread and tax, all of which take a mighty bite out of performance. The first two are Damodaran’s transaction costs; tax is an addition. Arguably spread is the most difficult to factor into a model prospectively: one can account for brokerage and tax in the model, but spread is always going to be unknowable before the event.
  2. The act of buying or selling interferes with the market (I think it’s a Schrödinger’s cat-like paradox, but then I don’t understand quantum superpositions). This is best illustrated at the micro end of the market. Those of us who traffic in the Graham sub-liquidation value boat trash learn to live with wide spreads and a lack of liquidity. We use limit orders and sit on the bid (ask) until we get filled. No-one is buying (selling) “at the market,” because, for the most part, there ain’t no market until we get on the bid (ask). When we do manage to consummate a transaction, we’re affecting the price. We’re doing our little part to return it to its underlying value, such is the wonderful phenomenon of value investing mean reversion in action. The back-tested / paper-traded strategy doesn’t have to account for the effect its own buying or selling has on the market, and so should perform better in theory than it does in practice.
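The friction in point 1 can be modeled crudely as successive haircuts on a back-tested gross return. The rates below are illustrative assumptions, not estimates:

```python
def net_of_friction(gross, brokerage=0.002, spread=0.02, tax_rate=0.20):
    """Haircut a back-tested gross return for round-trip brokerage,
    bid-ask spread, and tax on any resulting gain. All rates are
    illustrative assumptions, not estimates."""
    after_costs = (1.0 + gross) * (1.0 - brokerage) * (1.0 - spread) - 1.0
    tax = max(after_costs, 0.0) * tax_rate  # losses attract no tax here
    return after_costs - tax

# A 30% gain on paper shrinks to roughly 22% under these assumptions:
print(f"{net_of_friction(0.30):.1%}")
```

The asymmetry is worth noting: the cost haircuts apply whether the trade wins or loses, while tax only compounds the damage on winners.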

If ever the real-world application of an investment strategy should underperform its theoretical results, Graham liquidation value is where I would expect it to happen. The wide spreads and lack of liquidity mean that even a small, individual investor will likely underperform the back-test results. Note, however, that it does not necessarily follow that the Graham liquidation value strategy will underperform the market, just the model. I continue to believe that a systematic application of Graham’s strategy will beat the market in practice.

I have one small quibble with Damodaran’s otherwise well-argued piece. He writes:

The average active portfolio manager, who I assume is the primary user of these can’t-miss strategies does not beat the market and delivers about 1-1.5% less than the index.

There’s a little rhetorical sleight of hand in this statement (which I’m guilty of on occasion in my haste to get a post finished). Evidence that the “average active portfolio manager” does not beat the market is not evidence that these strategies don’t beat the market in practice. I’d argue that the “average active portfolio manager” is not using these strategies. I don’t really know what they’re doing, but I’d guess the institutional imperative calls for them to hug the index and over- or under-weight particular industries, sectors or companies on the basis of a story (“Green is the new black,” “China will consume us back to the boom,” “house prices never go down,” “the new dot com economy will destroy the old bricks-and-mortar economy” etc). Yes, most portfolio managers underperform the index in the order of 1% to 1.5%, but I think they do so because they are, in essence, buying the index and extracting from the index’s performance their own fees and other transaction costs. They are not using the various strategies identified in the academic or popular literature. That small point aside, I think the remainder of the article is excellent.

In conclusion, I agree with Damodaran’s thesis that transaction costs in the form of brokerage commissions, spread and the “price impact” of trading make many apparently successful back-tested strategies unusable in the real world. I believe that the results of any strategy’s application in practice will underperform its theoretical results because of friction and the paradox of Schrödinger’s cat’s brokerage account. That said, I still see no reason why a systematic application of Graham’s liquidation value strategy or LSV’s low price-to-book value strategy can’t outperform the market even after taking into account these frictional costs and, in particular, wide spreads.

Hat tip to the Ox.

Read Full Post »

Speculating about the level of the market is a pastime for fools and knaves, as I have amply demonstrated in the past (or, as Edgar Allan Poe would have it, “I have great faith in fools — self-confidence my friends will call it.”). In April last year I ran a post, Three ghosts of bear markets past, on DShort.com’s series of charts showing how the current bear market compared to three other bear markets: the Dow Crash of 1929 (1929-1932), the Oil Crisis (1973-1974) and the Tech Wreck (2000-2002). At that time the market was up 24.4% from its low, and I said,

Anyone who thinks that the bounce means that the current bear market is over would do well to study the behavior of bear markets past (quite aside from simply looking at the plethora of data about the economy in general, the cyclical nature of long-run corporate earnings and price-earnings multiples over the same cycle). They might find it a sobering experience.

Now the market is up almost 60% from its low, which just goes to show what little I know:

While none of us are actually investing with regard to the level of the market – we’re all analyzing individual securities – I still find it interesting to see how the present aggregate experience compares to the experience in other epochs in investing. One other chart by DShort.com worth seeing is the “Three Mega-Bears” chart, which treats the recent decline as part of the decline from the “Tech Wreck” on the basis that the peak pre-August 2007 did not exceed the peak pre-Tech Wreck after adjusting for inflation:

It’s interesting for me because it compares the Dow Crash of 1929 (from which Graham forged his “Net Net” strategy) to the present experience in the US and Japan (both of which offer the most Net-Net opportunities globally). Where are we going from here? Que sais-je? (“What do I know?”) The one thing I do know is that 10 more years of a down or sideways market is, unfortunately, a real possibility.

Read Full Post »

President’s Day

Have a good break. See you tomorrow.

Read Full Post »

