Jeffrey Pontiff—holder of the James F. Cleary Chair in Finance at the Carroll School—started investing when he was 13 years old. Every day, on his way home from school, he’d stop off at the E.F. Hutton office in Erie, Pennsylvania, his hometown, to check the value of his stocks. “I lost all my money on my first investment,” he says.
That setback didn’t dim Pontiff’s interest. He kept investing—he even owned Apple in its early days—and studying the market. His fascination led him not only to graduate school in finance at the University of Rochester and a faculty position at Boston College but also to a prestigious award last year for his research.
The American Finance Association named his paper “Does Academic Research Destroy Stock Return Predictability?” the winner of the 2016 Amundi Smith Breeden Prize. The prize, accompanied by $25,000, recognizes the top paper published in the Journal of Finance, a leading publication in the field. The paper was written with David McLean, a 2006 graduate of the Carroll School’s doctoral program in finance and a professor at Georgetown University.
The paper is very much a combination of Pontiff’s lifelong fascination with investing and his academic expertise. It examines how the stock market responds to potential investing strategies identified by fellow academics. The paper finds that investors appear to learn about the strategies through the academic publication process and then trade on them. Thus the “excess returns”—that is, the returns above the market’s average return—identified by the researchers decay after papers are published (but don’t disappear).
Put differently, Pontiff and McLean show that investors care about academic research. They’re even willing to bet their money on it.
“With this paper, I felt like I was able, for the first time in my life, to give people an answer to a fundamental question that we academics never really knew the answer to,” Pontiff says.
Over the last 40 years, finance researchers have published dozens of papers claiming to pinpoint predictors of the returns of one group of stocks or another. These range from the well known, like the tendency of small-capitalization stocks to outperform the market on average, to the more obscure, like the tendency of companies with good corporate governance to do so. But every paper was necessarily limited in its conclusions: the authors, in effect, took a snapshot of the stock market and said, “Here’s what we saw then.”
Pontiff and McLean decided to dig deeper and see whether the outperformance endured beyond the periods defined in the original papers. That question matters because it illuminates Pontiff’s underlying, broader question: Does academic research influence the real world of investing?
The professors found that it did. They examined 97 return predictors identified in 79 papers. That big slug of data gave them what researchers call “statistical power”; it let them draw more definitive conclusions and screen out the effect of random variation.
“It’s like asking whether cigarettes cause cancer,” Pontiff says. “If somebody says, my grandpa smoked and he died, that doesn’t really tell you anything. But if you look at a million smokers, you can draw strong conclusions.”
Pontiff and McLean analyzed the return predictors in three steps. To understand how, it helps to know a little about academic publishing. Publishing a paper in an academic journal typically takes several years. Initially, the authors will offer up a working paper, which they’ll post publicly on a website like SSRN. They’ll then present the paper to fellow scholars in workshops, receive feedback, and revise it. They’ll also submit the paper to journals for possible publication and receive more suggestions. Eventually, the paper appears in print. In the meantime, the findings are publicly available to anyone who’s interested—including, potentially, investors.
The lengthy publication process let Pontiff and McLean create their three-part test. First, they replicated the original finding. Then they tested the return predictor in the period between the paper’s first public appearance and its publication in a journal. Finally, they tested it again in a period after publication.
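McLean and Pontiff’s actual tests rest on regressions across 97 predictors, but the logic of the three-window comparison can be sketched with toy numbers. Everything below is hypothetical and for illustration only — the figures are invented, not drawn from the paper:

```python
# Toy illustration (not the authors' code) of the three-part test:
# compare a strategy's average monthly excess return in three windows.
from statistics import mean

# Hypothetical monthly excess returns (%) for one return predictor.
in_sample = [0.8, 0.6, 0.9, 0.7, 0.8]        # original study period
pre_publication = [0.6, 0.5, 0.7, 0.4, 0.6]  # working paper circulating
post_publication = [0.3, 0.2, 0.4, 0.1, 0.3] # after journal publication

for label, window in [("in-sample", in_sample),
                      ("pre-publication", pre_publication),
                      ("post-publication", post_publication)]:
    print(f"{label}: mean excess return = {mean(window):.2f}% per month")
# → in-sample: 0.76, pre-publication: 0.56, post-publication: 0.26
```

In this made-up example the excess return shrinks at each stage but stays above zero — the same qualitative pattern the paper reports: decay, not disappearance.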
“What we see is, after a paper is published, if it says a company’s price-to-book [ratio] predicts stock returns, that continues to predict stock returns in the future,” Pontiff says.
What happens, he explains, is that investors see the information in the academic publication and trade on it. As more of them try to exploit it, their trading bids away the excess return that the researchers originally identified.
It’s an example of the constant push and pull of the stock market. If investors see something as undervalued—and thus likely to provide a future return—they jump in. If, for example, a researcher reports that companies with good governance outperform the market on average, investors read that and buy stock in those companies. As more of them do so, the prices rise, and the undervaluation dissipates.
Pontiff says his paper answers criticisms that he sometimes hears of finance scholarship. “There are these two extreme views: One is that what we do is pure history, with no implications for the future. The other is that what we do has no useful message for people in the market. We answer both of those.”
That is, he and his coauthor show that return predictors identified by fellow professors do endure and investors do pay attention. In the end, the academic research doesn’t “destroy” stock return predictability, but it does diminish the excess returns over time.
“I didn’t know what David and I would find when we started,” Pontiff adds. “But I thought we were blessed in that, no matter what we found, unless it was just a bunch of statistical noise, people would care.”
Fellow finance scholars not only cared—they recognized the findings as one of the most important recent contributions to their field.
Tim Gray is a freelance writer and writing instructor at the Carroll School.