Marginal Revolution

Small Steps Toward A Much Better World

 

Is financial economics still economics?
2026-04-01 04:03 UTC by Tyler Cowen

That all sounded wonderful, and that core model and its offshoots dominated financial research for decades. The problem, however, was that it wasn’t true, or at least it wasn’t nearly as true as we had thought and hoped. When financial economists refined the models with more complete specifications, it turned out that Beta hardly predicted stock returns at all. Eugene Fama and Kenneth French delivered one of the final blows to the earlier approaches with a 1992 paper showing that Beta had essentially no explanatory power over expected returns. Since Fama himself was one of the original architects of CAPM-like reasoning, and French was also a renowned financial economist, these revisions to the model were credible. For all its original promise, marginalism, and the concomitant notion of diminishing marginal utility, no longer seemed to help explain asset returns.
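For readers who want to see what “Beta” means operationally, here is a minimal sketch, using synthetic monthly returns rather than any real data (the numbers below are invented for illustration, not drawn from the Fama-French sample): Beta is the slope from regressing a stock’s excess returns on the market’s, and CAPM says that slope alone should determine the stock’s expected premium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly excess returns for the market and one stock.
# (Illustrative numbers only -- not real data.)
n_months = 120
market_excess = rng.normal(0.006, 0.045, n_months)  # market minus risk-free
true_beta = 1.2
stock_excess = 0.001 + true_beta * market_excess + rng.normal(0, 0.03, n_months)

# CAPM beta: slope of the stock's excess return on the market's,
# i.e. Cov(r_i, r_m) / Var(r_m).
beta = np.cov(stock_excess, market_excess, ddof=1)[0, 1] / np.var(market_excess, ddof=1)

# CAPM then predicts E[r_i - r_f] = beta * E[r_m - r_f]: beta alone
# should carry all the cross-sectional information about returns.
predicted_premium = beta * market_excess.mean()
print(f"estimated beta: {beta:.2f}")
print(f"predicted monthly premium: {predicted_premium:.4f}")
```

The Fama-French finding, loosely put, was that once you control for other firm characteristics, the cross-section of actual average returns does not line up with these beta-based predictions.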

Under one plausible account of intellectual history, you can date the decline of marginalism to that 1992 paper. In the most rigorous, data-oriented, and highest-paying field of economics, namely finance, marginalist constructs had every chance to succeed. In fact, they ran the board for several decades. But over time they failed. In the most prestigious field of economics, marginalism has been in full retreat for over 30 years, and it shows no signs of making a comeback.

We already know that financial practice is dominated by the (non-economist) quants. But how about financial economics research, the parts that are still done by economists? What direction is that work moving in?

I was struck by a 2024 paper published in the Journal of Financial Economics, one of the two leading journals of financial economics (Journal of Finance is the other). The authors are Scott Murray, Yusen Xia, and Houping Xiao, and the title is “Charting by Machines.” The core result is pretty simple, and best expressed in the well-written abstract:

“We test the efficient market hypothesis by using machine learning to forecast stock returns from historical performance. These forecasts strongly predict the cross-section of future stock returns. The predictive power holds in most subperiods and is strong among the largest 500 stocks. The forecasting function has important nonlinearities and interactions, is remarkably stable through time, and captures effects distinct from momentum, reversal and extant technical signals. These findings question the efficient market hypothesis and indicate that technical analysis and charting have merit. We also demonstrate that machine learning models that perform well in optimization continue to perform well out-of-sample.” Murray, Xia, and Xiao (2024, p. 1).

Or consider the new paper by Borri, Chetverikov, Liu, and Tsyvinski (2024). They propose a new non-linear, single-factor asset pricing model. In the abstract: “Most known finance and macro factors become insignificant controlling for our single-factor.” Yet you won’t find traditional economic variables discussed in this paper; it is all about the math, in particular an application of the Kolmogorov-Arnold representation theorem.

In other words, the successful approach to predicting returns is giving up on traditional portfolio theory and using the “theory-less” technique of machine learning. Although this is published in the Journal of Financial Economics, in some significant sense it is not economic reasoning at all. It is calculation, combined with expertise in math and computer science. The modeling is not economic modeling in a manner that has ties to marginalism or standard intuitive microeconomic theory. And the work is predicting excess returns in a pretty robust and successful way…
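To make the “theory-less” approach concrete, here is a toy sketch of the general recipe: feed a model nothing but past returns and let it fit whatever pattern is there. Everything below is synthetic and simplified (a made-up reversal pattern, and plain ridge regression standing in for the far richer neural-network and tree models the papers above actually use).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic return series with a mild, made-up reversal pattern,
# so that history carries some signal -- purely illustrative.
n = 2000
returns = np.zeros(n)
for t in range(1, n):
    returns[t] = -0.2 * returns[t - 1] + rng.normal(0, 0.02)

# Build (features, target) pairs: past 12 returns -> next return.
# No economic variables enter -- only price history.
lookback = 12
X = np.array([returns[t - lookback:t] for t in range(lookback, n)])
y = returns[lookback:]

# Fit ridge regression on the first 80%, evaluate on the rest.
split = int(0.8 * len(y))
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]
lam = 1e-3
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(lookback), Xtr.T @ ytr)

pred = Xte @ w
oos_corr = np.corrcoef(pred, yte)[0, 1]
print(f"out-of-sample correlation: {oos_corr:.3f}")
```

Nothing in the fitted weights corresponds to a marginalist concept like risk aversion or diminishing marginal utility; the model is judged purely on its out-of-sample forecasts.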

There is a recent working paper which is perhaps more striking yet, by Antoine Didisheim, Shikun (Barry) Ke, Bryan T. Kelly, and Semyon Malamud. They pick up from Arbitrage Pricing Theory (APT), a well-established idea from financial economics. APT typically looks for “factors” in the data which predict excess returns, and a traditional APT model might have found five or six such factors. Are “inflation” or perhaps “the term structure of interest rates” useful factors? Well, that can be debated, but if so, those results sound pretty intuitive. Those intuitions, however, seem to be disappearing. The authors apply machine learning methods to look for more factors, and as we know, machine learning is very good at finding non-obvious relationships in the data. The largest model they built has 360,000 (!) factors, and it reduces pricing errors by 54.8 percent relative to the classic six-factor model from Fama and French. Bravo to the authors, but what kinds of intuitions do you think can possibly be supported by those 360,000 factors?
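The factor logic behind that “pricing error” number can be shown in miniature. In a linear factor model, a test asset’s expected excess return should be its factor loadings times the factor premia, and the “pricing error” is the intercept (alpha) left unexplained; adding factors that genuinely carry premia shrinks that alpha. The sketch below uses three made-up factors, not the actual models from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel: monthly excess returns for one test asset, driven by
# three latent factors with positive premia (purely synthetic).
n = 240
factors = rng.normal(0.01, 0.03, size=(n, 3))
loadings = np.array([0.9, 0.6, 0.5])
returns = factors @ loadings + rng.normal(0, 0.02, n)

def pricing_error(returns, factor_matrix):
    """Absolute alpha from a time-series regression of returns on factors."""
    X = np.column_stack([np.ones(len(returns)), factor_matrix])
    coefs, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return abs(coefs[0])  # the intercept is the unexplained average return

# Using only the first factor leaves a larger alpha than using all three,
# because the omitted factors' premia end up in the intercept.
alpha_one = pricing_error(returns, factors[:, :1])
alpha_all = pricing_error(returns, factors)
print(f"alpha with 1 factor:  {alpha_one:.5f}")
print(f"alpha with 3 factors: {alpha_all:.5f}")
```

With five or six factors, each one can still be named and debated; with 360,000 machine-generated factors, the same mechanics hold but the naming exercise becomes hopeless.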

That is from my new book, The Marginal Revolution: Rise and Decline, and the Pending Revolution in AI.
