

What Will The Impact Of Machine Learning Be On Economics?
Source: Susan Athey



Answer by Susan Athey, Economics of Technology Professor, Stanford GSB; Boards: Expedia, Ripple, on Quora:

The short answer is that I think it will have an enormous impact. In the early days the methods will be used largely “off the shelf,” but in the longer run econometricians will modify and tailor them to meet the needs of social scientists, who are primarily interested in conducting inference about causal effects and estimating the impact of counterfactual policies (that is, things that haven’t been tried yet, or what would have happened if a different policy had been used). Examples of questions economists often study are the effects of changing prices, introducing price discrimination, or changing the minimum wage, or evaluating advertising effectiveness. We want to estimate what would happen in the event of a change, or what would have happened if the change hadn’t taken place.

As evidence of the impact already, Guido Imbens and I attracted over 250 economics professors to an NBER session on a Saturday afternoon last summer, where we covered machine learning for economists, and wherever I present on this topic to economists I attract large crowds. I think similar things are true for the small set of other economists working in this area. There were hundreds of people in a session on big data at the AEA meetings a few weeks ago.

Machine learning is a broad term; I’m going to use it fairly narrowly here. Within machine learning, there are two branches, supervised and unsupervised machine learning. Supervised machine learning typically entails using a set of “features” or “covariates” (x’s) to predict an outcome (y). There are a variety of ML methods, such as LASSO (see Victor Chernozhukov (MIT) and coauthors who have brought this into economics), random forest, regression trees, support vector machines, etc. One common feature of many ML methods is that they use cross-validation to select model complexity; that is, they repeatedly estimate a model on part of the data and then test it on another part, and they find the “complexity penalty term” that fits the data best in terms of mean-squared error of the prediction (the squared difference between the model prediction and the actual outcome). In much of cross-sectional econometrics, the tradition has been that the researcher specifies one model and then checks “robustness” by looking at 2 or 3 alternatives. I believe that regularization and systematic model selection will become a standard part of empirical practice in economics as we more frequently encounter datasets with many covariates, and also as we see the advantages of being systematic about model selection.
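To make the cross-validation idea concrete, here is a minimal sketch, assuming scikit-learn and simulated data (the dimensions and coefficients are purely illustrative, not taken from any study mentioned here), of how a method like LASSO chooses its complexity penalty by repeatedly fitting on part of the data and scoring mean-squared prediction error on the rest:

```python
# Minimal sketch: cross-validation selects the LASSO complexity penalty.
# Data are simulated; in practice X would hold many candidate covariates.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 50                     # many covariates, only a few truly matter
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]
y = X @ true_beta + rng.normal(size=n)

# LassoCV fits the model across a grid of penalty values, holding out folds
# of the data, and keeps the penalty with the best out-of-sample MSE.
model = LassoCV(cv=5).fit(X, y)
print("chosen penalty (alpha):", model.alpha_)
print("number of nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```

The point of the sketch is the contrast with the “specify one model, check 2 or 3 alternatives” tradition: the penalty, and hence the set of covariates retained, is chosen systematically by out-of-sample fit rather than by the researcher’s choice of a single specification.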


Sendhil Mullainathan (Harvard) and Jon Kleinberg, with a number of coauthors, have argued that there is a set of problems where off-the-shelf ML methods for prediction are the key part of important policy and decision problems. They use examples like deciding whether to do a hip replacement operation for an elderly patient; if you can predict based on their individual characteristics that they will die within a year, then you should not do the operation. Many Americans are incarcerated while awaiting trial; if you can predict who will show up for court, you can let more people out on bail. ML algorithms are currently in use for this decision in a number of jurisdictions. Goel, Rao, and Shroff presented a paper at the AEA meetings a few weeks ago using ML methods to examine stop-and-frisk laws. See also the interesting work using ML prediction methods in the “Predictive Cities” session of the 2016 ASSA Preliminary Program, where we see ML used in the public sector.
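As a toy illustration of a “prediction policy problem” in the spirit of the bail example, here is a sketch using simulated data and an off-the-shelf classifier; the features, outcome definition, and threshold are hypothetical and not taken from the papers mentioned above:

```python
# Toy prediction policy problem: predict appearance in court from defendant
# features, then use the prediction to inform a release decision.
# All data, names, and thresholds here are simulated and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 10))                  # stand-in defendant features
p_appear = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
appeared = rng.binomial(1, p_appear)          # 1 = showed up for court

X_tr, X_te, y_tr, y_te = train_test_split(X, appeared, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The decision-relevant quantity is the prediction itself: recommend release
# when the predicted probability of appearing is high.
prob_appear = clf.predict_proba(X_te)[:, 1]
release = prob_appear > 0.8                   # illustrative policy threshold
print("share recommended for release:", release.mean())
```

The rule here is built on prediction alone; the next point turns to the many economic questions where good prediction is not enough.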

Despite these fascinating examples, in general ML prediction models are built on a premise that is fundamentally at odds with a lot of social science work on causal inference. The foundation of supervised ML methods is that model selection (cross-validation) is carried out to optimize goodness of fit on a test sample: a model is good if and only if it predicts well. Yet a cornerstone of introductory econometrics is that prediction is not causal inference, and indeed a classic economic example is that in many economic datasets, price and quantity are positively correlated. Firms set prices higher in high-income cities where consumers buy more; they raise prices in anticipation of times of peak demand. A large body of econometric research seeks to REDUCE the goodness of fit of a model in order to estimate the causal effect of, say, changing prices. If prices and quantities are positively correlated in the data, any model that estimates the true causal effect (quantity goes down if you raise the price) will not fit the data as well. Where the econometric model with a causal estimate would do better is at fitting what happens if the firm actually changes prices at a given point in time, that is, at making counterfactual predictions when the world changes. Techniques like instrumental variables seek to use only some of the information in the data (the “clean” or “exogenous” or “experiment-like” variation in price), sacrificing predictive accuracy in the current environment to learn about a more fundamental relationship that will help make decisions about changing price. This type of model has received almost no attention in ML.
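To see the tension in a concrete (simulated) example, here is a sketch implemented with plain numpy rather than any particular econometrics package; the data-generating process, coefficients, and variable names are all invented for illustration. Unobserved demand conditions push prices up, so the best-fitting predictive regression shows a positive price coefficient, while two-stage least squares using an exogenous cost shifter as an instrument recovers the true negative causal effect:

```python
# Naive predictive regression vs. two-stage least squares (2SLS) in the
# price/quantity example. Simulated data; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
demand_shock = rng.normal(size=n)       # unobserved demand conditions
cost_shifter = rng.normal(size=n)       # instrument: exogenous cost variation

# Firms set higher prices when demand is strong, so price is endogenous.
price = 1.0 * cost_shifter + 1.5 * demand_shock + rng.normal(size=n)
# True causal effect of price on quantity is -1.
quantity = -1.0 * price + 4.0 * demand_shock + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a univariate OLS regression of y on x (with intercept)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return float(x_c @ y_c) / float(x_c @ x_c)

# Best-fitting predictive slope: positive, because price tracks demand.
print("naive OLS slope: %.2f" % ols_slope(price, quantity))

# 2SLS: keep only the "clean" variation in price driven by the instrument
# (first stage), then regress quantity on the fitted price (second stage).
first_stage = ols_slope(cost_shifter, price)
price_hat = price.mean() + first_stage * (cost_shifter - cost_shifter.mean())
print("2SLS slope:      %.2f" % ols_slope(price_hat, quantity))
```

The instrumental-variables estimate fits the observed data worse than the naive regression, but it is the one that answers the counterfactual question of what happens to quantity if the firm actually changes its price.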

