Abstract

Empirical results reported in economics journals are selected from a large set of estimated models. Journals, through their editorial policies, engage in some selection, which in turn stimulates extensive model searching and prescreening by prospective authors. Since this process is well known to professional readers, the reported results are widely regarded as overstating the precision of the estimates, and probably distorting them as well. As a consequence, statistical analyses are either greatly discounted or completely ignored. This unfortunate equilibrium in the market for information is a result of the current econometric technology, which generates inferences only as if a precisely defined model were available, and which can be used to explore the sensitivity of inferences only to discrete changes in assumptions. The reporting of a complete sensitivity analysis is therefore ruled out: first, because the econometric theory that takes models as given would be rendered explicitly inadequate if the sensitivity analysis were reported, and second, because the econometric technology, if used to explore sensitivity issues, would generate vast numbers of estimated models that journals are rightfully reluctant to print. It is the purpose of this article to discuss an alternative econometric technology that could increase the value of our profession's limited data resources. The basic assumption underlying this technology is that no econometric model can be taken as given. Because there are many models which could serve as a basis for a data analysis, there are many conflicting inferences which could be drawn from a given data set. If this fact of life is acknowledged, it deflects econometric theory from the traditional task of identifying the unique inferences implied by a specific model to the task of determining the range of inferences generated by a range of models.
We propose that researchers be given the task of identifying interesting families of alternative models and be expected to summarize the range of inferences implied by each of the families. When a range of inferences is small enough to be useful and when the corresponding family of models is broad enough to be believable, we may conclude that the data yield useful information. When the range of inferences is too wide to be useful, and when the corresponding family of models is so narrow that it cannot credibly be reduced, then we must conclude that inferences from the data are too fragile to be useful. This contrasts greatly with the reporting schemes currently used by individual researchers. As a profession, however, we do suspend judgment on econometric results until they hold up under inspection by other researchers using other models. The advocacy process we use to accumulate professional opinion is therefore aimed in the same direction as our proposals, but the path we recommend is much more direct and the outcome is much more clearly stated. A simple introduction to this alternative econometric technology is given in section I of this paper. In writing this section we have attempted to communicate the main ideas as concisely as possible. As a consequence, there is no reference to any sophisticated statistical theory and especially no mention of the Reverend Thomas Bayes. For a more complete statement as well as theological fanfare, consult Leamer (1978). The proper test of our proposals is whether they are useful in practice. We believe that researchers will find them to be efficient tools for discovering the information in data sets and for communicating findings to the consuming public. In an effort to make clear the value of these techniques, we present two examples in section II.
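The proposal above — fix a family of alternative models, estimate each, and report the range of the resulting inferences rather than a single point estimate — can be illustrated with a minimal sketch. The example below is not the authors' code; it is a hypothetical implementation in which the family of models consists of every subset of a set of "doubtful" control variables, and the reported quantity is the range of the OLS coefficient on a focus variable. All variable names and the synthetic data are illustrative assumptions.

```python
# Hedged sketch of reporting the fragility of a regression estimate:
# estimate the coefficient on a focus variable under every model in a
# family (here, all subsets of doubtful controls) and report the range.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 200
focus = rng.normal(size=n)            # variable whose coefficient we care about
doubtful = rng.normal(size=(n, 3))    # controls we are unsure about including
y = 1.5 * focus + doubtful @ np.array([0.4, 0.0, -0.2]) + rng.normal(size=n)

def focus_coefficient(included):
    """OLS coefficient on `focus` with a given subset of doubtful controls."""
    cols = [np.ones(n), focus] + [doubtful[:, j] for j in included]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Every subset of the doubtful controls defines one model in the family.
k = doubtful.shape[1]
estimates = [focus_coefficient(s)
             for r in range(k + 1)
             for s in combinations(range(k), r)]
low, high = min(estimates), max(estimates)
print(f"{len(estimates)} models; focus coefficient ranges "
      f"from {low:.3f} to {high:.3f}")
```

If the interval [low, high] is narrow across a believable family of models, the data are informative about the focus coefficient; if it is wide, the inference is fragile in the sense discussed in the abstract.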
These methods are not without their own problems, the most serious of which is their concentration on the point estimation problem and their neglect of hypothesis testing or interval estimation. The basic approach to studying and reporting the fragility of estimates which we describe in this paper can be readily extended to studying and reporting the fragility of t-values, though computational difficulties do arise.

Received for publication June 15, 1981. Revision accepted for publication August 2, 1982.
* University of California, Los Angeles, and Harvard University, respectively. Research supported by NSF Grant SOC78-09477. Comments of the referees have helped to improve both the content and the exposition. Thomas Wolff is thanked for able research assistance.

Keywords

Fragility, Econometrics, Regression, Regression analysis, Economics, Statistics, Mathematics

Publication Info

Year: 1983
Type: article
Volume: 65
Issue: 2
Pages: 306-306
Citations: 326
Access: Closed


Cite This

Edward E. Leamer, Herman B. Leonard (1983). Reporting the Fragility of Regression Estimates. The Review of Economics and Statistics, 65(2), 306-306. https://doi.org/10.2307/1924497

Identifiers

DOI: 10.2307/1924497