Hedonic property value regression is a leading technique for estimating how much consumers are willing to pay for nonmarket amenities. The prevailing style of estimation has evolved in recent years to incorporate insights from the “credibility revolution” in applied economics, with high expectations for data quality and econometric transparency. At the same time, recent research has improved our understanding of how parameters identified by quasi-experimental designs relate to welfare measures. This post describes an article summarizing modern best practices for developing credible hedonic research designs and valid welfare interpretations of the estimates. I wrote the article together with Kelly Bishop, Spencer Banzhaf, Kevin Boyle, Kathrine von Gravenitz, Jaren Pope, Kerry Smith, and Christopher Timmins. It was published in the Summer 2020 issue of the Review of Environmental Economics and Policy as part of a symposium on best practices for using revealed preference methods for nonmarket valuation of environmental quality. A 20-minute video summary is posted here.
There have been thousands of hedonic property value studies since the model was formalized in the 1970s, and the pace has accelerated due to advances in data, econometrics, and computing power. The model’s enduring popularity is easy to understand. It starts with an intuitive premise that is economically plausible and empirically tractable. The model envisions buyers choosing properties based on housing attributes (e.g., indoor space, bedrooms, bathrooms) and on location-specific amenities (e.g., air quality, park proximity, education, flood risk). In the absence of market frictions, spatial variation in amenities can be expected to be capitalized into housing prices. When buyers face the resulting menu of price-attribute-amenity pairings, their purchase decisions can reveal their marginal willingness to pay (MWTP) for each of the amenities. In principle, estimating MWTP is straightforward. In practice, several key modeling decisions must be made. These include defining the market, choosing appropriate measures of prices and amenities, selecting an econometric specification, and developing a research design that isolates exogenous variation in the amenity of interest.
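As a purely illustrative sketch (synthetic data and made-up parameter values, not the article's specification), a log-linear hedonic regression and the MWTP it implies for an amenity might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic housing data; variable names and coefficients are illustrative
sqft = rng.uniform(800, 3000, n)        # indoor space
air_quality = rng.uniform(0, 10, n)     # amenity of interest (index)
log_price = 11.0 + 0.0004 * sqft + 0.03 * air_quality + rng.normal(0, 0.05, n)

# Hedonic price function: regress log price on attributes and the amenity
X = np.column_stack([np.ones(n), sqft, air_quality])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# With a log-price specification, dP/d(amenity) = beta_amenity * P,
# so MWTP is evaluated at a chosen price (here, the sample mean)
mean_price = np.exp(log_price).mean()
mwtp = beta[2] * mean_price
print(f"amenity coefficient: {beta[2]:.4f}, MWTP at mean price: ${mwtp:,.0f}")
```

Note that this toy regression sidesteps every issue the post raises: with real data, the amenity is correlated with unobservables, which is exactly why the research-design step matters.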
Defining the Market. Best practices in hedonic estimation start with defining the relevant housing market in a way that satisfies the “law of one price function”. This means that identical houses will sell for the same price throughout that market. The precise spatial and temporal boundaries that satisfy this condition may vary across space and over time as information, institutions, and moving costs change. One common practice is to define the market as a single metro area over a few years. An alternative is to pool data over larger areas and longer periods, and to model the hedonic price function as evolving over space and time.

Modernizing Regulatory Review, a Presidential memorandum published January 20, 2021, serves as a preface to the regulatory policies of the Biden Administration. As such, the memorandum complements three executive orders (E.O. 13992: Revocation of Certain Executive Orders Concerning Federal Regulation; E.O. 13990: Protecting Public Health and the Environment and Restoring Science to Tackle the Climate Crisis; and E.O. 13979: Ensuring Democratic Accountability in Agency Rulemaking) that collectively rescind the previous administration’s regulatory policy. The regulatory policy foreshadowed in the memorandum and other documents, however, goes beyond rescinding the Trump administration’s program or restoring previous regulatory regimes.
The revelations of the
Benefit-cost analysis, as usually practiced, sums the monetary values of effects on individuals. It can be justified by the potential compensation test: if the total monetary gain to the “winners” (those who gain from a policy) exceeds the total monetary loss to the “losers” (those who are harmed), the “winners” could (in principle) pay compensation to the “losers” so that everyone would judge herself better off with the combined policy and compensation than without. The idea is that by summing the net benefits across individuals, BCA measures “efficiency” or the size of the social pie, and that questions about distribution can be evaluated separately. Logically, policies that expand the social pie permit everyone to have a bigger slice; a smaller pie guarantees that at least some people get a smaller slice. 
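The potential compensation test described above reduces to arithmetic over individual monetized effects. A minimal illustration with made-up numbers:

```python
# Hypothetical monetized effects of a policy on four individuals
# (positive = gain, negative = loss); all numbers are illustrative only.
net_benefits = [50.0, 30.0, -20.0, -15.0]

gains = sum(b for b in net_benefits if b > 0)    # total gain to "winners"
losses = -sum(b for b in net_benefits if b < 0)  # total loss to "losers"

# Potential (Kaldor-Hicks) compensation test: pass if the winners could
# fully compensate the losers and still come out ahead.
passes = gains > losses
net = gains - losses  # the change in the "size of the social pie"
print(passes, net)    # True 45.0
```

Here the winners gain 80, the losers lose 35, so in principle the winners could transfer 35 to the losers and everyone would be at least as well off, with 45 left over.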

Epidemic models often generate new terms or phrases to describe their behavior. Two of these, “herd immunity” and “flattening the curve”, have been widely misunderstood and misused during the COVID epidemic by media, policy makers, and even epidemiologists, who should know better. They have been held up as goals of public health management, but each has a deep downside. Achieving herd immunity simply means reducing the number of susceptible hosts to the point that the chance of an infected individual contacting a susceptible and transmitting the pathogen is too small to sustain the disease. This is achieved either at the cost of terrible human suffering or by vaccination, which is measurably costly, and adequately high vaccination rates are difficult to achieve. Flattening the curve just trades acute pain for chronic pain: reduced peak suffering and health care cost are replaced by an extended period of each, with only a very small reduction in summed morbidity and cost. It can also allow more time for the evolution of new strains that might be less sensitive to established therapies or vaccines.
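Under the textbook assumption of a homogeneously mixing population (a simplification I am adding here, not a claim from the post), the herd immunity threshold is the immune fraction at which each case infects fewer than one susceptible on average:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Immune fraction needed so that the effective reproduction number
    R_eff = R0 * (fraction susceptible) falls below 1; equals 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # the disease cannot sustain itself even with no immunity
    return 1.0 - 1.0 / r0

# Illustrative R0 values only, not estimates for any particular pathogen
for r0 in (1.5, 2.5, 4.0):
    print(r0, round(herd_immunity_threshold(r0), 3))
```

The formula makes the post's point concrete: for a moderately transmissible pathogen (R0 = 2.5), 60% of the population must become immune, through infection or vaccination, before transmission dies out on its own.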
Thomas Schelling suggested in his book Micromotives and Macrobehavior that cost-benefit choices by individuals can explain the emergence of population-level phenomena in a game theory model. We can apply Schelling’s binary choice framework to social distancing.
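A minimal sketch of how Schelling's binary-choice logic could be applied to distancing (the payoff forms and all parameter values below are my illustrative assumptions, not Schelling's or the post's):

```python
# Schelling-style binary choice: each individual distances only if their
# expected infection cost exceeds their personal cost of distancing, and
# the expected infection cost falls as more of the population distances.

def expected_infection_cost(frac_distancing: float) -> float:
    # Risk, scaled so the cost is 1.0 when nobody distances, declines
    # linearly with the fraction of others distancing (an assumption).
    return 1.0 - 0.9 * frac_distancing

def equilibrium(costs, iters=100):
    """Iterate best responses toward a fixed point: given the current
    fraction distancing, each person re-decides; repeat."""
    frac = 0.0
    for _ in range(iters):
        risk = expected_infection_cost(frac)
        frac = sum(c < risk for c in costs) / len(costs)
    return frac

# Heterogeneous personal costs of distancing, spread uniformly on [0, 1)
n = 1000
costs = [i / n for i in range(n)]
print(equilibrium(costs))
```

The macrobehavior emerges from the micromotives: the population settles near an interior equilibrium (about 53% distancing with these made-up numbers), where the marginal person's cost of distancing just equals their expected infection cost, even though no one coordinates.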
Past studies of Ebola, HIV, dengue, and Zika by infectious disease epidemiologists provide a road map for the use of outbreak and contact tracing data to estimate transmission parameters for application in mathematical models. There are several primary goals for modeling efforts in this context.
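As a generic illustration of the kind of mathematical model such transmission parameters feed, here is a textbook SIR compartment model (not any specific model from those studies; the beta and gamma values are placeholders):

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Discrete-time (Euler) SIR model tracking susceptible, infected, and
    recovered population fractions. beta is the transmission rate and gamma
    the recovery rate estimated from outbreak data; R0 = beta / gamma."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt  # new infections this step
        new_rec = gamma * i * dt     # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Placeholder parameters giving R0 = 0.4 / 0.2 = 2.0
s, i, r = sir(beta=0.4, gamma=0.2, s0=0.999, i0=0.001, days=200)
print(round(s, 3), round(i, 3), round(r, 3))
```

In practice the estimation runs in the other direction: outbreak and contact tracing data pin down beta and gamma, and the fitted model is then used to project epidemic trajectories and evaluate interventions.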
Public health responses to epidemics have been developed and refined over more than five centuries of experience. In recent decades, thirteen new zoonotic infections that affect humans, from Ebola in 1976 to Middle East Respiratory Syndrome in 2012, have emerged. Based on these experiences, the CDC Field Epidemiology Manual lays out a clear set of steps for outbreak investigation and response, including the importance of communicating clearly with the public. In countries and regions where time-tested public health tools based on these steps have been used, COVID is largely under control.