My post on March 13 summarizing an upcoming paper I’m writing on Airbnb and gentrification in New York with Alexander Weisler (a former graduate student of mine) caused a bit of a stir, including some pushback from Airbnb’s CEO. So, in the interests of transparency, I wanted to discuss the methodology I used to generate my conclusions in more detail. This post will allow anyone to replicate, confirm or challenge the findings, by clarifying both the analytical procedure and the various assumptions I made along the way.
To begin with, I carried out my analysis using proprietary data provided by the consultancy Airdna. This is a firm that specializes in scraping the publicly available Airbnb website and aggregating the data they find, and it is one of the two widely relied upon third-party estimates of Airbnb’s activities. (The other is Murray Cox’s excellent open-data effort Inside Airbnb; something that is on my near-term agenda is to systematically compare the estimates generated by Airdna and Inside Airbnb, as a means of “triangulating” the reliability of third-party data.)
It would be much better to do this analysis with official, accurate data from Airbnb, but they are extremely secretive about their data, even when faced with legal requirements, and when they have released data, observers have concluded that they've done so in a misleading fashion. So other researchers and I have to settle for third-party data.
The data I used is the complete property file for all listings in the entire New York metropolitan statistical area as of September 2016. This includes many listings which are presumably now defunct (e.g. where the listing’s specific web address was last successfully scraped in 2014), as well as many listings which have only recently been added to Airbnb and haven’t yet generated much or any activity. The file contains 141,657 listings, 110,478 of which have had verified activity since September 2015—one year before the end date of the dataset.
Where I set the threshold for an “active” listing could potentially change the results quite a bit (and probably explains the difference between my estimates and those from Inside Airbnb, as I discuss below), but what I found was that most of the old listings had very little activity, so the numbers of revenue-generating listings didn’t change much. For instance, by setting the threshold at September 2015, I excluded 31,179 listings, but only 501 (1.6%) of these excluded listings had any revenue listed at all, and only 321 (1%) had more than $1000 in estimated annual revenue.
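To make the filtering concrete, here’s a minimal Python sketch of the cut-off logic. The field names (`last_scraped`, `annual_revenue`) are hypothetical stand-ins, not Airdna’s actual schema:

```python
from datetime import date

# Toy records standing in for rows of the Airdna property file;
# the field names here are hypothetical, not Airdna's actual schema
listings = [
    {"id": 1, "last_scraped": date(2014, 6, 1), "annual_revenue": 0},
    {"id": 2, "last_scraped": date(2016, 3, 10), "annual_revenue": 4200},
    {"id": 3, "last_scraped": date(2015, 11, 2), "annual_revenue": 800},
]

ACTIVE_CUTOFF = date(2015, 9, 1)  # one year before the end of the dataset

active = [l for l in listings if l["last_scraped"] >= ACTIVE_CUTOFF]
excluded = [l for l in listings if l["last_scraped"] < ACTIVE_CUTOFF]

# How much did the excluded listings actually earn?
excluded_with_revenue = [l for l in excluded if l["annual_revenue"] > 0]
```

Moving the `ACTIVE_CUTOFF` date is exactly the sensitivity experiment I describe in the comparison with Inside Airbnb below.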
The entry for each listing provides a large assortment of metadata. The metadata I focused on was:
- The listing type: private room, shared room, or whole-unit
- Occupancy details: how many days it was booked, listed and blocked
- Price and revenue details: the average daily rate and the total estimated annual revenue
- The location of the listing: city, zipcode, neighbourhood, and latitude and longitude coordinates
But I also had access to other information about each listing, including:
- Unit details: the number of bedrooms and bathrooms, and the maximum number of guests
- Rental policies: the cancellation policy and security deposit, the cleaning fee, check-in and check-out times, and the like
- Other details: the listing URL, the number of photos included in the listing, etc.
I imported this data into ArcGIS, and turned each listing into a point on the map, using its latitude and longitude coordinates. Here’s the distribution of all points across the New York metropolitan region (New York City is shown in dark grey):
Because of how many points there are, it’s not easy to get a good sense of their distribution, but a large majority of listings across the region are concentrated in a very small space. Out of the 110,478 “active” listings:
- 91,811 (83.1%) are in New York City
- 4,126 (3.7%) of the listings are on the eastern part of Long Island, in and around the Hamptons, where there are large numbers of vacation rentals in the summertime
- 94,786 (85.8%) of the listings are in the area shown in my maps in my previous blogpost (Manhattan, Northern Brooklyn, parts of Queens and the New Jersey side of the Hudson River)
There’s a certain amount of uncertainty in these numbers, because they were captured at different points in time, and the totals here are probably closer to a maximal than a minimal estimate of Airbnb activity, since it’s harder to know when to disqualify an old listing than when to start counting a new one.
Estimating Airbnb activity across the region
In order to get a better sense of the distribution of listings across the region, and in order to perform the subsequent analysis with data from the census and the American Community Survey, I aggregated the listings at the census-tract scale (census tracts are roughly equivalent to small neighbourhoods, with approximately 4,000 residents). I did this using a point-to-polygon spatial join operation in ArcGIS, identifying for each of the roughly 140,000 listings the census tract it lies within:
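Under the hood, a spatial join like this reduces to a point-in-polygon test for each listing. Here’s a self-contained sketch of the standard ray-casting version of that test (ArcGIS’s actual implementation is, of course, more sophisticated):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray extending right from the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def assign_tract(point, tracts):
    """tracts: {tract_id: polygon vertex list}; returns the containing tract or None."""
    for tract_id, poly in tracts.items():
        if point_in_polygon(point[0], point[1], poly):
            return tract_id
    return None
```

In practice one would use a spatial index rather than checking every tract, but the classification logic is the same.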
This worked well, but there were some problems that needed to be sorted out, because the latitude and longitude coordinates taken from listings on Airbnb’s website are randomized by up to 150 m, in order to protect users’ anonymity. Generally this randomization isn’t a problem, because most points are simply shifted a little bit within the same census tract, and for every point that gets randomly shifted across the line from census tract A to B, there’s probably another point that gets randomly shifted from B to A. For aggregate-level analysis of the kind I’m doing, this is acceptable random error. But occasionally points got shifted into census tracts that don’t have any population in them. For example, here is a detail view of the area around Central Park in Manhattan:
There are a handful of points in the middle of the park, which doesn’t make sense. Similarly, there are some points which ended up in census tracts with fewer than 50 total units of housing according to the American Community Survey (e.g. prisons and university dormitories which have a lot of people living there but close to no owner- or renter-occupied housing). So I designed a method for using information about the distribution of housing units on a block-by-block scale to probabilistically estimate which census tract a given listing is likely to have been drawn from. (This is the subject of a forthcoming GIS methods paper.)
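The full method is in the forthcoming paper, but the core idea can be sketched as follows: weight each candidate census tract near a randomized point by the number of housing units it contains, so a point that landed in a zero-housing tract (like Central Park) gets reassigned probabilistically to its populated neighbours. This is my simplified reconstruction, not the paper’s exact procedure:

```python
def tract_probabilities(candidate_units):
    """Probability that a randomized point was drawn from each candidate tract,
    weighting tracts by their housing-unit counts near the point.

    candidate_units: {tract_id: housing units within the ~150 m buffer}
    """
    total = sum(candidate_units.values())
    if total == 0:
        return {}
    return {tract: units / total for tract, units in candidate_units.items()}

# A point that landed in Central Park ("A", zero housing units) is far more
# likely to have been drawn from the well-housed tract "B" than from "C"
probs = tract_probabilities({"A": 0, "B": 300, "C": 100})
```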
Once I sorted these details out, I was able to aggregate all the points across census tracts. For the subsequent analysis, I also isolated various subsets of the listings. In total, I recorded the following subsets: (The main number is with an “active” cut-off of Sep. 2015. In brackets afterwards are pseudo-confidence-intervals with a Jan. 2016 cut-off on the low end and no cut-off on the high end.)
- 110,478 (95,870 – 141,657) active listings
- 59,668 (51,224 – 76,531) active whole-unit listings
- 21,345 (19,700 – 21,482) “full-time” whole-unit listings, comprising 14,813 (14,048 – 14,829) listings that were booked more than 60 days in the last year plus 6,532 (5,652 – 6,653) listings where less than a year’s worth of data was available that had the same proportion of bookings
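The “full-time” classification above, including the proration for listings with less than a year’s worth of data, can be sketched like this (a minimal version of the rule, not my exact pipeline):

```python
def is_full_time(days_booked, days_observed, threshold=60, year=365):
    """Classify a whole-unit listing as 'full-time'.

    Listings observed for a full year use the raw 60-day threshold;
    listings with less than a year of data are judged on the same
    *proportion* of bookings (60/365) over the days actually observed.
    """
    if days_observed >= year:
        return days_booked > threshold
    return days_booked / days_observed > threshold / year
```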
Only 70% of the listings in the NYC region generated any revenue at all last year—a proportion which is more or less the same for whole-unit listings and private rooms. Also, slightly more than half of the total listings were for whole units. The distribution of all listings was shown in Figure 1 of my blogpost yesterday:
The subsequent maps all used the whole-unit listings booked for more than 60 days (which I called “full-time”).
What counts as a “full-time” listing?
One of Airbnb CEO Brian Chesky’s specific complaints about my analysis was that two months of occupancy doesn’t equal “full time”. Is he right? Why did I choose a 60-day cut off? What is “full time”, anyway? This is actually a tough question to answer, and I’m not yet sure that I’ve got the right answer, but here’s why I set the threshold where I did.
To begin with, there are different reasons you might be interested in defining “full-time” Airbnb occupancy. For instance, if you’re thinking about becoming an Airbnb host, you’d probably want to feel confident that your unit would be rented enough of the time to justify not doing something else with it (such as renting it with a standard 12-month lease, or selling it to someone else). But my reason is because I’m trying to assess Airbnb’s impact on the long-term residential rental housing market. A frequent accusation levelled against the service is that it is effectively encouraging the conversion of apartments into hotels, and I want to see if there’s evidence for this accusation. So I’m looking for a threshold which does a good job of separating units which are rented on Airbnb but probably still have a long-term tenant living in them from units which are rented on Airbnb enough that they probably don’t have a long-term tenant living in them.
The metric I am using is the number of days per year that a unit is occupied. And, to be clear, there’s no single threshold that will accurately classify every case. There are probably people who travel extremely frequently, and are able to keep a unit as their primary residence while still renting it on Airbnb for 200 days a year. And there are probably people who listed their unit year-round but set too high a price or are in an area with insufficient demand, and it only rented 25 days in total. Still, if the threshold is too low, we will get lots of false positives—for example by counting as “full-time” an apartment which was on Airbnb for a few weeks after one long-term tenant moved out and before another moved in, or an apartment which the long-term inhabitant puts on Airbnb during periods of occasional travel. On the other hand, if we set the threshold too high, we will get lots of false negatives, and end up underestimating the impact Airbnb is having on the rental market.
There’s no hard and fast rule here, but the threshold should probably be more than one month, since a one-month gap between long-term tenants isn’t uncommon. (Although it’s less common in New York, where turnaround times of two or even one week are standard, than in other cities.) A plausible high end for the threshold might be 100 days, because that is the number of days a unit would be rented if its permanent occupant managed to successfully rent it on Airbnb every single weekend of the year.
We also have to consider that almost no apartment has a 100% occupancy rate. Even if you have a competitively priced listing in a high-demand neighbourhood, sometimes you’ll get a Friday-Monday booking followed by a Wednesday-Sunday booking, and you won’t find anyone to rent your apartment for the Monday and Tuesday nights in between. In fact, according to my data and to Inside Airbnb—the gold standard third-party watchdog of Airbnb’s activity—the occupancy rate for frequently rented, whole-unit listings is a little more than 50%. This means, for instance, that a listing occupied 45 days a year was actually available on the website (and thus likely not occupied by a primary resident) for roughly 90 days.
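Put as a formula: if the typical occupancy rate is about 50%, booked days understate availability by roughly a factor of two. A one-line sketch:

```python
def implied_available_days(days_occupied, occupancy_rate=0.5):
    """Estimate how many days a listing was *available*, given its booked days
    and an assumed occupancy rate (~50% per my data and Inside Airbnb's)."""
    return days_occupied / occupancy_rate
```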
Inside Airbnb sets 60 days a year as its “frequently booked” threshold. As the site puts it, “Entire homes or apartments highly available and rented frequently year-round to tourists, probably don’t have the owner present, are illegal, and more importantly, are displacing New Yorkers.” So I opted to use the same standard, although I’m still not certain that’s the best trade-off between false negatives and false positives.
Here’s what happens to the estimated number of full-time listings as the threshold increases:
It’s a decaying curve: the values drop off quite quickly at first, and then the rate of decrease slows down. In other words, the overall estimate of full-time listings is relatively volatile in the lower portion of the threshold’s range—roughly for thresholds under 120 days. At the same time, I experimented with different thresholds and didn’t find that the basic findings of the study changed very much. The vulnerability index I created looks more or less the same with an estimated 20,000 full-time housing units lost to Airbnb (at a 60-day threshold) or with an estimated 12,000 units lost (at a 120-day threshold).
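The sensitivity sweep behind this curve is straightforward to reproduce; here’s a toy version with made-up booking counts:

```python
def counts_by_threshold(days_booked, thresholds):
    """How many listings count as 'full-time' at each candidate threshold."""
    return {t: sum(d > t for d in days_booked) for t in thresholds}

# Hypothetical booked-day counts for a handful of listings
sample = [10, 30, 65, 90, 150, 200, 300]
curve = counts_by_threshold(sample, [30, 60, 120, 180])
```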
As a sidenote, Inside Airbnb’s overall estimates of Airbnb usage in New York are quite a bit smaller than mine. It’s not quite an apples-to-apples comparison, because their data was compiled in December 2016, and mine was in September 2016, but they estimate 40,227 active listings in New York City, 11,232 of which are full-time, whole-unit listings. My data is for the entire New York metropolitan region, but my equivalent figures for New York City are 91,811 active listings and 17,985 full-time, whole-unit listings. This is a big difference, and I believe it is mostly because Inside Airbnb has a much more aggressive threshold for determining which listings are in active use: there needs to have been a user review left for the listing in the last six months. If I move my “last-active date” threshold from September 2015 to January 2016, my numbers start to look closer to theirs; I estimate 78,798 active listings and 16,338 full-time, whole-unit listings. If I move my threshold to March 2016 (so roughly the same 6-month window that Inside Airbnb uses), I estimate 64,791 active listings and 14,903 full-time, whole-unit listings.
I think Inside Airbnb’s more aggressive filtering almost certainly gives a better “snapshot” of the current state of Airbnb usage in the city, but my tentative opinion is that aggregating data over a year the way I have done gives a better account of Airbnb’s impact on housing stock in the medium-term, particularly taking into account seasonal variation. This is still an open question in my mind, though.
The result of all this is that there’s a lot of uncertainty in interpreting Airbnb rental data, even when the underlying data is relatively clear. And it’s certainly possible that I’m setting my thresholds too aggressively. At the same time, for the purposes of estimating Airbnb’s impact on rental markets, I believe my assumptions remain quite conservative, for two reasons. First, I’m using the actual number of days a listing is occupied, instead of the number of days it is available to be occupied. The larger the number of days a listing is available on Airbnb, even if it isn’t rented each of those days, the more likely it becomes that the apartment isn’t being lived in by traditional long-term tenants. As an example, nearly 30,000 whole-unit listings in New York are available 240 days—or approximately 8 months—per year. This is how the estimate changes with the threshold:
Second, I totally exclude private-room rentals. In New York it is very common for renters to split apartments with roommates. And so it is likely that many of the frequently-rented private room rentals in the city are coming at the expense of “roommate wanted” ads on Craigslist, and thus directly reducing available long-term rental housing in the city. For reference, I counted 16,239 private-room listings rented for 60 days or more.
On the other hand, I don’t think any of this uncertainty actually matters very much except for the very specific task of estimating how much permanent rental housing has been lost to Airbnb. This is an important issue, so it’s good to have rigorous and defensible estimates. But for the issues relating to land economics and gentrification—above all the rent gap—my analysis turns out very similarly even with a wide range of different underlying estimates.
How to measure Airbnb’s impact on the housing market?
Assuming we have good estimates for the number of full-time, whole-unit rentals on Airbnb (which is a big assumption, given the discussion in the previous section!), measuring Airbnb’s impact on the housing market is very straightforward.
I simply took the counts of renter-occupied housing by census tract from the American Community Survey (table B25003, 2015 five-year estimates) and added my estimates of full-time Airbnb-occupied units to get the total number of existing rental housing plus “could be rental if it weren’t on Airbnb” housing. I divided the Airbnb rental estimates by this amount to get the percentage of rental units estimated to be full-time on Airbnb. This informed Figures 2 and 3 from my blogpost:
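In code, the normalization is a one-liner; the only subtlety is that the full-time Airbnb units are added to the denominator as well, since they represent housing that could otherwise be on the long-term rental market:

```python
def airbnb_share_of_rental_stock(full_time_airbnb, renter_occupied):
    """Full-time Airbnb units as a share of actual-plus-potential rental stock
    (renter-occupied units from ACS table B25003 plus the Airbnb estimate)."""
    return full_time_airbnb / (renter_occupied + full_time_airbnb)
```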
How to measure Airbnb’s rent gap?
The idea of Neil Smith’s concept of the rent gap is that over time, as a neighbourhood’s properties deteriorate, the actual revenue landowners are able to earn from their properties also tends to decline, but the possible revenue (were the properties to be redeveloped or renovated) tends to increase. If this “gap” between actual and potential revenue gets large enough, eventually it becomes likely that redevelopment capital arrives to take advantage of the profit-making opportunity. The result is renovations, new construction, displacement of existing tenants, and the arrival of more affluent tenants and homeowners—gentrification.
My idea about Airbnb and gentrification is that short-term rentals have created a new form of rent gap: one driven by sharply rising potential revenue, rather than gradually falling actual revenue. I have tried to measure this gap in two ways, one aimed at estimating how much new housing revenue has been generated thanks to Airbnb (i.e. where the rent gap was created and then filled), and one aimed at identifying areas where new potential profit-making opportunities are still quite prevalent (i.e. where the rent gap is growing and not yet filled).
To estimate the “filled” rent gap, I compared the amount of revenue generated in each census tract by Airbnb rentals with the total rental and homeowner costs otherwise incurred in these areas. Quantitative measurements of gentrification often want to look at capital expenditures (for renovations or new developments) as an index of new investments into the housing market. But, as I described in the previous blogpost, addressing the Airbnb rent gap typically requires little or no capital expenditures. So I came to the conclusion that the best way to measure the already-filled rent gap was simply to look at operating revenues from housing (also known as annual land rents).
The specific measures of ongoing housing revenue I used were “aggregate gross rent” (2015 ACS five-year estimates, table B25065) and “aggregate selected monthly owner costs” (2015 ACS five-year estimates, table B25089). Gross rent is the sum of the contract rent and any utility payments not included in the contract rent (in order to increase comparability between cases where utilities are included in the rent and where they are not). “Selected monthly owner costs” is meant to be the same bundle of expenses for homeowners (i.e. substituting mortgage payments for rent). In both cases, I used the aggregate amount of these payments made in a census tract, to approximate the total volume of routine money (i.e. not including capital expenditures) that flows through the housing market.
In the same way that we want to normalize our estimates of Airbnb’s impacts on rental housing stock by considering percentages of total housing stock instead of just the raw counts of full-time Airbnb rentals, we want to normalize our estimates of Airbnb’s impacts on housing revenue by considering percentages of total revenue streams instead of just the raw amounts of money Airbnb hosts earn. Figure 4 from the blogpost showed the result of this analysis—the areas where the Airbnb-generated rent gap was largest, but has now been filled:
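As a sketch, the per-tract calculation looks like this. Note that including the Airbnb revenue in the denominator is my assumption here, mirroring the housing-stock normalization above:

```python
def filled_rent_gap_share(airbnb_revenue, aggregate_gross_rent, aggregate_owner_costs):
    """Airbnb host revenue as a share of a tract's total routine housing revenue
    (ACS aggregate gross rent, B25065, plus aggregate selected owner costs, B25089)."""
    total = aggregate_gross_rent + aggregate_owner_costs + airbnb_revenue
    return airbnb_revenue / total
```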
To measure the “unfilled” rent gap, I compared Airbnb host revenues with what those hosts likely could have earned on the traditional rental market. The intuition here is that, in the absence of strong policies to prevent property owners from converting long-term rentals to short-term rentals, a rough revenue equilibrium should emerge between the two. If you are a landlord earning $2000/month in rent for an apartment, but you could be earning $4000/month if you put that same apartment on Airbnb, you will have a strong incentive to get rid of your current tenant and do just that. This, of course, is the rent gap. If enough landlords take advantage of these opportunities, we should expect 12-month rents to rise somewhat (in response to demand-side competition for a shrinking stock of rental units) and Airbnb rates to fall somewhat (in response to supply-side competition for a relatively fixed tourist demand). Some time later, we might find that median rents have risen to $2400 and average Airbnb revenues have fallen to $2800. Now the rent gap is much smaller, and there will be less pressure on landlords to convert long-term rentals to short-term rentals.
In order to measure the size of this outstanding rent gap, I compared the average revenue earned by full-time, whole-unit Airbnb listings in a given census tract with the median contract rent in that tract (2015 ACS 5-year estimates, table B25058). “Contract rent” is the direct revenue a landlord receives every month from a tenant, and is the most comparable to Airbnb revenues of the several different measures of rent payments that the ACS provides. The results of this comparison were shown in Figure 5 of my blogpost:
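The comparison itself is simple; the one conversion needed is from annual Airbnb revenue to a monthly figure so that it lines up with monthly contract rent (dividing by twelve is my simplification here):

```python
def open_rent_gap_ratio(avg_annual_airbnb_revenue, median_monthly_contract_rent):
    """Ratio of average monthly full-time Airbnb revenue to median contract rent
    (ACS table B25058). Values well above 1 suggest a large unfilled rent gap."""
    return (avg_annual_airbnb_revenue / 12) / median_monthly_contract_rent
```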
What is the vulnerability index?
As described above, I calculated two measurements of the Airbnb-induced rent gap in New York—one for the rent gap which has already been plugged, and one for the rent gap which is still open—and I wanted to combine them in a single vulnerability index.
I started by using cluster analysis on each of these maps to filter out some of the noise and identify consistent patterns across space. The particular kind of cluster analysis I used was the Anselin Local Moran’s I statistic, which analyzes a spatial distribution of features to identify statistically significant hot-spots (clusters of high values near other high values), cold-spots (clusters of low values near other low values), and outliers (high spots near mostly low spots, and vice versa). For instance, here’s what the cluster analysis found for my measure of the open rent gap, the ratio of average whole-unit, full-time Airbnb property revenue to median rent:
The pink areas are “high-high clusters” of census tracts that all have high ratios of Airbnb-revenue-to-median-rent, which means these are the areas most at risk of future Airbnb-induced gentrification. The red areas are also census tracts with high ratios, but they are isolated from other such areas, so they don’t represent big vulnerabilities. (Although investigating what is going on in these outliers could be interesting for future research.)
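For readers who want the mechanics, here is a bare-bones, pure-Python version of the Local Moran’s I statistic with row-standardized weights. It omits the permutation-based significance testing that the actual hot-spot classification depends on (in practice I used ArcGIS; PySAL’s `esda` package is an open alternative):

```python
def local_morans_i(values, neighbors):
    """Anselin Local Moran's I with row-standardized weights (a sketch).

    values: one number per census tract
    neighbors: {tract index: list of adjacent tract indices}
    Positive scores indicate clustering (high-high or low-low);
    negative scores indicate spatial outliers.
    """
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    scores = []
    for i, v in enumerate(values):
        nbrs = neighbors[i]
        if not nbrs or m2 == 0:
            scores.append(0.0)
            continue
        w = 1 / len(nbrs)  # row-standardized weights
        lag = sum(w * (values[j] - mean) for j in nbrs)
        scores.append((v - mean) / m2 * lag)
    return scores

# Toy example: tracts 0-2 have high ratios, tracts 3-5 low ones
values = [5, 5, 5, 1, 1, 1]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
scores = local_morans_i(values, neighbors)
# Positive scores in both groups: a high-high and a low-low cluster
```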
I performed the same cluster analysis on the measure of the already-filled rent gap (Airbnb revenue as a percentage of overall housing revenue), and similarly extracted the high-high clusters. I then combined these two distributions into a single map, noting the areas of overlap, to produce the vulnerability index:
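The overlay step itself reduces to set operations on tract IDs; a minimal sketch (the tract IDs here are made up):

```python
def vulnerability_classes(open_gap_tracts, filled_gap_tracts):
    """Split high-high tracts into the three classes shown on the combined map:
    both rent gaps, open gap only, and filled gap only."""
    both = open_gap_tracts & filled_gap_tracts
    return {
        "both": both,
        "open_only": open_gap_tracts - both,
        "filled_only": filled_gap_tracts - both,
    }

classes = vulnerability_classes({"t1", "t2", "t3"}, {"t2", "t3", "t4"})
```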
For the time being I’ve left this map in its relatively “raw” form, with individual census tracts highlighted or not, but I’ve also considered consolidating this information at the neighbourhood scale. (For instance, showing the entire Lower East Side as purple, and all of Bed-Stuy as red.) This would sacrifice some of the map’s precision, but increase its readability. Such experiments are still to come….
The purpose of this post is to make explicit all the assumptions, approximations, and decisions that went into generating my analysis of Airbnb and gentrification in New York. It should be clear, at a minimum, that there is a large amount of uncertainty in this sort of analysis, for two reasons. First, since Airbnb doesn’t release any public data about its activities, researchers and policymakers alike are forced to rely on third-party estimates, which are inherently less accurate than official data would be. Second, even if completely accurate data were available, there are some basic epistemological uncertainties—for instance, how many days does an apartment need to be rented on Airbnb before it is no longer in the regular long-term rental stock?
For both of these reasons, I would be very grateful for constructive criticism about any of the above, in order to improve the estimates and get us collectively closer to an evidence-based discussion about the impact of short-term rentals on our cities. The paper that this material is taken from will be submitted for peer review shortly, which will provide another opportunity for criticism. Once the methodology is rock-solid, I plan to apply it to other cities, and to develop further tools for comparing cities in an easy-to-understand and action-oriented way.