Obviously, I think the paper itself is worth reading, but there were also a lot of higher-level / meta aspects of the project that I thought were notable.
The last time Nathan and I collaborated on something, we managed to go from idea to publication in just under five months, which (to us) was pretty remarkable. However, thanks to a series of very fortunate events, this paper went from idea to publication in a little over four months (!!).
We came up with the idea at a work party, and we were aided by the fact that Stanford’s winter break had just begun. The break allowed us to just focus on this one project for a while, constantly working on it, thinking about it, and texting each other about it. This kept the project constantly moving, but (I think) more importantly, it also gave us a lot of time to let the idea marinate and iterate quickly with no task switching. For weeks, we got to just think deeply about one problem and then try to code it up. By the time the winter break was over, we had a pretty good simulation and some preliminary results. Then Nathan had a baby (!!), which limited how much he could work but also meant that when he was working, this was the only project he needed to work on. Finally, like the last project, we were lucky to find a journal that agreed to expedite our review, and after addressing comments from four thorough reviewers, the rest is history.
After the media storm settled down, we were chatting about the project and how it encapsulated all the best parts of academia:
- It’s an important project with a clear desired outcome (in this case, changing policies to increase vaccination rates).
- It involved working with nice people – not just smart people (there are plenty of those in academia) but people you have compatible collaborative styles with and who are fun to work with.
- It was an exciting project. Literally something you would want to stay up late to work on – evidenced by our many 10pm Zoom calls (after the kids went to bed).
- There was no administrative burden here – the project was unfunded so no time was spent applying for grants, it was a simulation using public data so no IRB was necessary, etc. Most research projects come with these administrative tasks (“the cost of doing research”), and this one had none of them.
- It provided opportunities to learn new things. Because the team’s expertise didn’t overlap much, there was a lot of opportunity to learn about things outside your own area.
- It was challenging but solvable. It had all sorts of technical and computational challenges. For example, we wanted to make sure we retained each individual simulation and could query them quickly (for debugging, calculating quantities of interest, or estimating complication rates), but retaining all the simulations would have taken something like 2TB of space in the early version of the code, and naively querying the data would have been impossible on our local machines. Through a mix of optimization (for example, only saving the required columns), tweaks (recasting columns as integers when possible), new technologies (saving as parquet files and querying using duckdb), and sometimes brute force (using the Stanford cluster), we were able to turn a pretty intractable computational problem into a manageable inconvenience. A sketch of that storage-and-query pattern is below, after this list.
- The journals and editors were on top of their game. We asked a few journals for expedited review, and most of them gave us the courtesy of a prompt no, which was incredibly helpful! It saved us time, it saved the potential reviewers time, it saved the editors time – it’s just good for everybody. No manuscripts sitting on a desk waiting for review.
- The reviews were constructive. No vague criticisms or unhelpful remarks – all four reviewers provided thoughtful, thorough critiques and, most importantly, concrete suggestions for how to address them.
- It was the only project. In academia, you basically never get to focus on a single problem or task for weeks at a time, but when you do, it reminds you how much more efficiently you can address a problem when you have the time to think about it deeply.
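For the curious, here is a minimal sketch of the kind of storage-and-query approach described above: keep only the needed columns, downcast them to integers, write one parquet file per simulation, and let duckdb query the files directly from disk. The column names, file layout, and function are hypothetical placeholders, not the actual schema or code from the paper.

```python
# Minimal sketch of the parquet + duckdb pattern described above.
# Column names and file layout are hypothetical, not the paper's schema.
import os

import duckdb
import pandas as pd


def save_simulation(df: pd.DataFrame, sim_id: int, out_dir: str = "sims") -> None:
    """Keep only needed columns, downcast to integers, write one parquet file per simulation."""
    os.makedirs(out_dir, exist_ok=True)
    keep = ["person_id", "age", "infected", "complication"]  # hypothetical columns
    slim = df[keep].copy()
    for col in keep:
        # Recast to the smallest integer type that fits, shrinking on-disk size.
        slim[col] = pd.to_numeric(slim[col], downcast="integer")
    slim["sim_id"] = sim_id
    slim.to_parquet(os.path.join(out_dir, f"sim_{sim_id:05d}.parquet"), index=False)


# duckdb can query every parquet file in the directory without loading them
# all into memory, so summaries across thousands of simulations stay fast.
complication_rates = duckdb.sql(
    """
    SELECT sim_id, AVG(complication) AS complication_rate
    FROM 'sims/*.parquet'
    GROUP BY sim_id
    ORDER BY sim_id
    """
).df()
```

The point of the pattern is that the expensive part (the raw per-person simulation output) never has to fit in memory at once; the parquet files act as a cheap columnar archive, and duckdb handles the scanning and aggregation.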
Not every project will check every box, but I think it’s a useful set of criteria when trying to decide on your next project or prioritize your current ones.
Anyways, it’s been quite the experience. The paper is here, the code is here, and reproducible data is here. The paper managed to attract quite a bit of press – I think this may be the second highest Altmetric score of any project I’ve been involved with – but some of my favorites are:
- The Washington Post
- Newsweek
- Popular Science
- Reuters
- Wired
- Financial Times
- CNN
- CBS
- The Economist
- Scientific American
- We were also on the front page of both the Los Angeles Times website and the printed version (above the fold too!)
- An editorial from The Wall Street Journal
- And my personal favorite, the study was a Wait Wait Don’t Tell Me question
We’re now trying to get the opportunity to present the paper in front of relevant health-related Congressional committees (they aren’t replying to our emails), so if anybody has a connection there, please get in touch.