Is your job in another state?

National unemployment is high, but business is booming in some states. Vermont needs teachers. Nevada needs bartenders. North Dakota needs truck drivers and just about everything else.

Despite these opportunities, Americans aren’t moving much and unemployment remains high. One reason for this is that moving can be expensive and disruptive, especially for those with families and roots in their communities. But another reason may simply be a lack of awareness of the opportunities in other states. That’s why I have made a new website: Enter your job skills, and the website will provide an interactive map showing where you are most in demand.

States are ranked by their ratio of job postings to unemployment. This is a pretty good metric, but it isn’t perfect. To understand why, imagine two states with the same posting/unemployed ratio for a particular job. If you are trained for the job, you might have better luck applying in a state where the unemployed population is either untrained or unwilling to take that type of job, even though the two states have the same ratio. There may also be differences across states in how heavily employers use online job postings. Still, I think my results have reasonably good face validity, and the results for many jobs are close to what you would expect. If you average across jobs, you get something pretty close to an independently created measure called the “Opportunity Index”.
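The ranking itself boils down to one division per state. Here is a minimal sketch with made-up numbers (the real site computes this per occupation, from posting counts and BLS unemployment figures):

```python
import pandas as pd

# Hypothetical data for one occupation; state rows and counts are invented.
df = pd.DataFrame({
    "state": ["VT", "NV", "ND"],
    "postings": [120, 340, 95],        # job postings for this occupation
    "unemployed": [4000, 21000, 1500], # unemployed persons in the state
})

# Rank states by postings per unemployed person, best first.
df["ratio"] = df["postings"] / df["unemployed"]
ranked = df.sort_values("ratio", ascending=False).reset_index(drop=True)
```

With these toy numbers, North Dakota comes out on top despite having the fewest postings, because its unemployed pool is so small.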

Job posting data was collected using the API. Unemployment numbers came from the Bureau of Labor Statistics. For more information about how this works, see my GitHub repository.

My Insight Data Science Project

I just finished an excellent fellowship at Insight Data Science. During our first few weeks there, each of us designed a website to demo at Insight’s sponsor companies. My website is called DealSpotter.

This all started earlier this summer when I went to Craigslist to find a used car. There were lots of good deals on Craigslist, but it took way too long to find them. When I searched for a particular model, I got hundreds of hits, but only a few of the hits included the mileage in the posting title. Since I needed the mileage to know whether I was getting a good deal, I had to click on each of the hundreds of listings. Pretty time-consuming.


A larger problem was that even if I clicked on every post, I didn’t always have a sense for what was the best deal. For example, if I had $3,000, was it better to spend it on a 2001 model with 100K miles, or a 2003 model with 140K miles?

DealSpotter is a proof-of-concept website that shows how these problems could be solved. DealSpotter grabs all the Craigslist car postings in the San Francisco Bay Area and automatically shows you the best deals. It knows how much each car should be priced, based on the model, year, and mileage. Cars that are priced lower than DealSpotter expects them to be are shown at the top of the list. DealSpotter also presents the same information in a visual format called “Graph” mode, where the best deals are highlighted in blue.


To determine how much each car should be priced, DealSpotter doesn’t use Kelley Blue Book, which tends to overprice cars, especially newer models. Instead, DealSpotter builds its own pricing model based on the actual Craigslist market. In particular, it uses a Random Forest pricing model because, unlike smooth parametric models, Random Forests are able to detect sharp discontinuities in prices that may be caused by factors such as manufacturer design overhauls.
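A minimal version of this residual-based approach can be sketched as follows. The data here is synthetic and the features are simplified (the real model is trained on scraped Craigslist postings), but the idea is the same: fit a Random Forest to predict price from year and mileage, then surface the listings priced furthest below their prediction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
year = rng.integers(1998, 2014, n)
mileage = rng.uniform(20_000, 180_000, n)

# Synthetic "fair" price: newer, lower-mileage cars cost more.
fair = 500 * (year - 1995) - 0.03 * mileage + 4000
price = fair + rng.normal(0, 500, n)
price[:10] -= 3000  # plant a few underpriced listings

X = np.column_stack([year, mileage])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, price)

# Negative residual = cheaper than the model expects = candidate deal.
residual = price - model.predict(X)
best_deals = np.argsort(residual)[:10]
```

On this synthetic market, the ten most negative residuals recover most of the planted bargains; DealSpotter sorts its listings by exactly this kind of residual.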

By selecting cars that are priced much lower than would be expected based on year and mileage, DealSpotter picks out some incredible deals, as well as the occasional clunker with an accident history. A more elaborate service might find a way to filter for accident history, but for now DealSpotter remains useful because it greatly narrows down the scope of the search for users. Once users are dealing with a handful of posts, they can easily inspect the text of the ad to determine which cars are good deals, and which have a history of accidents.

If you are in the San Francisco Bay Area and are looking for a used car, you should definitely check out my website right now. Many cars are underpriced by thousands of dollars. In the future though, I won’t be updating the listings, which will soon become outdated.

Craigslist has a history of suing other services that try to improve on how their data is presented. Craigslist’s litigiousness is understandable — they curated the data after all. But it apparently has also stifled innovation. Craigslist users spend many hours of their time clicking on blue links because the website’s search and UI tools are still stuck in the ’90s. Users are also at higher risk of scams because there is no reputation system. Normally, issues like this would put a company out of business, but a combination of lawsuits and network lock-in effects have kept Craigslist at the top of classifieds services. Hopefully, we will one day get a better Craigslist. In the meantime, if you want to find an incredible deal on a car while the postings are still fresh, you should do so now.

FAQ for my new webpage

I made a new webpage.

Here’s an FAQ for it.

Q. What is this and why did you make it?
A. There is a surprising amount of consensus among economists on many issues. Progressive consumption taxes and carbon taxes are good. Personal income taxes and corporate taxes are bad. Congestion pricing is good. The mortgage deduction is bad. Marijuana should be legalized. These positions are endorsed by almost every economist, both from the left and the right, but politicians in Washington tend to support the opposite.

The IGM Forum surveys an ideologically diverse group of top economists on these and other issues. I wish more people knew about their website. My new webpage collects responses from the IGM Forum and allows users to compare them to their own responses.

Q. Why is the economist closest to me on the graph different from the economist who actually is closest to me, according to the text below the graph?
A. Each economist can be thought of as a point in a massive 105-dimensional space, and unfortunately it’s only possible to display 2 dimensions. While you may appear close to an economist on those 2 dimensions, you may be far apart on the 103 other dimensions that you can’t see.
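A toy illustration of this effect, using 3 dimensions as a stand-in for the 105 (all coordinates are made up):

```python
import numpy as np

# A user and two hypothetical economists as points in a 3-D opinion space.
user = np.array([0.0, 0.0, 0.0])
econ_a = np.array([0.1, 0.1, 5.0])  # close in the first 2 dims, far in the 3rd
econ_b = np.array([1.0, 1.0, 0.0])  # farther in the first 2 dims, close overall

def dist(x, y, dims=None):
    """Euclidean distance, optionally restricted to a subset of dimensions."""
    d = x - y if dims is None else x[dims] - y[dims]
    return np.linalg.norm(d)

# Nearest economist on the 2-D plot vs. nearest in the full space.
closest_2d = "A" if dist(user, econ_a, [0, 1]) < dist(user, econ_b, [0, 1]) else "B"
closest_full = "A" if dist(user, econ_a) < dist(user, econ_b) else "B"
```

Here economist A looks closest on the 2-D plot while economist B is actually closest overall, which is exactly the discrepancy the question describes.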

Q. I don’t have the expertise to answer some of these questions. Should I leave them blank or should I click “Neutral”?
A. You should leave them blank so that they do not enter the calculations. “Neutral” indicates that you have a real opinion somewhere between “Agree” and “Disagree”.

Q. Every question I answer makes me move very far on the graph. This seems unreliable.
A. Do not take your graph position seriously until you have answered at least 20 questions. Your position will gradually converge as you answer more.

Q. Responses that “strongly deviate from expert consensus” are highlighted in yellow. What does that mean?
A. It means that your response deviated by more than two standard deviations from the IGM panel average.
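In code, the flagging rule is a one-line z-score check (the panel numbers below are toy values, not real IGM responses):

```python
import numpy as np

# Toy panel responses on a single question, on a numeric agree/disagree scale.
panel = np.array([1, 2, 2, 3, 2, 1, 2])
user = 5.0

# Highlight the user's response if it is more than two standard
# deviations from the panel average.
z = (user - panel.mean()) / panel.std()
flagged = abs(z) > 2
```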

Q. I have answered only one question and it has put me in an extreme part of the graph. However my response does not get highlighted as a deviating response. Is something wrong?
A. To calculate your position in the graph, I use your past responses to make assumptions about your future responses. If you have only answered a few questions, those assumptions will be far from accurate.

Q. I just answered a question the exact same way as Economist X. But my position on the graph moved away from him/her. Why?
A. This is a natural consequence of projecting multiple dimensions onto two dimensions. To see why, take a cube-shaped object and trace your finger along the edges from one corner to the opposite corner. Viewed from some angles, your finger might sometimes appear to move away from the destination corner.

Q. I’m finding some other unexpected behavior not explained by the previous two questions.
A. There is some built-in bias resulting from mean-centering and a dummy question I added so that movement occurs after the first response. This bias will affect your position when only a few questions have been answered but it will become negligible when many questions have been answered.

Q. Why were some IGM panel economists excluded from your webpage?
A. Economists who answered fewer than 75% of the questions were excluded.

Q. Can you interpret what the first two principal components represent?
A. I left the axes uninterpreted because I don’t want to oversimplify things: The first two PCs only explain 20% of the variance, and they are biased by the choice of questions. But ok, I’ll bite: The horizontal axis appears to represent the left-right axis in partisan American politics, with strong weights on emotionally charged issues like school vouchers and the minimum wage. But — and I can’t stress this enough — the horizontal axis is not identical to our verbal understanding of the left-wing / right-wing continuum. Our verbal understanding of this concept corresponds to a complicated, bendy, twisty dimension within the 105-dimensional space, and it probably explains about 60% of the variance in responses. The horizontal dimension explains a meager 12% of the variance. This means that some responses that are actually left-wing will correspond to rightward movements and vice-versa. The vertical dimension is even harder to interpret. There are large weights on questions pertaining to bank regulation and monetary policy.
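The machinery behind the axes can be sketched in a few lines. This uses random numbers in place of the real (economists × questions) response matrix, so only the shapes and the procedure are meaningful, not the output values:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in response matrix: rows are economists, columns are the 105 questions.
responses = rng.normal(size=(40, 105))

# Project onto the first two principal components (the plot's two axes).
pca = PCA(n_components=2).fit(responses)
explained = pca.explained_variance_ratio_  # fraction of variance per axis

# Questions with the strongest loadings on PC1 are the ones that push a
# respondent furthest along the horizontal axis (sign is arbitrary in PCA).
leftward = np.argsort(pca.components_[0])[:5]
```

Ranking questions by their loadings on each component is how lists like the ones below can be generated.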

To give you a better sense of the dimensions, here are some questions for which answering “strongly agree” will push you far in one of the directions.

Top leftward questions

(1) Question B: The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy.

(2) Question C: Taking into account all of the economic consequences — including effects on corporate managers’ incentives and on creditors’ expectations of how their claims will be treated in future bankruptcies — the benefits of bailing out GM and Chrysler will end up exceeding the costs.

(3) Question A: Taking into account all of the economic consequences — including the incentives of banks to ensure their own liquidity and solvency in the future — the benefits of bailing out U.S. banks in 2008 will end up exceeding the costs.

(4) Question A: The U.S. government should make further efforts to shrink the size of the country’s largest banks — such as by capping the size of their liabilities or penalizing large banks more heavily through taxes or other means — because the existing regulations do not require the biggest banks to internalize enough of the “too-big-to-fail” risks that they pose.

Top rightward questions

(1) Public school students would receive a higher quality education if they all had the option of taking the government money (local, state, federal) currently being spent on their own education and turning that money into vouchers that they could use towards covering the costs of any private school or public school of their choice (e.g. charter schools).

(2) Question B: Past experience of public spending and political economy suggests that if the government spent more on roads, railways, bridges and airports, many of the projects would have low or negative returns.

(3) Laws that limit the resale of tickets for entertainment and sports events make potential audience members for those events worse off on average.

(4) New technology for fracking natural gas, by lowering energy costs in the United States, will make US industrial firms more cost competitive and thus significantly stimulate the growth of US merchandise exports.

Top upward questions

(1) Even if inflationary pressures rise substantially as a result of quantitative easing and low interest rates, the Federal Reserve has ample tools to rein inflation back in if it chooses to do so.

(2) Taking into account all of the economic consequences — including the incentives of banks to ensure their own liquidity and solvency in the future — the benefits of bailing out U.S. banks in 2008 will end up exceeding the costs.

(3) Even if the third round of quantitative easing that the Fed recently announced increases annual consumer price inflation over the next five years, the increase will be inconsequential.

(4) Despite relabeling concerns, taxing capital income at a permanently lower rate than labor income would result in higher average long-term prosperity, relative to an alternative that generated the same amount of tax revenue by permanently taxing capital and labor income at equal rates instead.

Top downward questions

(1) Public school students would receive a higher quality education if they all had the option of taking the government money (local, state, federal) currently being spent on their own education and turning that money into vouchers that they could use towards covering the costs of any private school or public school of their choice (e.g. charter schools).

(2) Question A: The U.S. government should make further efforts to shrink the size of the country’s largest banks — such as by capping the size of their liabilities or penalizing large banks more heavily through taxes or other means — because the existing regulations do not require the biggest banks to internalize enough of the “too-big-to-fail” risks that they pose.

(3) The former head of the Transportation Security Administration is correct in arguing that randomizing airport “security procedures encountered by passengers (additional upper-torso pat-downs, a thorough bag search, a swab test of carry-ons, etc.), while not subjecting everyone to the full gamut” would make it “much harder for terrorists to learn how to evade security procedures.”

(4) Question C: Unless there is a substantial default by some combination of Greece, Ireland, Italy, Portugal and Spain on their sovereign debt and commercial bank debt, plus credible reforms to prevent excessive borrowing in the future, the euro area is headed for a costly financial meltdown and a prolonged recession.

New model of binocular rivalry

Binocular rivalry is a visual illusion that occurs when the two eyes are presented with incompatible images. Instead of perceiving a mixture of the two images, most people experience alternations in which only one image is visible at a time. Binocular rivalry works best under controlled laboratory conditions with prisms or mirrors, but if you are lucky you might be able to experience it in the figure below. Try crossing your eyes to align the left boxes and right boxes, so that three boxes are observed rather than two. If you can keep your eyes stable, you might perceive alternations between the two different gratings in the middle box. It helps if you first try to merge the “Merge me!” phrase and then, once it is stable, focus on the middle box. If you can’t stabilize your eyes enough, don’t worry. You are not alone.




Binocular rivalry is more than just an interesting illusion: it reflects actual inhibitory competition between neurons in the brain, and therefore provides a rare window into neural dynamics. To help us understand these mechanisms, researchers have developed several models of the phenomenon. Yet surprisingly, all of these models make a big incorrect prediction about a type of stimulus known as “binocular plaids”. You can view some binocular plaids by crossing your eyes on the boxes below, or simply by looking at one of the boxes normally.




As you can see, a plaid is composed of two gratings, a rightward pointing grating and a leftward pointing grating. The big, incorrect prediction made by previous models of rivalry is that the leftward pointing grating should alternate with the rightward pointing grating, just as it would in the traditional rivalry stimuli shown above. This prediction — which follows because the same neural inhibition that creates competition in the first figure must necessarily also create competition in the second figure — is clearly wrong: When viewing the binocular plaid, you probably perceive that the rightward grating remains just as strong as the leftward grating, without any alternations. This failed prediction extends far beyond these toy stimuli. Plaid perception is typically explained by the broad theory of divisive normalization, which also covers a whole host of other inhibitory interactions in cortex. Models of the inhibitory processes in rivalry are thus in tension with models of inhibitory processes that use divisive normalization.
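For reference, divisive normalization in its standard general form (a sketch of the family of models, not the specific equations in our paper) computes each neuron’s output by dividing its driving input by the pooled activity of a surrounding population:

```latex
R_i = \frac{D_i^{\,n}}{\sigma^n + \sum_j D_j^{\,n}}
```

Here $D_i$ is the driving input to neuron $i$, $n$ is an exponent, $\sigma$ sets the contrast gain, and the sum pools over the normalization population. Because the suppression is divisive rather than winner-take-all, both gratings of a plaid can remain visible at reduced contrast, which is what viewers actually perceive.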

Together with my advisor David Heeger, I developed a new model of rivalry that is able to accommodate plaids, and which I hope reconciles models of rivalry with models of normalization. Finding a solution was not as easy as you might think. When we presented the problem to colleagues, everyone immediately had intuitions for how to solve it, but amazingly none of them worked. We found only one solution that worked, and it is one that I later discovered was once proposed by Randolph Blake. The model makes novel predictions that we confirmed with psychophysical tests. If you want to read more about it, you can find the paper below. The Matlab code is available here.
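For intuition about why plaids are such a problem, here is a minimal mutual-inhibition rivalry oscillator of the generic textbook kind (emphatically not our model; the ReLU gain and parameter values are my own choices for illustration). Two units inhibit each other, and a slow adaptation variable on each eventually lets the suppressed unit take over, producing alternating dominance:

```python
import numpy as np

def simulate(T=3000, dt=1.0, I=1.0, w=2.0, g=2.0, tau_r=10.0, tau_a=100.0):
    """Euler-integrate two mutually inhibiting units with slow adaptation."""
    r = np.array([0.6, 0.4])  # firing rates (slightly asymmetric start)
    a = np.zeros(2)           # adaptation states
    dominant = []
    for _ in range(int(T / dt)):
        drive = I - w * r[::-1] - g * a        # input minus cross-inhibition
        r = r + (dt / tau_r) * (-r + np.maximum(drive, 0.0))
        a = a + (dt / tau_a) * (-a + r)
        dominant.append(int(r[1] > r[0]))     # which unit dominates now
    return np.array(dominant)

dom = simulate()
switches = np.sum(np.abs(np.diff(dom)))  # number of perceptual alternations
```

With these parameters the two units trade dominance repeatedly, which is the desired behavior for the classic rivalry stimulus. The trouble is that wiring the same inhibition between a plaid’s two gratings would predict the same alternations there, which viewers do not experience.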

Said CP & Heeger DJ (2013). A model of binocular rivalry and cross-orientation suppression. PLOS Computational Biology.

Canadian funding models for all

The US and Canada have very different systems for funding science. To compare them, I found some of the publicly available data on NIH R01s (USA) and NSERC Individual Discovery Grants (Canada), and plotted them below. Before describing the results, I should say that comparing NIH to NSERC is a bit like comparing apples to oranges, since NSERC is probably closer to the NSF than to the NIH. Nevertheless, the cross-country trends hold up across agencies, and in any case my goal is not to compare countries (as much as I would like to) but to compare funding models.


There are two things to notice about the plots. First, the funding rates are clearly higher at NSERC than at NIH. The catch, of course, is that higher rates mean smaller awards. NSERC typically provides $35,000/year, far less than the big awards from NIH. Canadian scientists love their system, valuing the stability it provides more than the possibility of large awards. Quality of life issues aside, a separate question is: Does the NSERC system produce better science? Or do the high success rates waste too much money on low-quality projects? My feeling is that the NSERC system is much better. High-quality NIH proposals are routinely rejected for arbitrary reasons, and the sink-or-swim culture is directly contributing to bad research practices. We should move toward a higher rate / smaller award system. And for those who see value in large awards, we can still adjust the size of the award based on the quality of the proposal.

The second thing to notice about the plots is the trends over time. At NIH, more so than at NSERC, the decline in success rates is driven by an increase in the number of applicants, not by a decrease in the number of awards. I don’t think the solution is just “more funding”, especially in the current fiscal climate. We have a denominator problem, not a numerator problem. We should fix the system that rewards programs for producing more PhDs than the system can accommodate. I’ll leave it to actual experts to decide how to do this. But as with most public policy questions, a good place to start is to just copy whatever the Canadians are doing.

Eight Lessons from the Reproducibility Crisis

  1. There is a reproducibility crisis in psychology.
  2. Outright fraud is rare. Soft forms of bad practice are the bigger problem.
  3. Most scientists are honest, but soft forms of bad practice emerge through self-deception or lack of awareness.
  4. The problem is worse in medical research, but that is no excuse for psychologists to resist reforms.
  5. Lists of new regulations are fine, but the core issue is that career incentive structures are not always aligned with truth discovery.
  6. Some data outcomes are rewarded more than other data outcomes. This is bad.
  7. Journals have little incentive to change this incentive structure themselves.
  8. But granting agencies can help, by increasing the grant award probability to scientists who submit to good practice journals. Can someone at NIH/NSF please do something about this?

If you have comments, they might already be addressed in my FAQ.