Friday, April 14, 2017

Why the 101 model doesn't work for labor markets

A lot of people have trouble wrapping their heads around the idea that the basic "Econ 101" model - the undifferentiated, single-market supply-and-demand model - doesn't work for labor markets. To some people involved in debates over labor policy, the theory is almost axiomatic - the labor market must be describable in terms of a "labor supply curve" and a "labor demand curve". If you tell them it can't be, it just sort of breaks their brain. How could there not be a labor demand curve? How could there not be a relationship between the price of something and how much of it people want to buy?

Answer: If you can't observe it, you might as well treat it as if it doesn't exist.

People forget this, but demand curves aren't actually directly observable. They're hypotheticals - "If the price were X, how much would you buy?" You can give people a survey, but the only way to really know how much people would buy is to actually move the price to X. And the only way to do that is to shift the supply curve. But how do you know what the supply curve is? The only way is to shift the demand curve!

This is called an identification problem. Unless you can observe something that's clearly a shock to only one of the curves but not the other, you can't know what the curves look like. (Paul Romer explains this in section 4.1 of his essay "The Trouble With Macroeconomics".)
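To make the identification problem concrete, here's a toy simulation (my own illustration, with made-up slopes and shocks - not anything from Romer's essay). When both curves shift at once, regressing observed quantity on observed price recovers neither curve; when only one curve shifts, the equilibria trace out the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical structural curves (slopes invented for illustration):
#   demand: q = 10 - 1.0*p + u_d
#   supply: q =  2 + 1.0*p + u_s
u_d = rng.normal(0, 1, n)  # shocks to demand
u_s = rng.normal(0, 1, n)  # shocks to supply

# All we ever observe are equilibrium (p, q) pairs:
p = (8 + u_d - u_s) / 2.0  # set demand = supply, solve for price
q = 10 - p + u_d

# Regressing q on p recovers NEITHER slope (-1 nor +1):
blend = np.cov(p, q)[0, 1] / np.var(p)  # near 0 - a meaningless blend

# But if only the SUPPLY curve shifts, the equilibria trace out demand:
p_s = (8 - u_s) / 2.0
q_s = 10 - p_s
demand_slope = np.cov(p_s, q_s)[0, 1] / np.var(p_s)  # about -1

print(round(blend, 2), round(demand_slope, 2))
```

That's exactly why you need a shock that hits only one curve - an "instrument" - to identify the other; without one, the observed scatter of prices and quantities tells you nothing about either curve.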

And with labor markets, it's very hard to find a shock that only affects one of the "curves". The reason is that almost everything in the economy gets produced with labor. If you find a whole bunch of new workers, they're also a whole bunch of new customers, and the stuff they buy requires more workers to produce. If you raise the minimum wage, the increased income to those with jobs will also boost labor demand indirectly (somehow, activist and businessman Nick Hanauer figured this out when a whole lot of econ-trained think-tankers missed it!).

Labor is a crucial input in so many markets that it really needs to be dealt with in general equilibrium - in other words, by analyzing all markets at once - rather than by treating it as a single market in isolation. That makes the basic Econ 101 partial-equilibrium model pretty useless for analyzing labor.

"But," you may say, "can't we make some weaker assumptions that are pretty reasonable?" Sure. It makes sense that since it takes some time for new businesses to be created, a surge of unskilled immigration should represent a bigger shock to labor supply than to labor demand in the very short run. And it makes sense that a minimum wage hike wouldn't raise labor demand enough to compensate for the wedge created by the price floor.

With these weaker assumptions, you can get a general sense of the supply and demand curves. Problem: The results then contradict each other. Empirical results on sudden unskilled immigration surges indicate a very high elasticity of labor demand, while empirical results on minimum wage hikes indicate a very low elasticity of labor demand. Those can't both be true at the same time.

So if you accept these plausible, weak identifying assumptions, it still doesn't make sense to think about labor markets as described by an S curve and a D curve.

Of course, you could come up with some weird, stinky, implausible identifying assumptions that could reconcile these empirical facts (and the various other things we know about labor markets). With baroque enough assumptions, you can always salvage any theory, as Romer points out (and as Lakatos pointed out). But at some point it just starts to seem silly.

In fact, there are a number of other reasons why the Econ 101 theory isn't a good fit for labor markets:

1. Supply-and-demand graphs are for one single commodity; labor is highly heterogeneous.

2. Supply-and-demand graphs are static models; because of labor laws and implicit contracts, labor markets involve lots of forward-looking behavior.

3. Supply-and-demand graphs are frictionless; labor markets obviously involve large search frictions, for a number of reasons.

If Econ 101 supply-and-demand models worked for every market, the vast majority of the modern economics profession would be totally useless. Claiming that the Econ 101 model must always be a good model basically says that most of econ is barking up the wrong tree, and that it's just all in Marshall. Fortunately, economists tend to be a smart, scientifically-minded bunch, and so they realize that general equilibrium effects, heterogeneity, forward-looking behavior, search frictions, etc. exist, and often are essential to understanding markets.

The Econ 101 supply-and-demand model is just not a good description for the labor market. The theoretical construct known as "the labor demand curve" is ontologically suspect, i.e. it is a poor modeling choice. If we adopt some sort of positivist or empiricist philosophy - "if I can't observe it, it might as well not exist" - then we might as well say that "the labor demand curve" doesn't exist. It's not an actual thing.

Thursday, April 13, 2017

Ricardo Reis defends macro

I really like this defense of macroeconomics by Ricardo Reis. He makes it clear that he's sort of playing devil's advocate here:
While preparing for this article, I read many of the recent essays on macroeconomics and its future. I agree with much of what is in them, and benefit from having other people reflect about economists and the progress in the field. But to join a debate on what is wrong with economics by adding what is wronger with economics is not terribly useful. In turn, it would have been easy to share my thoughts on how macroeconomic research should change, which is, unsurprisingly, in the direction of my own research. I could have insisted that macroeconomics has over-relied on rational expectations even though there are at least a couple of well developed, tractable, and disciplined alternatives. I could have pleaded for research on fiscal policy to move away from the over-study of what was the spending of the past (purchases) and to focus instead on the spending that actually dominates the government budget today (transfers). Going more methodological, I could have elaborated on my decade-long frustration dealing with editors and journals that insist that one needs a model to look at data, which is only true in a redundant and meaningless way and leads to the dismissal of too many interesting statistics while wasting time on irrelevant theories. However, while easy, this would not lead to a proper debate.
Reis goes on to defend academic macro from some of the main recent criticisms, including:

  • Macro relies on representative agents
  • Macro ignores inequality
  • Macro ignores finance
  • Macro ignores data and focuses mainly on theory

He gives a sampling of eight job market papers by recent, highly successful candidates, and a sampling of recent articles in the Journal of Monetary Economics. This actually seems like a pretty stringent test of the criticisms to me - job market papers are probably weighted toward theory, for signaling purposes, while the JME has a reputation as a very (small-c) conservative macro journal.

But as Reis shows, modern macro papers generally don't fit the caricature described above. There's lots of heterogeneity in the models, a fair amount of attention to inequality and distributional concerns, plenty of finance, and lots and lots of data.

Reis is right; a lot of these criticisms are now out of date. That doesn't mean they were never right, though. There was a time when macro models mostly did use representative agents, when financial sectors were rarely modeled, and when calibration served as the only empirical test of many models. The point is not that the critics were full of it, but that macroeconomists were aware of the problems and moved their field in the direction it needed to go. Macro is a dynamic field, not a hidebound one.

And Reis himself shows that macroeconomists - at least, many of them - know of a number of areas where the field still needs to improve. He wants to move away from exclusive reliance on rational expectations, and stop forcing authors to stick in theory sections when they're not really needed.

This all sounds great to me. Personally, I'm particularly happy about the increase in "micro-focused macro". Arlene Wong's JMP, which Reis references, is a great example of this. Very cool stuff. Basically, finding out more micro facts about the effects of business cycles will help guide new theories and (ideally) discipline old ones.

But one problem still nags at me, which Reis doesn't really address. Why didn't macro address some of these problems earlier - i.e., before the crisis? For example, why was finance so ignored? Sure, there were some macro models out there that included finance - even a few prominent ones - but most researchers modeling the business cycle didn't feel a need to put financial sectors in their models. Another example is the zero lower bound, and its importance for monetary policy. A few macroeconomists were definitely clued into this years before the crisis, but they seem to have been a less-influential minority, mostly confined to international macro. In the runup to the crisis, macro researchers were generally not sounding the alarm about the danger from financial shocks, and after the recession hit and rates went to zero, many leading macroeconomists still dismissed the idea of fiscal stimulus.

Fixing problems quickly is great, but it's also important to ask why the problems were there in the first place.

One possible answer is sociological - as Paul Romer tells it, it was largely the fault of Robert Lucas and Thomas Sargent for bullying the profession into adopting bad models (or the fault of their early critics like Solow for bullying them into becoming bullies).

I don't know how true that story is. But I do think there's another potential explanation that's much more about methodology and less about personalities. One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.

That seems like a problem to me. If you have an infinite collection of models sitting on the shelves, how does theory inform policy? If policy advisers have an endless list of models to choose from, how do they pick which one to use? It seems like a lot of the time it'll come down to personal preference, intuition, or even ideology. A psychologist once joked that "theories are like toothbrushes...everyone has one, and nobody wants to use anyone else's." A lot of times macro seems like that. Paul Pfleiderer calls this the "chameleon" problem.

It seems to me that if you want to make a field truly empirical, you don't just need to look at data - you need to use data to toss out models, and model elements like the Euler equation. Reis' suggestion that journal editors stop forcing young authors to "waste time on irrelevant theories" seems like one very good way to reduce the problem of model proliferation. But I also think macro people in general could stand to be more proactive about using new data to critically reexamine canonical assumptions (and a few do seem to be doing this, so I'm encouraged). That seems like it'll raise the chances that the macro consensus gets the next crisis right before it happens, rather than after.

Sunday, April 09, 2017

Can rationalist communities still change the world?

In my last post, I recounted some historical examples of times when (broadly defined) rationalist communities - groups of smart generalists debating and trying to figure things out - changed the world. A number of people noted that my examples are all from the fairly distant past, and have asked whether similar changes are possible today. 

Well first of all, I think there's a selection bias at work in our assessment of who "changed the world". The Royal Society looks world-changing now, but in its day it probably just looked like a bunch of eccentric tinkerers and nerds. It took centuries of progress, based on the foundations discovered in the 17th century, for those contributions to be properly recognized as world-shaking. 

Second, I think whether groups are able to make big changes, especially in the social sciences, depends in large part on external events - i.e., whether there are big political changes happening at the time, for other reasons. The Meirokusha came along during a time when Japan was undergoing rapid opening and industrialization, and the Scottish Enlightenment came just before the Industrial Revolution, a wave of revolutions in Europe, and the formation of the U.S. and the British Empire. Right now, the world is in a relative period of stability (knock on wood) - it's hard to find recent groups that changed the world, because the political world just hasn't changed that much.

But given these caveats, I think it's still clear that a relatively small community of smart people can change the world. 

One example would be the group of physicists, mathematicians, and engineers who came out of Europe in the early 20th century - Einstein, von Neumann, Bohr, Fermi, Schrodinger, and all the rest. These folks did a lot of groundbreaking physics and math, but they also invented refrigerators and nukes, made major advances in economics and computing, and probably did more to reinvent science than anyone since the Royal Society. The modern world is largely built around technologies and ideas that came out of that community. They weren't as cohesive of a group as the Royal Society, but they did mostly know each other, and there was probably significant cross-pollination of ideas. 

Another group that changed the world was the Chicago School of economics. In the mid to late 20th century, the Chicago economics department saw a remarkable confluence of talent - Gary Becker, Robert Lucas, Milton Friedman, Ronald Coase, Frank Knight, and many others. The ideas that came out of that community changed the face of modern society. Some of those changes are things that many people don't like, but the same is true of the Scottish Enlightenment, the Meirokusha, the Progressives, the Fabian Socialists - indeed, the same is true of any social science community. The Chicago School thinkers were specialized in social science, but within that broad category, their ideas were remarkably general, dealing with almost every important social, political, and economic issue of the day. 

Both of these communities could also reasonably be described as "rationalist". They were specialized, but not hyper-specialized - Einstein invented a refrigerator, Milton Friedman wrote political philosophy, etc. They had their ideological biases, at both the individual and the group level, but most of them were keen on figuring out how the world really worked (though there might have been exceptions to this). 

Of course, just because it happened recently doesn't mean it could happen again. The 20th century might have seen the last great scientific advances that the human race will ever make. The ideas of the Chicago School might be the last coherent, original, influential outpouring of social thought. But note that this was just as possible in 1870, or in 1570, as it is today. There were probably scientists and social thinkers in those days who believed that everything important had been discovered and created. Almost by definition, big paradigm shifts in either natural or social science come unexpectedly. 

A more worrying argument is that modern intellectual communities are too highly specialized. Some believe that science is just so hard now that progress can only be made by teams of super-specialized people digging deep into one domain. Others think that the incentive structures of modern academia, business, government, etc. encourage too much specialization.

I'm not sure whether science is too hard for generalist communities of polymaths to make big breakthroughs. The biggest scientific breakthroughs usually involve the establishment of completely new fields that few people were working on before - physics in the 1600s, electrical engineering in the 19th and early 20th centuries, computer science in the 20th century. We might have run out of new domains of knowledge, or we might not have - in fact, we'll never know whether we have or not.

I do worry about incentives. Modern academia is very siloed - there's a lot of pressure to publish in your own field's highly specialized journals. Scholars who venture into other areas, with new perspectives - think of Gary Becker trying to do sociology - are often resisted and even vilified as "imperialists" by the existing research community. Specialized academic communities can even become sort of like "mafias", resistant to new ideas. 

I doubt that this is quite as big a barrier as many fear. The smartest people in their fields - think Terence Tao in math, Feng Zhang in biology, or Ivan Werning or Markus Brunnermeier in economics - have zero problem getting published in their own field, and have plenty of time to work on and think about other things. I picked these examples because they're all obviously highly curious people who have explored a lot of different fields within their discipline. Similarly, I don't think professional "mafias" will be able to successfully resist new ideas if those new ideas work. If Terence Tao went out tomorrow and made a macroeconomic theory that could predict the effects of monetary policy really well, I doubt even the most concerted resistance by macroeconomists could stop it from being accepted.

Meanwhile, the internet has opened up tons of opportunities for collaboration and cross-pollination. The economics blogosphere is a good example of this. In many ways, it's one of the most successful rationalist communities around today. Econ bloggers are often accomplished academics - Paul Krugman, Paul Romer, Narayana Kocherlakota, Brad DeLong, and others have all held faculty posts at top schools. But the range of topics the blogosphere deals with is fantastically wide - everything from presidential politics to art and culture to the history of science. 

And though it's hard to tell, I'd say the blogosphere has had some real influence. The voluminous discussions of fiscal policy, as well as Krugman's forceful advocacy, have probably made austerity less popular across the developed world. Word of mouth tells me that relentless blogger criticism of macroeconomics has helped push younger academics toward more empirical and more micro-grounded research (and I think you can already see this in the literature). By publicizing the discoveries of academics' mistakes, such as with the Reinhart-Rogoff affair, econ blogs are also leading to a democratization of research evaluation and critique that might eventually challenge, or complement, the peer review system. And new ideas have come from the blogosphere - for example, Robin Hanson's use of signaling to explain social phenomena.

But the econ blogosphere has a problem - in order to have continued and expanded relevance, we need new people and we need more brain power. Much of the impetus for the efflorescence of blogging between 2009 and 2013 came from the Great Recession. The current crop of bloggers has had some spectacularly interesting exchanges - for example, Steve Williamson and Narayana Kocherlakota have a long-running monetary policy debate on Twitter that is more interesting than any other such debate I've ever seen. But for the blogosphere to become a rationalist community for the ages, we need more very smart people. We need polymathically inclined folks like Brunnermeier and Werning to start blogs, and to have exchanges of ideas on wide-ranging topics. If that were to happen, the blogosphere might eventually have an influence up there with the Chicago School. Currently, that is still a distant dream. 

Saturday, April 08, 2017

When rationalists remade the world

Defending the "rationalist" community from its most recent crop of assailants, Scott Alexander writes:
There have been past paradigms for which...criticisms [of rationality] are pretty fair. I think especially of the late-19th/early-20th century Progressive movement. Sidney and Beatrice Webb, Le Corbusier, George Bernard Shaw, Marx and the Soviets, the Behaviorists, and all the rest. Even the early days of our own movement on Overcoming Bias and Less Wrong had a lot of this. 
But notice [I posted] book reviews, by me, of books studying those people and how they went wrong. So consider the possibility that the rationalist community has a plan somewhat more interesting than just “remain blissfully unaware of past failures and continue to repeat them again and again”... 
Look. I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. But we’ve noticed the skulls. We’ve looked at the creepy skull pyramids and thought “huh, better try to do the opposite of what those guys did”.
This is good (though I think the Progressives did pretty darn well). But I think it's important to realize that rationalism's history is also full of startling, dramatic successes. It isn't all skulls. There are also...whatever positive metaphor-thing is the opposite of skulls. It's hard to know what changes the course of the human race, but there are cases where there's a good argument to be made that rationalist communities changed the course of the human race.

The most important example, of course, is the Royal Society in 17th and 18th century Britain, which included many of the first modern scientists - people like Robert Hooke, Robert Boyle, and of course Isaac Newton - and laid the foundation for most of modern science. A basic high school mechanics course is just a rehash of what these people discovered.

The members of the Royal Society were not specialists buried deep within highly focused academic departments, but generalists - "natural philosophers", in an age when "scientist" wasn't yet a job description. Newton spent years writing about theology and served as master of the Royal Mint. Hooke investigated optics and gases, but also studied fossils and wrote about evolution. Etc. They were just a bunch of smart folks who liked to think about things. They weren't always friends - Newton and Hooke famously didn't get along - but they had a rationalist approach, powered by constant discussion and exchange of ideas.

And the Royal Society can be seen as part of a larger, longer-term European rationalist community. Johannes Kepler, Galileo Galilei, Rene Descartes, Francis Bacon, and Gottfried Leibniz are just a few other generalists who wrote letters and articles propounding some form of the new rationalist philosophy that swept over the European continent from the late 1500s through the 1700s.

A second example, fairly close in time and place to the first, was the Scottish Enlightenment. A relatively small group of Scottish people based in Edinburgh and Glasgow came up with many of the social and political philosophies upon which the modern world is based. These included Henry Home (Lord Kames), Adam Smith, and David Hume. They went over to each other's houses and had drunken debates, all in Latin, and formed a club called the Select Society that eventually gave birth to the Edinburgh Review. All were intellectual generalists, not specialists, and their way of thinking bears the unmistakable mark of rationalism. (Sadly, it doesn't seem Lord Kames and Thomas Bayes had any significant contact, though they went to the same university at the same time.)

A third example is Meirokusha, a philosophical society in Meiji Japan. The most famous member of Meirokusha was Fukuzawa Yukichi, who founded Keio University and is now on the 10,000-yen bill (similar to the $100, which is funny, because Fukuzawa is similar to Ben Franklin). The Meirokusha tasked themselves with figuring out what kind of society Japan should be, in the turbulent years following the country's emergence from international isolation. Some of them favored Confucianism, and others favored Western political and social philosophy, but all were well-versed in both, and most had lived overseas. It even had an American member, William Elliot Griffis.

Meirokusha tried publishing a magazine, but the authorities of the time shut it down. But the society's members had a very large and lasting influence on the direction of Japan's development. Fukuzawa was known as a proponent of a strong educational system and aggressive import of foreign ideas and technology. Those ideas are clearly visible as the foundations of Japan's approach to modernization. He also advocated a moral philosophy of individualism and self-reliance. I know less about the other Meirokusha members; I should learn more, since people regard them as highly influential.

So those are three examples of historical communities that seem very rationalist, and which changed the world in (I would argue) mostly positive ways. They may not have tried to use Bayesian reasoning, but their approaches were all recognizably rationalist in nature. They were generalists, who collaborated mostly out of pure intellectual curiosity rather than for money. Their interactions were characterized by lively, occasionally acrimonious debate, but always with the goal of understanding the world better.

In other words, it's not all skulls out there. I don't know what mark, if any, the modern "rationalist" community will leave on the world, or whether it'll even prove to be the most important rationalist community out there today. But I do think the idea of such a community is a good one.

Friday, April 07, 2017

Intuitionism vs. empiricism

A few weeks ago, I disagreed with a Russ Roberts post about empirical economics. I wrote that if you don't rely on empirical evidence or formal theory, you're going to just end up relying on intuition gleaned from half-remembered simplistic theory and non-careful evidence:
[One] option [for making policy predictions] is to rely on casual intuition - not really theory, but a sort of general gestalt idea about how the world works. If we're of a free-market sort of persuasion, our casual intuition would tell us that minimum wage is government interference in the economy, and that this is bound to turn out badly. Russ seems to be advocating for this... 
As I see it, the fourth option is by far the worst of the bunch. Theories can be wrong, stylized facts can be illusions, and empirical studies can lack external validity. But where does casual intuition even come from? It comes from a mix of half-remembered theory, half-remembered stylized facts, received wisdom, personal anecdotal experience, and political ideology. In other words, it's a combination of A) low-quality, adulterated versions of the other approaches, and B) motivated reasoning.  
If we care about accurate predictions, motivated reasoning is our enemy. And why use low-quality, adulterated versions of theory and empirics when you can use the real things?
In a recent episode of EconTalk, Russ demonstrates this "intuitionist" approach, as applied to the question of the minimum wage. Russ is interviewing Andrew Gelman about problems with statistics in empirical research. Andrew explains things like p-hacking and the "garden of forking paths" (his colorful term for data-dependent testing decisions), which make reported confidence bands smaller than they should be and thus lead to a bunch of false positives.
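A quick simulation shows why forking paths inflate false positives (my own toy example, not one of Gelman's): if a researcher tries twenty specifications on pure noise and reports whichever looks best, the nominal 5% error rate balloons:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def p_value(x):
    """Two-sided z-test of mean zero (variance known to be 1)."""
    z = x.mean() * math.sqrt(len(x))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_studies, n_specs, n_obs = 2000, 20, 50
false_pos = 0
for _ in range(n_studies):
    # Twenty "forking path" specifications, every one testing a true null:
    ps = [p_value(rng.normal(0, 1, n_obs)) for _ in range(n_specs)]
    if min(ps) < 0.05:  # report the best-looking specification
        false_pos += 1

rate = false_pos / n_studies
print(rate)  # roughly 1 - 0.95**20 ≈ 0.64, not the nominal 0.05
```

The reported p-value from the chosen specification looks fine in isolation; the damage is invisible unless you know how many paths were explored.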

Russ uses Andrew's explanations as reason to be even more skeptical about empirical results that don't match his intuition. Because I can't find an official transcript, here is an unofficial transcript I just made of the relevant portion (Disclaimer: I don't know the standard rules for making transcripts of things; I edited out pauses and other minor stuff):
ROBERTS: But let's take the minimum wage. Does an increase in the minimum wage affect employment, job opportunities for low-skilled workers - a hugely important issue, it's a big, real question. And there's a lot of smart people on both sides of this issue who disagree, and who have empirical work to show that they're right, and you're wrong. And each side feels smug that its studies are the good studies. And I reject your claim, that I have to accept that it's true or not true. I mean, I'm not sure which...Where do I go, there? I don't know what to do! I mean, I do know what to do, which is, I'm going to rely on something other than the latest statistical analysis, because I know it's noisy, and full of problems, and probably has been p-hacked. I am going to rely on basic economic logic, the incentives that I've seen work over and over and over again, and at my level of empirical evidence that the minimum wage isn't good for low-income people is the fact that firms ship jobs overseas to save money, they put in automation to save money, and I assume that when you impose a minimum wage they're going to find ways to save money there too. So it's not a made-up religious view, I have evidence for it, but it's not statistical. So, what do I do there?
I think several things are noteworthy about Russ' monologue here.

First, Russ claims that minimum wage studies are "full of problems" and have "probably been p-hacked". As far as I can tell, he doesn't offer any evidence for this, or even name what these problems are. He doesn't seem to know, or even to have considered, how precise the authors' reported confidence intervals in any of these studies are in the first place - did they report p-values of 0.048, or of 0.0001?

As for p-hacking and data-dependent testing, the basic test of a minimum wage hike's effect on employment is pretty universal and is known and decided upon before the data comes in (including things like subsamples and controls). So while some analyses in any minimum wage study are vulnerable to p-hacking, the basic test of employment isn't really that vulnerable.

So Russ seems to have interpreted Andrew to mean that all empirical studies are p-hacked, and are therefore all unreliable. But I doubt Andrew wanted to convey the message "Don't pay attention to data, because all hypothesis tests are irrevocably compromised by p-hacking and data-dependent testing, so just go with your intuition."

Second, Russ' preferred method of analysis is exactly the "intuitionism" I described above. Russ states his intuition thus: Because companies do lots of things to try to lower costs, minimum wage must be bad for low-income people. But that intuition is not nearly as rich as even a very simple Econ 101 supply-and-demand theory. In the simple theory, if the elasticities of labor supply and demand are low, the minimum wage has only a very small negative effect on employment - consistent with the empirical consensus.
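A back-of-the-envelope version of that simple theory, with an invented elasticity: if labor demand has a constant elasticity of 0.1, a 10% minimum wage hike barely dents employment:

```python
# Constant-elasticity labor demand L(w) = A * w**(-eta).
# eta = 0.1 is an assumption chosen for illustration, not an estimate.
eta = 0.1
w0, w1 = 10.0, 11.0  # a 10% minimum-wage hike

employment_drop = 1 - (w1 / w0) ** (-eta)
print(f"{employment_drop:.2%}")  # about 0.95% - tiny next to the 10% raise
```

The point isn't that 0.1 is the right number - it's that the intuition "firms cut costs, so minimum wages kill jobs" has no way to express how big the effect is, while even this one-line model does.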

And if you model a firm, you find that there are a number of ways companies can save money in response to a minimum wage increase other than firing workers. They can raise prices. They can reduce markups. They can reduce the salaries of higher-paid workers. There are all kinds of margins for cost minimization not included in Russ' intuition, but which could explain the empirical consensus.

Also, a simple monopsony model shows that cost minimization can make a minimum wage raise employment rather than lower it. A monopsonistic company lowers costs by paying workers less and producing an inefficiently small quantity. Minimum wages, by raising costs, actually increase efficiency and raise employment in that simple model, which would also explain the empirical consensus.
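Here's a minimal numerical version of that textbook monopsony story; all the parameters are invented for illustration:

```python
# Toy monopsony: inverse labor supply w(L) = a + b*L, and a constant
# marginal revenue product v per worker.
a, b, v = 5.0, 1.0, 15.0

# With no wage floor, the firm equates marginal labor cost (a + 2*b*L)
# with v, hiring fewer workers at a lower wage than a competitive market:
L_monopsony = (v - a) / (2 * b)    # 5.0 workers
w_monopsony = a + b * L_monopsony  # wage of 10.0

# A minimum wage between w_monopsony and v turns the firm into a
# wage-taker, so it hires everyone willing to work at that wage:
w_min = 12.0
L_with_floor = (w_min - a) / b     # 7.0 workers - employment RISES

print(L_monopsony, L_with_floor)  # prints: 5.0 7.0
```

In this sketch the wage floor raises both the wage (10 to 12) and employment (5 to 7) at the same time, which is the monopsony result in a nutshell.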

But Russ' intuition doesn't include simple monopsony models, multiple margins of cost reduction, or inelastic supply and demand curves. The intuition is just one piece of a theory. That's why I think intuitionism, as a method for understanding the world, is strictly dominated by a combination of formal theory and empirical evidence.

Finally, by declaring empirical economics to be useless, Russ is condemning the majority of modern economics research. I'm not sure if Russ realizes this, but empirical, statistical research is most of what economists actually do nowadays:

Theory papers were down to less than 30 percent of top journal papers in 2011, and the trend has probably continued since then. By dismissing empirics, Russ is dismissing most of what's in the AER and the QJE. He's dismissing most of the work that economics professors are doing, day in and day out. He may not realize it, but he's claiming that the field has turned en masse toward bankrupt, useless endeavors.

That is a bold, aggressive claim. It's a much stronger indictment of the economics profession than anything ever written by Paul Krugman, or Paul Romer, or Brad DeLong, or any of those folks. I don't know if Russ realizes this.


Russ responds in the comments. He notes that a slightly abridged transcript of the Gelman interview can be found at this link.

David Childers notes in the comments that correcting for publication bias generally results in smaller estimates of the disemployment effects of minimum wage.

Tuesday, April 04, 2017

Are the rationals dense?

There's an interesting food-fight between some of the GMU bloggers and the "rationality community". Tyler Cowen called the community a "religion" (not necessarily an insult, but probably not something rationalists like to be called), and chided the community for depicting itself as having an "objective vantage point". Julia Galef defended the community, saying it's more likely than others to question its own way of thinking. Bryan Caplan criticized the community for being too committed to utilitarianism, and for spending too much time thinking about sci-fi scenarios like A.I. and brain emulation.

I was surprised at this food-fight, since these folks normally get along quite well, and all read each other's stuff. But I can kind of see why the "rationality community" might rub some people the wrong way. Here are my hypotheses:

1. The name

The members of the "rationality community" call themselves that because they spend a lot of effort trying to be more rational. But to an outsider, the name sounds like a claim that the community has already attained perfect rationality. If that were true, it would imply that the community's conclusions should be treated with extra reverence, or even as received wisdom - kind of like we take physicists' word on physics results. In other words, the very name of the "rationality community" makes its members' pronouncements automatically feel a little like argument from authority, even if they don't intend them as such.

A lot of people bristle at the notion that anyone is telling them what to think, especially on philosophical matters (which many people think they can figure out for themselves). "Hey," the listener thinks, "I'm not part of your community, but I'm rational too, dammit!" Indeed, this is true - there are plenty of people who try very hard to think rationally, but have never even been to LessWrong.

Also, lots of people just aren't interested in A.I. risk, or effective altruism, or the other stuff people in the "rationality community" like to talk about. They might resent the implication that in order to be a "rational" person you have to care about these things.

Ironically, I'm not sure if the "rationality community" even gave itself that name in the first place, or wants to keep it, which is why I put it in quotes. Anyway, anger at a name seems to plague a lot of communities and movements, either unfairly (Black Lives Matter), fairly (the pro-life movement), or just plain ludicrously (the "reality-based community").

2. The "leaders"

For whatever reason, humans tend to look at any movement or community and pick out some individuals who seem to be the leaders. In the case of the "rationality community", those leaders are generally perceived as being Eliezer Yudkowsky, Scott Alexander, Julia Galef, and Robin Hanson. This is through no fault of their own - they never claimed leadership as far as I know. But human perception is what it is.

Every individual rubs some people the wrong way (except Noah Smith, obviously, who is beloved by all!). Yudkowsky probably strikes some people as a dreamer obsessed with sci-fi. Alexander has clashed with a number of Social Justice people, and generally tends to be a detractor of feminism. Hanson has on occasion gone out of his way to say things he knew would offend people. (I doubt Galef has pissed anyone off.)

So there is probably some personality-friction with these "leaders" of the community. Also, smart people tend to be a contentious bunch, not without their own egos.

3. The fans

Every community and every thinker has fans eager to go to bat for them in online arguments. I'm sure there are some Noahpinion fanbois and fangurlz out there waiting to pounce on my detractors. Anyway, these fans tend not to be a representative random sample of the population - Krugman fans will tend to be liberal, Rod Dreher fans will tend to be conservative Catholics, etc. "Rationality community" fans tend to be libertarian types, in my experience, and especially techno-libertarian types.

I like techno-libertarians (I live in Silicon Valley, don't I?). But not everyone does. Epithets like "techbro" and "brogrammer" are relatively common. They do tend to be men, I guess, though I've met about as many female LessWrong readers as male. Anyway, American society is an especially divided one, and dislike of the social groups from which the "rationality community" tends to be drawn will naturally predispose some people negatively toward the community.

Also, online interactions tend to be negative, contentious ones, meaning that people outside the "rationality community" will tend to have fights with its members. I can remember a time when I got into an argument with (I think?) Eliezer on Twitter, and I made some joke, and some LessWrong dude jumped in and yelled "Monkey dominance tactic!" at me. It made me laugh, because that was such a monkey dominance tactic. But if I were more prone to group attribution error, I would have thought "Fer chrissake, those LessWrongers are annoying!". Kind of like when someone cuts you off in traffic and you look to see what race and gender they are before deciding who to hate. ;-)

Anyway, those are my hypotheses for why a seemingly innocuous online community mostly populated by shoegazing nerds might annoy some people out there. It certainly doesn't annoy me, though.

Friday, March 31, 2017

Robuts takin' jerbs

One advantage of writing down models in math, even if you can't test them, is that you make the ideas concrete. For example, take a recent exchange between Ryan Avent and Paul Krugman. Avent is trying to explain how robots could be taking jobs even while productivity is slowing down. Lots of people have made some variant of the argument: "If robots are taking our jobs, how come productivity growth is so slow?" Here's Avent's theory:
The digital revolution has created an enormous rise in the amount of effective labour available to firms. It has created an abundance of labour... 
How does automation contribute to this abundance of labour?...[T]here’s a...straightforward and important way in which automation adds to abundance now. When a machine displaces a person, the person doesn’t immediately cease to be in the labour force... 
In some cases workers can transition easily from the job from which they’ve been displaced into another. But often that isn’t possible...Such workers find themselves competing for work with many other people with modest skill levels, and with technology: adding to the abundance of labour. 
[A]s the economy attempts to absorb lots of relatively undifferentiated labour, wages stagnate or fall. As wages fall it becomes economical to hire people for low productivity work. So employment in low productivity jobs expands, affecting aggregate productivity figures... 
Second, the abundance of labour, and downward pressure on wages, reduces the incentive to invest in new labour-saving technologies... 
Third, the abundance of labour destroys worker bargaining power... 
Fourth, low wages and a falling labour share lead to a misfiring macroeconomy... 
So there you are: continued high levels of employment with weak growth in wages and productivity is not evidence of disappointing technological progress; it is what you’d expect to see if technological progress were occurring rapidly in a world where thin safety nets mean that dropping out of the labour force leads to a life of poverty.
Some pieces of this I get, others I don't. Reduced bargaining power explains the wage stagnation piece, but doesn't explain how wage stagnation goes along with slow productivity growth.

The macroeconomic piece makes sense if you believe in some form of Verdoorn's Law, but it seems like it would be self-limiting - if faster technological progress led to downward pressure on wages which led to deeper recessions which led to slower productivity growth, then recessions would slow down robotization and allow wages to rise again (maybe that's what's happening now!).

The same is true of the long-term innovation argument. If fast technological progress in robots displaces workers, lowers wages, and reduces the incentive for more robot innovation, productivity will then slow and workers will catch a break.

The first part of the argument is the most interesting, but also the hardest to understand when written in words. Is this just Baumol Cost Disease? If so, why is it happening now when it didn't happen before?

Fortunately, Paul Krugman came in and formalized this part of Avent's argument. I'm not sure Krugman exactly captured what Avent was trying to say, but Krugman's model is simple and interesting in its own right. It's a powerful argument for the "models as simple formalized thought devices" approach to econ that I sometimes pooh-pooh.

Krugman's model is of an economy with 1 good and 2 production processes - one capital-intensive, one labor-intensive:

You can make stuff with more machines with technique A, or with more people with technique B.

Now suppose there's gradual progress in technique A, but not B - it gets cheaper to make things with a lot of robots, but not cheaper to make them with a lot of humans. As Krugman shows, this will lead to falling wages:

It will also lead to more workers going into the labor-intensive B sector, which is the kind of shift Avent seems to be talking about.

OK, so this explains how robots replacing humans could be consistent with both slow productivity growth and slow or negative wage growth. But what it doesn't (yet) explain is the change in the two trends - how faster robotization could have led to both slowing productivity growth and slowing wage growth at the same time. Note in Krugman's model that if technological progress in technique A speeds up, then wage growth slows down (actually, gets more negative), but overall productivity growth in the economy speeds up.

In other words, a productivity speedup in the "robot" sector can't cause overall economy-wide productivity growth to go down in this model. But what can? Answer: A slowdown in productivity growth in technique B.

Suppose productivity in A and B are growing at the same rate initially, so that the isoquant is just sliding in toward the origin. Wages and productivity are both going up. Then B suddenly stops getting better, but A keeps on getting better. Now economy-wide productivity slows down, and wage growth slows down or goes negative.

So in this model it wouldn't be better robots that made economy-wide productivity and wages go down. It would be a slowdown in the technologies that allowed humans to compete with the robots.
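Here's a tiny numerical version of that story. The parameters are mine, invented for illustration - Krugman's note works with isoquants rather than specific numbers:

```python
# One good, two ways to make it:
#   technique A (robot-intensive): k_a capital + l_a labor per unit of output
#   technique B (labor-intensive): k_b capital + l_b labor per unit of output
# With both techniques in use and the price normalized to 1, zero profit
# in each pins down the rental rate r and the wage w:
#   r*k_a + w*l_a = 1
#   r*k_b + w*l_b = 1

def factor_prices(k_a, l_a, k_b, l_b):
    """Solve the 2x2 zero-profit system for (r, w)."""
    det = k_a * l_b - k_b * l_a
    r = (l_b - l_a) / det
    w = (k_a - k_b) / det
    return r, w

# Baseline: A is capital-intensive, B is labor-intensive.
r0, w0 = factor_prices(2.0, 0.5, 0.5, 2.0)   # w0 = 0.4

# Technical progress in A only: its input requirements shrink by 20%,
# while B stands still.
r1, w1 = factor_prices(1.6, 0.4, 0.5, 2.0)

print(w0, w1)  # the wage falls when only the robot technique improves
```

The wage drops from 0.4 to about 0.367: progress concentrated in the robot-intensive technique lowers wages even as it raises output per unit of input in that sector, which is the comparative static driving the argument above.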

This would be hard to verify empirically, since identifying human-complementing productivity growth and human-substituting productivity growth will require some major theoretical assumptions. But just casually glancing at the data, we can see that there are other productivity divergences in the economy that might be roughly analogous. For example, durables TFP has grown faster than nondurables TFP since the early 70s:

So if we think durables production is a lot more capital-intensive than nondurables production (which includes a lot of services), this could be a sign that a slowdown in labor-intensive production processes is generally underway (maybe as a result of expensive energy? top-out of education? some government policy?).

Anyway, so I think in order to make Avent's thesis work in Krugman's model - in order to make robotization the cause of both slowing productivity growth and wage stagnation - there has to be a slowdown in non-robot technology going on.

Interestingly, while writing this, I thought of a way that the Avent thesis could be combined with the "industrial concentration" hypothesis of Autor et al. Autor et al. cast doubt on the "robotization" hypothesis by observing that labor's share of income isn't falling within firms:
Since changes in relative factor prices tend to be similar across firms, lower relative equipment prices should lead to greater capital adoption and falling labor shares in all firms. In Autor et al. (2017) we find the opposite: the unweighted mean labor share across firms has not increased much since 1982. Thus, the average firm shows little decline in its labor share. To explain the decline in the aggregate labor share, one must study the reallocation of activity among heterogeneous firms toward firms with low and declining labor shares. 
Autor et al. suggest that a few "superstar" firms are increasingly dominating their industries, causing profits to rise even as increased monopoly power shrinks markets and reduces measured productivity.

So how could this be reconciled with the Avent "robotization" theory? Well, take a look at Krugman's model. And now imagine that the "superstar" firms are the firms that use technique A, while the laggard firms use B. Krugman's theory doesn't include monopolistic competition, but it's easy to imagine that A, the capital-intensive technique, might have economies of scale that make A-using firms bigger and fewer than B-using firms. So a slowdown in progress in B would lead to a shift of resources toward the A firms, causing increased industrial concentration and a lower overall labor share without affecting the labor share within each firm - exactly what Autor finds. And it would do this while also causing economy-wide productivity to slow down and wages to stagnate.

That's neat. But it still means that, fundamentally, a technology slowdown rather than a speedup would be the root cause of the economy's problems - not a rise of the robots, but a world where robots are the only thing still rising.

Wednesday, March 29, 2017

The blogs vs. Case-Deaton

Anne Case and Angus Deaton have a new paper on white mortality rates. This one is getting attacked a lot more than the 2015 one, though the findings and methods are basically the same -- increased death rates for U.S. whites, especially for the uneducated. Andrew Gelman is still on the case (we'll get to his critique later), but a number of other pundits have now joined in. Thanks in part to these critiques, it's rapidly becoming conventional wisdom in some circles that the Case-Deaton result is bunk - one person even called the paper a "bogus report" and criticized me for "falling for" it.

But most of the critics have overstated their case pretty severely here. The Case-Deaton result is not bunk - it's a real and striking finding.

Harris and Geronimus' Critique

First, let's talk about the most popular critique - Malcolm Harris' post in the Pacific Standard. Josh Zumbrun of the Wall St. Journal had a good counter-takedown of this one on Twitter.

Harris notes that the Case-Deaton paper hasn't gone through peer review, but fails to note that the 2015 paper, which said basically the same thing, did go through peer review. 

Harris also takes issue with the labeling of non-college-graduates as "working class", but this is a journalistic convention - Case and Deaton themselves use the term "working class" twice in their paper, but only when talking about possible economic explanations for the mortality increase. At no point do they equate "working class" with an educational category; that is entirely something that writers and journalists (including myself) do. 

And personally speaking, who really constitutes the "working class" seems like one of those internecine Marxist debates best left in the 1970s. When I use the term to mean "people without a college degree", I specify that that's what I'm talking about. 

But Harris' central critique is that, according to him, Case and Deaton have ignored selection effects. Obviously, if more people graduate college, there's a composition effect on the ones who still don't graduate. If mortality goes down by education level, this composition effect (which Harris calls "lagged selection bias") will raise non-college mortality even if mortality rates aren't changing at all. There's a 2015 paper by John Bound et al. (which Case & Deaton cite) showing that once these selection effects are taken into account, there's "little evidence that survival probabilities declined dramatically" for the lowest education quartile. Harris heavily cites Arline Geronimus, one of Bound's co-authors, who makes a number of disparaging comments about Case & Deaton's papers.

Selection effects are very real (and John Bound is one of the best empirical economists out there). Attrition from the non-college group is important. But as Zumbrun points out, once you lump all white Americans together - which totally eliminates the education selection effect - the mortality increase remains. Just look at this graph from the 2015 paper:

USW is all U.S. whites age 45-54, independent of education. USH is U.S. Hispanics. The rest are other countries - France, Germany, the UK, Canada, Australia, and Sweden. An age-adjusted graph from the more recent paper (which takes into account aging within this group) looks the same.

This graph shows two striking things. First, around 2000, the trend for middle-aged U.S. whites - all educational groups combined - stopped going down and started going up. Bound et al. might not say that qualifies as a "dramatic" increase, but it is clearly an increase, and that fact has nothing to do with education selection effects.

Second, and even more importantly, the USW trend post-2000 is markedly, hugely different from the trends for all other countries AND all other racial groups (though U.S. blacks are not displayed on the graph for some reason, Case & Deaton report that their mortality rates have also been trending downward). 

This trend difference is far more important than the absolute change in mortality. It means that while good things are happening throughout the developed world that are causing mortality to fall for almost everyone, U.S. whites are being left out of this trend. Geronimus, Harris, and other critics spill a lot of ink over whether the USW trend is really upward or flat, but this distracts from the real issue, which is the big difference between U.S. whites and everyone else in the developed world. 

In my opinion, if Case & Deaton made a mistake, it was to put too much emphasis on education, and not enough emphasis on this stark difference in international and racial trends.

Gelman and Auerbach's Critique

OK, so let's get to Gelman's critique. Whereas Harris and Geronimus are essentially criticizing Case & Deaton for disaggregating the data too much by education level, Gelman & Auerbach are criticizing them for aggregating the data too much by race. Gelman & Auerbach have created an enormous database of mortality graphs for Americans of every conceivable combination of race, age, gender and geographical location. They find that there's considerable heterogeneity within U.S. whites - for some subgroups, mortality has been falling.

That's interesting and cool. But it doesn't invalidate the result, as Slate's overzealous headline writer seems to think. Disaggregating just gives us clues for where to look to explain the main result. 

Disaggregation also distracts from the key issue of trend comparison. Gelman & Auerbach, like Harris, and like Bound et al., focus on whether the U.S. white mortality trend - or the trend for this or that subgroup of U.S. whites - is rising, falling, or flat. But the key takeaway from Case & Deaton's research is that U.S. whites aren't sharing in the mortality decline that U.S. nonwhites and Europeans are all enjoying. Gelman & Auerbach don't display any data for European countries, and they don't display demographically matched U.S. whites and nonwhites on the same graphs, making it hard or impossible to do the kind of visual trend comparison that is so easy to do in the Case-Deaton graph above.

Ironically, at the end of their Slate article, Gelman & Auerbach make a statement that relies heavily on exactly the type of aggregation that they criticize Case & Deaton for doing:
Of course, there is one simple story the data does seem to confirm: Minorities still have significantly higher fatality rates than white Americans. But that’s not news.
That might or might not still be true if you add up all U.S. minorities. But for Hispanics, it's false, and has been false for many years. Here's a Case-Deaton graph for one age cut that illustrates the point. Compare the line for "White non-Hispanics (all)" to the line for "Hispanics":

It's also not true for Asian-Americans. Both Asians and Hispanics have lower mortality than their white counterparts. So by saying "minorities still have significantly higher fatality rates than white Americans", Gelman & Auerbach are rather hilariously failing to disaggregate. (From a quick look at CDC data, they also might just be plain wrong).

Why All The Critiques?

So why are people tripping over themselves to launch attacks on Case & Deaton? It's pretty obviously politics. Here are some excerpts from Harris' post:
[Some] have suggested that the Dems have to renew their focus on white working-class men if they want to win. In this view, liberals have become distracted by so-called “identity” issues like feminism, Black Lives Matter, transgender bathroom access, and the musical Hamilton, thus alienating the underserved voters Donald Trump was then able to nab. Underlying this argument is a series of reports on the immiseration of the white working class and its members’ increasing tendency to die... 
[H]ow will this report be understood? I’d wager it’s something like the Brookings blog headline: “Working Class White Americans Are Now Dying in Middle Age at Faster Rates Than Minority Groups.” I asked Geronimus if that was, to her understanding, a true statement: “I think that’s misleading, I really do. Oh boy,” she laughs, “there’s so much wrong with that. That headline makes it sound like problems are worse for white Americans than black Americans.” The narrative is wrong, but it’s not the first time Geronimus has heard it since the election. The Case and Deaton paper, she says, fits conveniently in this story, and it’s one she fears Americans are primed to believe... 
In [Case and Deaton's] graphs, white lives literally count more, and black lives less. But whether in health, income, wealth, or educational attainment, American white privilege is still very much in effect, and no statistical tomfoolery can change that.
I'm not sure what "white lives literally count more" means - it seems to be a flagrant misuse of the word "literal". (Update: see Update 3 below for a fun discussion of a Case-Deaton graph)

In any case, it's clear that a lot of the eagerness to trash Case & Deaton's results comes from political reasons. 

Is focusing on increasing white mortality a way to preserve white privilege and ignore the problems of black Americans? Maybe. I don't know. Personally, I'd think results like Case & Deaton's would help convince white Americans that the problems of black America aren't due to any unique pathology of black culture, and that white and black Americans are in the same boat together. I'd think it would be a powerful corrective to the poisonous Republican narrative that only black Americans need government help. But again, I'm no expert on how data will get used to create narratives (and I'm not sure anyone is an expert on that). 

But anyway, the critiques of Case-Deaton are overdone. Maybe Case & Deaton should have focused less on disaggregating by education, and more on disaggregating by gender, age, and region. But those are quibbles. The main results are real and important.

Update 1 - More About Selection Effects

A couple people, after reading this post, asked me "OK, but did Case & Deaton ignore the selection effect, or not?". The answer is: They certainly didn't make any dumb mistakes. They intentionally ignored it, and they had their reasons. But it really doesn't matter, because I think the whole education issue is far less important than the key result.

Let me explain.

Case & Deaton obviously knew about the selection effect. They cited and explained Bound et al.'s paper, and they also mention the selection effect later in their own paper. But they intentionally don't try to take it into account.

Why not? Well, suppose you think of education as a drug that prevents mortality - kind of like statins. Case & Deaton seem to think of it this way. Now suppose that in 1990, 50% of people take statins, and in 2010, 75% of people take statins. And you observe that for the cross-section of people who don't take statins, mortality increased from 1990-2010.

There's obviously a selection effect here. The people who shifted from not taking statins to taking statins between 1990 and 2010 were probably richer than those who didn't make the shift - and, hence, likelier to have healthier lifestyles. So some of the increase in mortality among the "doesn't take statins" group will come from the changing composition of this shrinking group. That's the selection effect.

But does that mean you should correct for this bias by making sure your comparison groups represent 50 percent of the population in both 1990 and 2010? That's essentially what Bound et al. do. But that will mean putting some statin-takers in with the "no statins" people in 2010! Suppose you do this, and you find that once you do this, mortality for the "lower statin-taking" half of the population remains unchanged between 1990 and 2010. You say "Whew, once we control for selection effects, the lower-statin group didn't actually see a rise in mortality."

But a lot more of the people in that "lower" group take statins in 2010 than in 1990! In your effort to control for composition effects, you've forced yourself to ignore some of the beneficial effect of statins. If they hadn't started taking statins, their mortality probably would've gone up instead of staying constant!
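Putting toy numbers on this (my own parameters, purely illustrative, not real mortality data) makes the tension between the two approaches concrete:

```python
# 100 people ranked 1 (richest) .. 100 (poorest). Person i's underlying
# mortality risk is i deaths per 1,000, UNCHANGED between 1990 and 2010.
# Statins halve a person's risk, and the richest take them first.

def mean(xs):
    return sum(xs) / len(xs)

def mortality(i, takes_statins):
    return i / 2 if takes_statins else i

# 1990: the top 50 take statins; 2010: the top 75 do.

# Case-Deaton-style cut (follow the "no statins" group itself):
non_takers_1990 = [mortality(i, False) for i in range(51, 101)]
non_takers_2010 = [mortality(i, False) for i in range(76, 101)]
print(mean(non_takers_1990), mean(non_takers_2010))  # 75.5 -> 88.0

# Bound-style percentile cut (always the bottom half of the population):
bottom_half_1990 = [mortality(i, False) for i in range(51, 101)]
bottom_half_2010 = [mortality(i, i <= 75) for i in range(51, 101)]
print(mean(bottom_half_1990), mean(bottom_half_2010))  # 75.5 -> 59.75
```

The first cut shows mortality "rising" from 75.5 to 88 purely from composition, even though nobody's underlying risk changed. The second cut shows mortality falling to 59.75 - but only because it folds new statin-takers (and their treatment benefit) into the comparison group.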

Which method is right? Should you ignore composition effects, or partially ignore treatment effects? It depends on what you think is important, the treatment effect or the overall outcome. Bound et al. think the overall outcome is important, so they use percentiles. Case & Deaton think the treatment effect is important, so they use education groups.

And what I'm saying in the post above is that I think neither of these things is really that important compared to the main finding, which is the trend comparison. I'm saying "Who cares? All those French and German and British and Canadian and Australian and Hispanic-American people over there are getting a lot fewer heart attacks, statins or no statins!" The trend comparison, I believe, is the big takeaway from the Case-Deaton paper, and the education issue is a bit of a sideshow.

Hopefully that clears that up.

Update 2 - Age Adjustment

In the comments, Andrew Gelman brings up age adjustment. I mentioned that earlier, but just so you can see, here are the age-adjusted and non-age-adjusted versions of the Case-Deaton mortality comparison charts:

Can you see a difference?? The red trend line, for U.S. whites, looks almost the same, while every other trend line still falls steadily and dramatically. 

In fact, adjusting for age makes the comparison with European countries - i.e., what I think is the central result of the paper - even more stark. It also illustrates why obsessing over whether the red line goes a little bit up, a little bit down, or stays flat is a total distraction - the real point is that every other trend line goes strongly down. (Annoyingly, U.S. Hispanics aren't on the age-adjusted chart, but you can see it's not going to make much difference.)

Update 3 - Dual Y-Axis Graphs

Earlier, I made fun of Harris for saying that "white lives literally count more" than black lives in a Case-Deaton graph. But a commenter explained what Harris meant - he was complaining about dual y-axes. Here's the graph in question:

Harris is actually right - the y-axis for black mortality is much more compressed than the y-axis for white mortality. Because of dual y-axes, white lives do literally count for more. This is, indeed, a crappy graph.

BUT, it's a crappy graph because it massively understates the size of the black mortality decline. If the y-axes had the same scale, you'd see a blue line (for white mortality) that stayed just about flat, and a red line (for black mortality) that zoomed downward. In other words, making white lives count the same as black lives on this graph would show black people doing a lot better than they were before, and white people not doing a lot better than they were before. That seems like exactly the message that Harris doesn't want to send. So I'm not quite sure why Harris is complaining. Anyway, the big takeaway from this graph is: Don't make dual y-axis graphs.

Saturday, March 25, 2017

Asian-American representation in Hollywood

With the casting controversies over the live-action Ghost in the Shell movie and the Marvel Netflix series Iron Fist, the outcry over "whitewashing" of Asian characters in American entertainment has reached a fever pitch. So I thought I'd write a post about that.

Why care about whitewashing?

Why do I, who am not Asian, care about whitewashing? Well, there's a not-so-important reason and a very important reason. The not-so-important reason is that I have a lot of Asian-American friends, and it pisses me off to see movies depicting an America in which they don't seem to exist. But that's very unimportant compared to the real issue, which is racial integration. 

Most of America's immigration now comes from Asia, meaning that the nation's future will be greatly affected by how well we integrate Asian-Americans into American culture and society. Keeping Asian-Americans invisible will cause non-Asian Americans to keep seeing them as perpetual foreigners and outsiders, while denying them representation in the mass media will make Asian-Americans themselves feel disaffected and anti-nationalistic.

To see what I mean, watch this short film by Chewy May and Jes Tom. A lack of Asian-American heroes on the silver screen has made many Asian-Americans feel that their country doesn't really consider them normal, mainstream citizens. That's unacceptable. 

Why changing Hollywood will be hard

If it were easy for popular outcries to change Hollywood whitewashing, it would have happened already. There must be some deep reasons it hasn't yet worked. 

One reason is that Asian people, being only about 6.5% of the U.S. population, are a small part of the American movie-going public. If everyone demands to see characters of their own race on screen, then movies directed at American audiences will feature mostly white, Hispanic and black people. Even if this same-race preference is only a slight one, it's probably enough to make many risk-averse studio execs shy away from putting Asian people on screen.

But the American audience is not the only important one anymore. The Chinese box office is increasingly crucial for U.S. film studios, especially in the face of the ongoing U.S. shift toward home viewing. And Chinese audiences may even more strongly prefer to see white people on the screen. Chinese moviegoers, used to seeing Chinese people in film, might view Hollywood as a chance to see exotic-looking white heroes. The Chinese-made movie The Great Wall, starring Matt Damon, could be an indicator of this.

Finally, Hollywood studio execs, in addition to being a bunch of old racist white guys, might simply be stubborn and contrary. All the protesting and criticism may have just caused them to assert their control more strongly by doubling down on whitewashing. No one likes to be pushed around by angry bloggers and Twitter trolls if they can help it (believe me, as a blogger, I know). 

An alternative path

I've often criticized the Millennial generation (of which I'm technically a part, just barely) for relying too heavily on "appeals to liberal authority" as a way of bringing about change. Educated people of my generation and younger have grown up under more benevolent and more liberal institutions than anyone in America's past - public schools, universities, the Obama administration, the media, corporations trying to look good for the media, etc. When something about society is wrong, we instinctively appeal to authority for a redress of the injustice. We make demands on university administrations, Silicon Valley venture capitalists, big companies, Hollywood execs. And when there's no obvious power to appeal to, we call out injustices to society at large, imagining that there must be someone listening with the power to respond.

I'm not saying it's wrong or bad to complain about whitewashing on Twitter or The Verge or Kotaku. I just think this approach has serious practical limitations. The problem with appeals to liberal authority is that there won't always be a liberal authority to hear and respond. Often, the authority isn't as liberal as we would like to think. And often, authorities have less power than we implicitly assume. Yes, I realize this is a grumpy-old-man critique. But sometimes the grumpy old men are onto something.

Maybe there's a different way to end whitewashing and get Asian-American actors onto the screen. Maybe the answer is not to demand representation, but simply to seize it. Maybe the solution is for Asian-Americans, and also those non-Asian Americans who (like me) want to see more Asian-Americans on screen, to make and distribute movies themselves.

That sounds crazy, but it isn't actually crazy. Hear me out.

Hollywood is ripe for overthrow

The U.S. big-budget film industry is in crisis. Ticket sales are in relentless decline. Revenues are up (which must be due to soaring ticket prices if sales are down), but profits are hurting. Hollywood has to spend more on marketing and expensive spectacle every year just to cajole an increasingly bored public to see its low-quality product. The studios have adopted an insanely risk-averse attitude, focusing almost entirely on sequels and remakes. Meanwhile, Americans are sensibly shifting to Netflix and Amazon and HBO streaming TV, because that's where all the quality is.

Meanwhile, it has never been cheaper to make a movie. I just bought a used camera for $1000. That camera, which can also shoot digital video, was one of the cameras used to film the IMAX movie Jerusalem, which won awards for its cinematography. One thousand dollars. And I bet if I had tried, I could have found the same model for cheaper. One of the top films at the 2015 Sundance Film Festival was shot on an iPhone.

Editing software is also cheap, and the price of high-quality computer graphics is falling relentlessly. This doesn't mean making a movie is cheap or easy, but it's a lot cheaper and easier than before. In 2014, the average independent film cost $750,000 to make. That's not peanuts, but for the price of one house in San Francisco you could make three indie films.

Moonlight, this year's Best Picture winner at the Oscars, was made for $1.5M and grossed $55M.

Get Out, by Jordan Peele, was made for $4.5M and has grossed over $140M so far.
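Just to make the arithmetic concrete, here's a back-of-envelope sketch of the gross-to-budget multiples implied by the figures above. (This deliberately ignores marketing spend and the exhibitor/distributor split, so these are not profit figures - just a rough sense of the upside on a cheap film that hits.)

```python
# Gross-to-budget multiples for the two films cited above.
# Figures (in millions of dollars) are the ones quoted in the post;
# real profitability depends on marketing costs and revenue splits,
# which are ignored in this rough sketch.
films = {
    "Moonlight": {"budget_musd": 1.5, "gross_musd": 55.0},
    "Get Out": {"budget_musd": 4.5, "gross_musd": 140.0},
}

for name, figures in films.items():
    multiple = figures["gross_musd"] / figures["budget_musd"]
    print(f"{name}: grossed about {multiple:.0f}x its budget")
```

Even after generously haircutting those numbers for distribution costs, the returns on a low-budget hit dwarf what a typical $200M tentpole can hope for.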

As for distribution, this isn't nearly as big of a problem as you might think. With the rise of streaming, it's possible to create new video distribution channels (streaming services, or even entirely new business models people haven't thought of yet) much more easily than in decades past. Only a few can succeed, but those will succeed big.

Netflix and Amazon and Hulu are desperate for new content. TV is often a stepping stone to the movies, and is where all the quality is nowadays anyway.

And traditional channels for independent movies still exist - plenty of Hollywood directors and producers got their start from indie hits, and that will probably continue to be true. 

And capital costs in the United States (and much of the world) are near all-time lows. Bond rates are historically low, the stock market is at all-time highs, money is flowing out of China and looking for somewhere to go, and venture capitalists are pushing up the valuations of unicorns like Uber. More and more capital is chasing smaller and smaller returns. That doesn't mean capital is easy to get, but it means it's out there in large quantities. 

To sum up, we have just experienced technological revolutions in video production and distribution, at a time when capital costs are low and incumbents are vulnerable. It's time for some disruption.

Who will do it?

Obviously most people who care about Hollywood whitewashing have other careers to keep them occupied; most people aren't going to throw their other plans away and launch a quixotic quest to make movies with Asian leads. I'm not going to be a filmmaker, and most of you probably aren't either.

But a few of you might be. The entertainment industry is an exciting place to be right now. Here's a small anecdote just to illustrate. In the summer of 2009, just for fun, my friend Peter Chang and I went to make a documentary in Japan. We never finished it. But Peter realized how cheap indie filmmaking had become for someone technologically savvy and artistically gifted (both of which he is), and he went on to start his own film production company, Golden Gate 3D. He's now shooting movies in Cuba and Greenland, and is about to launch more projects. He's commercially successful, and works on the bleeding edge of filmmaking technology (which is one reason he's successful). Peter's interest is in documentary rather than narrative film (at least for now), but if he can do this sort of thing in the documentary space, other people can do it in the narrative film space.

Peter's operation is still pretty small, and I single him out because he's my friend and because I got to see his success up close. The people making the immediate changes would be bigger, more established folks. There's no lack of Asian-American filmmakers out there. Justin Lin and Joseph Kahn are out there doing awesome stuff. And there's a rising tide of Asian-American acting talent. What's needed is for some of these or other filmmakers to turn into big-time film producers, and for entrepreneurs to start innovative new production and distribution companies.

What I think American entertainment needs is a Pro-Diversity Mafia. The PDM would be a loose network of funders, entrepreneurs, content creators and industry workers who share creative ideas, technology, funding leads, networks, and resources. It would include Asian-Americans, members of other "invisible" groups, and others who are supportive of greater diversity and inclusion in visual media. There are many examples of this sort of "mafia" allowing marginalized groups to break into an industry. Don't be ashamed of doing this sort of thing; this is how capitalism works, not the idealized frictionless market of an economist's model. (In fact, this "mafia" would help not just Asians, but other marginalized groups break into the visual media world - Muslims, for example. Intersectionality!)

If this type of thing shows signs of being successful, of course, Hollywood is going to want a piece of the action. If Asian-American actors are starring in surprise indie-hit films made on shoestring budgets and demonstrating eye-popping margins, it won't be long before the big dumb studios come calling. But by then, Asian-Americans will be able to negotiate from a position of strength.

In fact, by then, pro-diversity filmmakers won't even need Hollywood. With a disruptive business model in hand, the creators of the PDM could simply muscle in on Hollywood's territory, going upmarket into big-budget films and beating the tired, boring sequel-mongers at their own game, stealing eyeballs and dollars with new distribution channels. Many times in history, a tight-knit subculture of highly talented people frozen out by discrimination has created a hotbed of creativity that eventually took over the industry that once shut them out.

Really? All this, just to put Asian-Americans on screen??

No, of course not. To do all this just to put Asian-Americans and other underrepresented groups on screen would be overkill. What's also at stake is a potential revitalization of American visual media. Movies are going down the tubes. They need new blood, new geniuses, new perspectives, and new business models. They need to be revitalized creatively, in the way that Lucas, Spielberg, Coppola and others revitalized them in the 1970s and 1980s. And they need to be revitalized technologically, in terms of both production and distribution. The people who are bold enough to put Asian-American actors on screen will also be bold enough to experiment and improve movies and TV in a huge number of other ways.

And the payoff to whoever does this won't just be making the world a better place. There's a lot of money to be made here, both on the production and distribution side. The disruption of Hollywood's old-economy oligopoly is a revolution that is long overdue, for more reasons than just this. And young Asian-American and pro-diversity entrepreneurs and artists have the smarts and the creativity to make that revolution happen and grab that pot of gold.

And you know what? I might be wrong about all this. Maybe none of this is necessary to get Asian-Americans on screen. Maybe the outcries and the Twitter trolling will work, and we're on the verge of seeing Asian-American actors headline superhero movies and big-budget Hollywood adaptations. And if that does happen, great. But then we wouldn't get a film renaissance out of the deal.

Fight injustice and make money

Capitalism is about taking what you can get. Until some imagined future day when we all live under the protective wing of an immortal, invincible, benevolent liberal authority, capitalism will have to do. It isn't fair, but it isn't the ossified hierarchy of power and injustice that its critics make it out to be, either.

The lack of Asian-Americans on the silver screen isn't just an injustice - it's the sign of an overlooked business opportunity. It is money being left on the table. Someone needs to pick up that money. And when someone does, whitewashing will soon be relegated to the history books - possibly along with Hollywood itself.

Update: As if the Universe read my post, Jordan Peele might direct the live-action Akira.