This post comes from something a former student shared with me; the link they shared is here.
The Bank of Canada’s policy target (currently 2% inflation as measured by the CPI) is up for review/renewal this year. This is a good time to look at other options. Let’s examine a few.
1% - this is just silly. The BoJ and the Fed have been engaged in a huge experiment in monetary policy with zero or near-zero interest rates. The results from the US aren't in yet, but I'm not optimistic, and there isn't a lot to suggest it's been successful in Japan. I'm more inclined to side with raising targets (a la Olivier Blanchard) than with reducing them.
NGDP targeting - this is a little more interesting. The idea is that the central bank targets a combination of the price level and output (essentially P*Y); this is sort of what the Fed does in the US. A reasonable target might be something like 5% nominal growth, given past growth rates and inflation.
Problem: what happens if you have a wildly good year? Deflation? Ouch. Say we get 6% growth in real GDP; to meet a 5% nominal target we'd need roughly 1% deflation. If this were only a one-way target, it'd be pretty pointless. So I don't really see this working out.
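To make the arithmetic concrete, here's a minimal sketch; the 5% target and 6% growth figures are just the illustrative numbers from above:

```python
# Under NGDP targeting, nominal GDP growth is approximately
# real GDP growth plus inflation, so the inflation rate the
# central bank must engineer is the residual.
def required_inflation(ngdp_target, real_growth):
    """Inflation (in percentage points) needed to hit the nominal target."""
    return ngdp_target - real_growth

# A 5% NGDP target in a year with 6% real growth forces deflation:
print(required_inflation(5.0, 6.0))  # -1.0, i.e. 1% deflation
```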
Status quo - We've got the U.K., which has overshot its inflation target yet again despite poor performance in terms of output. Canada, on the other hand, has done quite well both at meeting its inflation target and at having the demands of the target line up with the needs of the real economy: in short, when inflation has been below target, the real economy has generally been in need of stimulus.
Why is Canada different? I'm pretty sure it has to do with the mix of what we produce and what we consume, and the resulting impact on the exchange rate. We tend to export raw materials and import finished goods, so a drop in the exchange rate means an increase in the CPI. The U.K., on the other hand, tends to export services (particularly financial services) and import finished goods. When the demand for the financial services the U.K. produces fell dramatically, output followed suit. The drop in demand for financial services (its key export) also caused the value of the pound to fall, and inflation, as measured by changes in the CPI, rose. Result: the needs of inflation targeting run contrary to the needs of the real economy. Raising the interest rate would likely strengthen the pound and lower the inflation rate, but it would hurt exports and output (or at least not help). This makes sense: the U.K.'s export industry is exceptionally pro-cyclical with respect to the current economic crisis.
Canada is in a slightly different situation. On the surface it looks a lot like what's going on in the U.K., but something a little different may be driving Canada's exchange rate. Global demand for the kind of raw materials Canada exports tends to be pretty stable, and that stability isn't the only reason Canada is different. When the price of raw materials falls, say due to a lack of demand, the price of finished products tends to fall too. Thus a drop in the demand for Canadian exports tends to be matched by a drop in the price of the finished products we import. This means that, unless the set of goods considered key inputs changes radically (which could happen if hydrogen or other alternative transportation fuels work out), inflation targeting is a good match for Canada.
Of course, this is based on a Keynesian interpretation of monetary policy; if you're a follower of Hayek, things look a little different.
Tuesday, March 15, 2011
Monday, March 7, 2011
A Dangerous Narrative
In my research I work mostly with raw statistics. That means data (generally secondary) based on observations of hundreds if not thousands of people. Thanks to desktop computing I have a myriad of ways of analyzing, reducing, organizing, and presenting these data, all in the name of trying to find some consistent relationships between variables. All of these fancy techniques are designed to make sure I identify relationships that actually exist and can be used to predict what relationships will emerge when I'm using a different data set. Get this right and, ideally, we can predict what is going to happen before we collect the data. This is the whole point.
When I'm getting ready to lecture in a class, I'm engaged in an entirely different exercise. Most of the students in the lower-level classes I run aren't ready or able to deal with the kind of statistical arguments the theories require, and the symbolic logic that underpins the theories does no better. Instead, I go looking for stories - narratives. Students at lower levels (i.e. before they're indoctrin... - I mean properly educated) tend to find narratives more convincing anyway.
So what's the problem? Students become convinced of the "right" things, and I don't have to figure out how to explain a probit or fixed-effects panel model to first-year students. Wins all around, right?
Well, things get a little more complicated when you start to think about how people form expectations. From experimental work in both economics and behavioural psychology we can identify some of the consistent mistakes people make when forming their beliefs about how likely something is. Let’s start with why narratives work.
The effectiveness of a narrative is based on the fact that most people are able to project themselves into someone else's position if they invest a little energy. This is easier the more similar to you the person seems. As a result, the best narratives are those that feature people as close to how the audience sees themselves as possible. For those who want to explore this further, consult Adam Smith's Theory of Moral Sentiments.
OK, so we can make a story more compelling by choosing someone as like the audience as possible - big deal. This introduces proximity: the idea that events you can relate to are judged more likely than they actually are. If your friend's house is robbed, you're more likely to worry about your house, even if you live across the city. Somebody you don't know in the next neighborhood - not much effect. The more like you the person in the story seems, the bigger the impact on your probability judgment.
Here's another way things can get weird: some of the things that make for an interesting story also screw with our perceptions. For example, more extreme events are more interesting and a lot easier to recall. This taps into what is called availability: the easier it is for you to recall an event, the more likely you believe it to be. A good narrative increases availability in both these ways, even when the event is incredibly unlikely.
We also have to worry about issues like representativeness and conservatism. Representativeness means we assign probabilities based on how well the new data represent the event, largely ignoring the prior. If only 15% of cars are blue and somebody who is wrong 20% of the time tells you a car is blue, you'll generally go with an 80% chance the car was blue. This, for those who have studied stats, is dead wrong: the correct answer is about 41%. Conservatism means you start with a prior and resist updating your beliefs, giving little weight to new data. So a good narrative can entrench an incorrect belief very easily.
We also have to deal with the so-called law of small numbers: the belief that a remarkably small sample should be representative of the population that generated it. This really comes out when you ask people to generate a short set of random numbers. The sequences they come up with tend to have negative autocorrelation (a big number is followed by a small one), something a genuinely random series wouldn't systematically show. The upshot is that a small number of narratives are often assumed to be representative of an entire population, particularly if those few narratives agree. This just isn't the case.
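A quick way to see the benchmark: the lag-1 autocorrelation of a genuinely random sequence is near zero, whereas human-generated "random" sequences tend to come out negative. A minimal sketch, using a simulated (not human-generated) series:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a numeric sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(0)  # for reproducibility
truly_random = [random.random() for _ in range(10_000)]
# A genuinely random series: autocorrelation is close to zero.
print(abs(lag1_autocorr(truly_random)) < 0.05)  # True
```

A sequence alternating high and low values (the pattern people tend to produce) would instead yield a clearly negative number.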
Don't get me wrong, I'm not saying there is no place for narratives in research or teaching; they're a great place to start, but a horrible place to stop. If narratives are all we consider, we're going to get it wrong.