Pages

Monday, August 1, 2011

What's up with government spending?

The media does a horrible job of informing the public about economic issues. This is largely because the public is unaware of the basic macroeconomic models from which economic policies are derived. Phrases such as 'government spending' are tossed around without any explanation of where such spending fits into the big picture. So instead of forming sound judgments about these things, people end up relying on vague and misunderstood principles that were haphazardly pulled from the ideasphere or handed down from parents. I'm referring to principles such as "Never raise taxes," or "Government spending is never a good thing." I have yet to hear or read a good justification for these principles from any mouth not informed by macroeconomic models. Let's try to mitigate this.

Models such as the Classical model and the Keynesian model are good segues into more advanced economic theory, and they are actually largely relevant in current debates. Briefly, we'll look at a Keynesian model and a few of its implications. Nota bene: this is by no measure a complete look at the Keynesian model. In fact, it is a brief clip of the simple Keynesian model. However, it is definitely sufficient for our purposes, and the policy inferences it provides are still sound.

The distinctive implication of the Keynesian model is that government spending can be a good thing - assuming that an economy in equilibrium is a part of the good. Before we look at this, we need to learn a little bit about some economic variables, the components we'll be using in our analysis and the symbols we'll see in our equations.

The big one is GDP, which is represented with Y. GDP (gross domestic product) is a measure of all currently produced final goods and services. It's the output of a nation. Currently produced because we're not concerned with the goods and services from 1899. Final goods and services because some goods are intermediate, meaning they are used to produce other goods (such as the corn dedicated to feeding cows.) Counting these goods would be double counting. In the Keynesian model, economists make some assumptions which allow Y to be equated with other economic concepts. In our case, you'll need to know that Y can be equated with national income, the sum of all earnings from current production. GDP has some components.

Consumption, the spending people do every day, is the major component. This includes goods like food, cars, TVs, and haircuts. We represent consumption with, quite surprisingly, C. 

Another component is investment, which may be broken down into subcomponents. Investment largely refers to business fixed investment, the purchase of capital goods (stuff like factories and machinery.) Investment also refers to construction investment: the purchase and building of new homes. Lastly, there is business inventory. These subcomponents won't be crucial for our discussion, but they are good to know of. Investment is represented with I.

Our third component of GDP, and the focus of this post, is government spending. This type of spending refers to the purchases the government makes of goods and services. In other words, this component is very similar to consumption, except individuals aren't making the purchases - the government is. One thing to note, however, is that not all government spending goes into this component. Payments such as Social Security checks are not included. Government spending is represented by G.

Our final component is net exports. We say net exports because this entails exports minus imports. Exports refers to foreign demand for our goods and services; imports refers to our demand for the goods and services of other nations. Since imports are produced elsewhere, we subtract them in our analysis. Exports is represented by X, imports by Z. Now we can finally begin to look at the Keynesian model.
First things first: look at all of the variables we just defined. If you're astute, or just a good reader, you'll notice that C, I, G, X, and Z are all components of Y. We can write this as
1.      Y = C + I + G + X – Z

For now, however, we'll drop the X and Z and focus on a closed economy. Thus,
2.      Y = C + I + G

Our second equation tells us that every unit increase in any of the components yields a unit increase in output. Simple enough, right? Definitely. However, things do become a bit more complicated. In the Keynesian model, consumption is described by a function.
3.      C = a + bYD

Uh oh, more variables! To understand this you first need to understand what disposable income is, the concept represented by YD. As mentioned before, GDP can be equated with national income, given certain assumptions. Well, disposable income is just whatever income is left after taxes. So,
4.      YD = Y – T

where T represents taxes. We can plug this into our third equation to get
5.      C = a + bY – bT

But what do a and b mean? a is the amount of consumption that will still occur even if disposable income is equal to zero; it's assumed to have a positive value. b is the marginal propensity to consume: the increase in consumer expenditure per unit increase in disposable income. The thought here is that consumption rises as disposable income rises, but each additional unit of disposable income is not followed by a full unit increase in consumption (individuals also have the option to save their money), so b lies between zero and one. That completes all the variables we need in order to discuss government spending. Our equation is as follows.
6.      Y = a + bY – bT + I + G

And with some simple algebra we ultimately come to our final equation.
7.      Y – bY = a – bT + I + G
8.      (1 – b) Y = a – bT + I + G
9.      Y = (1 / (1 – b)) (a – bT + I + G)
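Equation 9 can be checked numerically. The sketch below uses made-up values for a, b, T, I, and G (none come from this post or any real economy), just to show the multiplier formula in action.

```python
def equilibrium_output(a, b, T, I, G):
    """Equation 9: Y = (1 / (1 - b)) * (a - b*T + I + G)."""
    return (a - b * T + I + G) / (1 - b)

# Illustrative (assumed) values: autonomous consumption a = 50,
# marginal propensity to consume b = 0.75, taxes T = 100,
# investment I = 150, government spending G = 200.
print(equilibrium_output(50, 0.75, 100, 150, 200))  # 1300.0
```

Note how small the term (a – bT + I + G) is compared to Y: with b = 0.75, the multiplier 1/(1 – b) scales it up fourfold.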

Now our focus is on the right-hand side of the equation, specifically on bT, I, and G. An assumption made in the Keynesian model is that I, business investment, is unstable. The reasons why aren't necessary for this discussion; just know that I was believed to rise and fall unpredictably. Obviously, this can push the economy out of equilibrium. Thus, enter T and G. Let's say that I decreases, so output decreases with it. The policymaker has two options to counteract this change: either increase government spending or cut taxes. This is where the value of government spending comes in. In many cases, increasing government spending may be preferable because of b, which has a value less than one. Since taxes enter the equation multiplied by b, a unit change in taxes will have less of an effect on output than a unit change in government spending! This is not to say that cutting taxes will never be preferable, but if you accept the assumptions made by the Keynesians, then increases in government spending can be sound.
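To see the claim about b concretely, here is a small sketch comparing the effect on Y of a one-unit increase in G against a one-unit tax cut, both read off equation 9. The value b = 0.75 is assumed; the post doesn't fix one.

```python
def dY_from_spending(dG, b):
    """From equation 9: a change dG in government spending moves Y by dG / (1 - b)."""
    return dG / (1 - b)

def dY_from_taxes(dT, b):
    """From equation 9: a change dT in taxes moves Y by -b * dT / (1 - b)."""
    return -b * dT / (1 - b)

b = 0.75  # assumed marginal propensity to consume
print(dY_from_spending(1, b))  # 4.0 -- one extra unit of G raises Y by 4
print(dY_from_taxes(-1, b))    # 3.0 -- a one-unit tax cut raises Y by only 3
```

The gap between the two is exactly the factor b: the first unit of a tax cut is partly saved rather than spent, while a unit of G enters demand in full.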
  
This type of reasoning could justify the bailout packages from Bush and Obama. The economy went into recession - output decreased. In order to increase output, Y, the government started spending money. Very simple. Or, at least, very simple in this discussion. Of course, one shouldn’t just expect the economy to come out of a recession with an increase in G. Rather, we should look to see what happens afterwards. If an increase in G leads to an increase in output, then we have some evidence supporting the Keynesian model. If not, then we have problems.

There you have it: a justification for an increase in government spending. The only problem is that economists aren't certain which macroeconomic model is correct, so we must take any inferences derived from these models with a grain of salt. Check back in the future, and I'll post some material explaining more of the Keynesian model and other models, too. Knowing these models, even just their basics, gives you a sound base for judging political policies, sound bites in the media, and claims made by your ultra-conservative/liberal dad.

Saturday, April 23, 2011

The rationale for rationality, part II.

We last spoke of rationality by looking at economic rationality, specifically at a solution concept in game theory which rational agents supposedly adopt in order to 'solve' a game. It turned out that economic rationality led to a rather queer outcome which many would deem irrational. Perhaps this allows us to say that economic rationality is necessary but far from sufficient in specifying what is meant by 'rationality.' The game we looked at, the Prisoners' Dilemma, had a much more efficient outcome available than the one achieved - perhaps articulating why that outcome is rational could help us provide a more robust account of rationality.

The aspect of rationality that I am going to introduce is epistemic rationality, meaning rationality pertaining to knowledge and beliefs. Typically in economics the notion of rationality doesn't need to be expanded to include epistemic rationality, because economics is largely about efficiency and maximizing payoffs. That, and describing epistemic rationality mathematically would be difficult - at least for me and my oft-misunderstanding undergraduate mind! Regardless, I think it's a crucial notion which can't be ignored, especially in game theory, which is a sub-discipline of economics.

So, epistemic rationality deals with forming good judgments/beliefs (interchangeable terms) from a body of evidence. You can think of beliefs as a function of evidence actually. We're given evidence - facts about the physical world, memories, pre-held beliefs - and then we use rules of inference to form judgments. In your everyday intermediate-college-student economic analysis, this type of rationality just isn't necessary. However, in game theory I think that this notion of rationality is necessary.

In the Prisoners' Dilemma, we've seen how decision principles meant to fulfill the purpose of economic rationality can lead to irrational outcomes. This is also apparent in other games, such as centipede games. In those games, players alternate turns, and on each turn a player may either take one coin from a finite pot (letting the game continue) or take two coins, which ends the game. Economic rationality would have the first player take two coins immediately, ending the game at once; the solution concept employed here is backward induction.
A two-player game (players A and B) in extensive form
Backward induction entails ex ante reasoning (before the game starts), and it goes like this. If I'm player 1 and my only concern is garnering the greatest payoff I can (as stipulated by economic rationality), then I should see that my greatest payoff comes from taking two coins at the end of the game. However, I will also see that player 2 is aware of this, and player 2 doesn't want a payoff of 2 when she could get 3 on the preceding turn by taking two coins. Player 2 will preempt me. This reasoning carries back all the way to my first move, where I should supposedly take two coins to maximize my payoff. Be aware that centipede games usually include 101 coins total, so the difference between the payoffs at the end and the beginning of the game is much greater.
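The backward-induction argument can be sketched in code. The game tree and payoffs below are hypothetical (a four-move centipede rather than the 101-coin version), but the recursion is the general idea: solve the last node first, then work backwards.

```python
def backward_induction(node):
    """Solve a finite take-or-pass game by backward induction.

    node is either ("leaf", (p1, p2)) or (player, take_payoffs, next_node),
    where player is 0 or 1 and take_payoffs is the payoff pair if that
    player ends the game on this turn.
    """
    if node[0] == "leaf":
        return node[1], []
    player, take, rest = node
    cont_payoffs, plan = backward_induction(rest)
    if take[player] >= cont_payoffs[player]:  # ending now is at least as good
        return take, ["take"]
    return cont_payoffs, ["pass"] + plan

# A hypothetical four-move centipede: each pair is the payoff if the mover
# takes two coins there; passing lets the pot grow.
game = (0, (2, 0),
        (1, (1, 3),
         (0, (4, 2),
          (1, (3, 5),
           ("leaf", (6, 4))))))

payoffs, plan = backward_induction(game)
print(payoffs, plan)  # (2, 0) ['take'] -- player 1 takes immediately
```

Even though passing all the way would give both players more, the recursion unravels cooperation from the final node back to the very first move, exactly as described above.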

Now, everybody who is anybody knows that taking two coins on the first move is a bad idea! If this were a longer game then anyone would agree that the players would 'cooperate' until the end and get much bigger payoffs! Thus, again, we have a disparity between economic rationality and the intuitively rational strategy. This is where I think that the need for epistemic rationality becomes explicitly clear.

If it's assumed, as it often is with the centipede game or the Prisoners' Dilemma, that both players are perfectly rational, then extending the notion of rationality to include epistemic rules of inference leads to an entirely different outcome, one in line with our intuitions. Look at it this way: both players want to maximize their payoffs, which is something a perfectly rational agent would do according to economists, and that seems pretty right. However, the players also have common knowledge between them. This entails that each player knows the other is perfectly rational, knows that the other knows every detail of the game, and all of that other mutha' jazz. Remember this and take the following into consideration. It has been argued that rules of inference ultimately lead to a strong thesis (dubbed the 'Uniqueness Thesis') which says that a given "body of evidence justifies at most one proposition out of a competing set of propositions" (Feldman 2007). In other words, this thesis claims that there is at most one rational thing to believe given a certain set of evidence. Of course, perfectly rational agents in the centipede game have the same body of evidence, and if our notion of rationality is extended to include epistemic rationality, then by the Uniqueness Thesis each player will come to the same conclusion. Each player, knowing the other is perfectly rational (as stipulated by the assumption of common knowledge), will know that both players will come to believe the same thing. Knowing this, each player will know that whatever he is thinking, the other will be thinking too. Thus, player 1 may either believe that the best strategy is to take two coins immediately or that he should cooperate until the end. Thanks to his well-justified belief that the other player is thinking the same as he is, he should clearly one-coin it 'til the end, because that will garner a much greater payoff. Ta-da!
Taking 'rational' to include epistemic rationality also leads to an outcome in line with our intuitions. Very similar reasoning can be applied to the Prisoners' Dilemma, which will then lead to the more efficient strategy profile of (Cooperate, Cooperate).

If this reasoning sounds good, then this gives good reason to think that both payoff maximization and adherence to rules of inference are necessary conditions for being rational. The question now is if the conjunction of these is also sufficient, and that will be something best left for another post!

Wednesday, April 20, 2011

Nash, Nash, Nash!

We've already seen a bit about Nash equilibria, but I think it would do anyone well to learn a little bit more about the concept and more importantly about game theory.

Game theory, itself, is an incredible discipline. Like any science, it is best done by describing the world with math - something which must be a part of the heroic in man. There is nothing greater than seeing certain aspects of the world being represented in mathematical generalizations! To go from particular observations and instances to equations which explain the general, underlying principles of reality is very truly astounding. Game theory seeks to model one specific bit of reality: strategic decisions people must make, given the decisions of other people.

Let's look at a simple game. Every game has certain elements: a set of players, a set of strategies for each player, the information available to each player throughout the game, and payoff functions which show the players' preferences over outcomes. Of course, this information can be represented in several ways. Two common ways are known as the 'extensive form' and the 'normal form.' We'll look at the normal form here. Now, consider the following game.

We have two players: player 1 has strategies (X, Y) and player 2 has (A, B). This nifty bimatrix also shows their preferences over outcomes, with player 1's utility listed first.
The Stag Hunt
This is the normal form in a bimatrix. A few implications come with formalizing a game like this. The most important is that the players are making their decisions simultaneously, and in this case each player knows that this is the game, that the other player knows this, and so on. The numbers represent payoffs, and the higher the better. Now! What would you say is player 1's best strategy? If you're thinking that the thought of a big, juicy nine means that player 1 should choose X, then that's good reasoning. Of course, we also must consider what player 2 might do. It's obvious that player 2 would also be enticed by the nine, and so player 1 would do well in choosing X. In fact, (X, A) is a Nash equilibrium and is what perfectly rational players would do. Interestingly enough, however, people in reality often choose Y or B - and (Y, B) is itself a Nash equilibrium, just a less efficient one. Settling for the guaranteed eight isn't payoff-maximizing, but the reasoning is that people aren't really 'Econs' (perfectly rational agents). Other people may not see that X and A are best, and so people guarantee themselves a good ol' eight with Y or B. Of course, this outcome is inefficient. Thus, we have a disparity between economic rationality and our heuristic human sense of rationality. Fascinating!

A quick point: you should check to see if the strategy profile (X, A) really is the Nash equilibrium. Ask yourself, is X the best strategy, given that player 2 plays A? And then ask the inverse from player 2's point of view. You should conclude that, yes, X and A are the best responses, given the other player's strategy. I mention this because a crucial insight about the NE is that such equilibria are composed of best responses. This is where the concept gets its normative force (or its cutting power.) Think about it. I had to.
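The best-response check described above can be automated. The payoffs below are assumed, chosen to match the description (nine for a joint stag hunt, a guaranteed eight for hunting hare, zero for a lone stag hunter); note that the brute-force search turns up (Y, B) as an equilibrium too.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Find pure-strategy Nash equilibria of a two-player bimatrix game.

    payoffs[(s1, s2)] = (u1, u2) for row strategy s1 and column strategy s2.
    """
    rows = sorted({s1 for s1, _ in payoffs})
    cols = sorted({s2 for _, s2 in payoffs})
    equilibria = []
    for s1, s2 in product(rows, cols):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(payoffs[(r, s2)][0] <= u1 for r in rows)  # s1 best vs s2
        best2 = all(payoffs[(s1, c)][1] <= u2 for c in cols)  # s2 best vs s1
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

# Hypothetical stag-hunt payoffs consistent with the discussion:
stag_hunt = {("X", "A"): (9, 9), ("X", "B"): (0, 8),
             ("Y", "A"): (8, 0), ("Y", "B"): (8, 8)}
print(pure_nash_equilibria(stag_hunt))  # [('X', 'A'), ('Y', 'B')]
```

The function is nothing more than the definition: a profile is an equilibrium exactly when each strategy in it is a best response to the other.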

Anyway, that game is actually known as the 'stag hunt game', and is actually one of many famous little games meant to model common social situations. X and A are usually stipulated as 'Hunt stag' and Y and B are 'Hunt hare.' Here are a few more common games that you should get to know.

A Game of Douchebags
This is the game of chicken. Everybody has seen this in some lame movie. Two jerks race toward each other, trying to use their bravado to make the other person swerve. Here, X and A are 'stay the course'; Y and B are 'swerve.' Clearly, the profile (X, A) means death for both, so each gets a payoff of 0. If one swerves and the other stays, the one who stays gets a payoff of 3, while the swerver is glad to be alive but embarrassed, and gets a payoff of 1. Should both swerve, the embarrassment won't be as strong, and each gets 2. This game has more worth than a mere description of two jerk-faces showing off. It can also be used in evolutionary game theory (well, most games can be), in which the players don't make decisions but are instead strategies pitted against each other. X and A would then be 'Hawk' and Y and B would be 'Dove.' If two Hawks meet, they both lose due to a horrid fight. If a Hawk and a Dove meet, the Dove loses out but stays alive, and the Hawk gets some territory or something. If two Doves meet, then all is well.
Married... with Decisions!

Here we have a battle of the sexes. A wife and husband want to spend time with each other at an evening event, but a lack of cell phone service means they'll have to guess which event the other is going to. X and A are 'football' and Y and B are 'ballet.' Obviously, player 1 is then the man.
The Prisoners' Dilemma
We've already seen the Prisoners' Dilemma, but there's more to the game than my last exposition gave. The NE solution concept has already been spoken of, but there's another concept that can be used here. Notice how Y is always better than X, no matter what the other player does. If a strategy is always strictly better than every other strategy, then that strategy strictly dominates; if it is sometimes better and never worse, it weakly dominates. In this case, Y and B are both strictly dominant strategies. As such, Y and B are always best responses, and as we know, Nash equilibria are composed of best responses. Hopefully that gave you an "Aha!" moment, like it did me. If not, I may just be special.
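Strict dominance is easy to check mechanically. The Prisoners' Dilemma payoffs below are partly assumed (the post gives 6, 10, and 0, but not the defect-defect payoff, taken here to be 2); the function just compares one strategy against an alternative for every opposing strategy.

```python
def strictly_dominates(payoffs, player, s, t):
    """True if strategy s strictly dominates t for the given player (0=row, 1=col)."""
    others = sorted({k[1 - player] for k in payoffs})
    def u(mine, theirs):
        key = (mine, theirs) if player == 0 else (theirs, mine)
        return payoffs[key][player]
    return all(u(s, o) > u(t, o) for o in others)

# Hypothetical Prisoners' Dilemma payoffs (defect-vs-defect assumed to be 2, 2):
pd = {("X", "A"): (6, 6), ("X", "B"): (0, 10),
      ("Y", "A"): (10, 0), ("Y", "B"): (2, 2)}
print(strictly_dominates(pd, 0, "Y", "X"))  # True: defecting always beats cooperating
print(strictly_dominates(pd, 1, "B", "A"))  # True for the column player too
```

Since Y and B each strictly dominate, (Y, B) is the profile of best responses, which is just another route to the Nash equilibrium identified earlier.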

The Prisoners' Dilemma actually provides a significant justification for having a state. The dilemma nearly models the situation of public goods (though the zeros here don't accurately represent the payoffs) - things which make everyone in society better off. Think of clean air, roads, tornado warning systems, and even justice. We can think of the players in the game as the individual and everyone else. Let's look at the case of clean air. The individual and everyone else can cooperate and ride their bikes to work. This leads to some pretty clean air. However, the individual sees that the air would still be nice 'n' fresh even if they chose to drive their Hummer powered by sliced dolphin to work, and so they may choose to 'Defect' (Y and B) instead of 'Cooperate' (X and A.) Of course, everyone else may adopt this reasoning also, and thus everyone will choose to defect. This leads to the suboptimal outcome of (Y, B), which is air that burns your scalp off. Of course, if a state exists, then it may impose punishments if people choose this outcome. Marvel!

From the Prisoners' Dilemma we can glean another illuminating inference. For the most part, everyone chooses 'Cooperate.' The social norm is to cooperate, in fact. However, in the case of the Prisoners' Dilemma, this is often because the state adds costs to defecting, so cooperating becomes preferable (and the Nash equilibrium.) Generally, social norms can be explained in terms of Nash equilibria. Take driving. When I go out to drive, I am confronted with the choice of driving on the left or right side of the street. Now, I've passed the rigorous driving test of the Arizona DMV, so I can tell you a bit about the issue. Driving on the right side of the street is my best response, given that everyone else does so also. The same can be said for all other drivers. Thus, the norm is explained by the Nash equilibrium concept. Societies settle on Nash equilibria in various situations, including moral ones.

Now you have borne witness to the awesome power of game theory. We have considered cooperation, jerkface bravado, and an argument for the state, all by looking at some small, unimposing bimatrices. If you're interested in learning more, then I wholeheartedly recommend this book. We'll come back to game theory from time to time, but this will suffice for now!

Sunday, April 17, 2011

Can we rationalize rationality?

We like to tell ourselves that rationality is what sets us apart from the animals. Don't get me wrong, we think that with good reason. Nuclear physics is pretty difficult, and hey, so is balancing your budget (well, ask Congress about that one.) Animals definitely don't do these things, or at least not when we're watching. However, these are specific examples of what makes us human, as opposed to lesser animals. The meaning of the more general property 'rationality' isn't quite captured by naming off a few of the things we do (and not all of us do them well, either.) A brief look at precisely what rationality may be is merited, but first consider the following.

If I have one grain of sand, I definitely don't have a heap of sand. If I have two grains of sand, I still don't have a heap. Should I come across a third and quite possibly a fourth grain of sand, a heap still wouldn't lie before me. You can see where the reasoning is leading. The thought is that there will be no definitive grain N which allows us to yell "HEAP!" Why would there be? If there were, why that specific grain? Why not the one before it? Shouldn't N-1 grains still give us something that looks like a heap? But if something looks like a heap, then isn't it just a heap? A possible answer to these questions is that maybe 'heap' is a vague term - and perhaps the same could be said about rationality. Now, this is tantamount to saying "Oh, I dunno..." which is like admitting that you're the type of frat bro who doesn't know where the campus library is. Clearly not a satisfactory answer. Precise answers with necessary and sufficient conditions are always preferable to "Well, it's vague."

Game theorists and economists give a precise answer as to what rationality may be. They hold rationality to be the fulfillment of preferences. This often means that a person is rational if (and only if) they maximize utility. Intuitively this sounds pretty good, but for reasons soon to be clear, this analysis of rationality fails.

The 'Prisoners' Dilemma' game is probably the most famous game, and any student of political science, philosophy, economics, math, biology, psychology, or whatever should know about this game. It's that important. Probably its most useful application is as a model for public goods, but that's a story for another time. For now, let's see what we can say about rationality with this game. We'll be assuming that both players are perfectly rational and there is a common knowledge between them.
Represented in this bimatrix, each player is faced with a dilemma: cooperate or defect. Why these strategies are referred to as such isn't really important; there's a little story behind it - just Google it. Anyway, what is important are the numbers. Player 1's utility is listed first, Player 2's second. The higher the number, the better. Now let's do a little thinking. What would a perfectly rational player do? Well, let me tell you! A perfectly rational player would use what game theorists call 'solution concepts.' These are principles of decision which are meant to lead decision-makers to their best possible strategies. Solution concepts provide the cutting power in decision theory; they eliminate the bad strategies and leave you with the good ones. The solution concept I'll employ here is the one this game is famous for: the Nash equilibrium. Such an equilibrium is actually a vector of strategies (aka a strategy profile) of the form (Player 1's strategy, Player 2's strategy). An example would be (Defect, Cooperate), leading to a payoff of 10 for Player 1 and 0 for Player 2. Now, a Nash equilibrium is a special type of strategy profile, namely one composed of each player's best response, given the other player's strategy. In other words, it's a strategy profile from which neither player can profitably deviate with a unilateral change in strategy. Take a look at the bimatrix and see if you can determine which strategy profile is the NE.

If you're clever, which you must be since you're reading this blog (excuse the self-praise), you'll see that (Defect, Defect) is the NE and can skip this paragraph. If you're a bit dull, then read on! The concept is actually quite simple. First, note that the payoffs are symmetrical, so the reasoning for Player 1 also applies to Player 2. So, as Player 1, we ask ourselves what our best move is, given that Player 2 cooperates. Just look at the payoffs we could garner: 6 or 10. Defecting gives us 10, so our best response to cooperation is to defect. This isn't an NE, though: the profile can't contain Player 2's best response, since her payoff here would be 0. Now we ask ourselves what our best response is given that Player 2 defects. Again, defecting gives us our highest payoff, and as it turns out, the same goes for Player 2: defecting is her best response, given that we defect. Aha! The Nash equilibrium!

Of course, the NE in the Prisoners' Dilemma wouldn't seem to be rational. Clearly the profile (Cooperate, Cooperate) leads to a much better outcome. The point however is that we've used a game theoretic solution concept, the supposed tool of a perfectly rational agent, which contradicts the intuitively rational insight. Thus, the Prisoners' Dilemma models a situation in which rational actions achieve the irrational outcome. For this reason it's been suggested that this is a counterexample to the economic notion of rationality.

Now, maximization of utility definitely seems to be a big part of the rational landscape, but it's 'just the tip of an iceberg' (do these metaphors work together? can you can have icebergs in a landscape?) Perhaps this understanding of the concept is merely underdeveloped. I'll be looking into this over the next few posts. I'll be musing on what the conditions of rationality could be while also attempting to sound as if I know what I'm talking about. I'll even be able to relate this back to the epistemology of disagreement, which is good because I said I would write more about that. Now that's efficiency!

Friday, April 15, 2011

What deductive reasoning can do for you!

Perhaps the greatest misfortune of philosophy is that most see it as an artistic endeavor in which an author spouts as much bullshit about the human condition as possible. Sentences which leave the mind feeling lost, perplexed, and even violated are the perceived norm. In other words, philosophy is seen as something this guy would banter on about.
Hipsterdom is a celebration of being pretentious.
The good news is that this really isn't the case. Philosophy comes in all forms, from useful to pointless, good to bad, inane to brilliant!  The good, practical, and brilliant philosophy is precisely my purpose here.

Good philosophy really isn't about anything. Well, it is about something, but that's not the point. Think of the enterprise more as a mental gym. Entering the philosophical arena provides the means for creating a strong and robust mind that can cut through any bullshit with the greatest conciseness. Good philosophy is really just the most powerful methods of thought, and it just so happens that academic Western philosophy is this applied to things such as metaphysics, politics, ethics, and the like. Creativity is required, along with skepticism and the various tools which come with it. Good philosophy is casting doubt on any argument when possible because that is how you come to truth.

The first tool in any philosopher's box is deductive reasoning. Think of this as a tool used for its cutting power. The aggregate of ideas floating in the thinkspace is daunting, but of course not all ideas are worth their weight in salt. Deductive reasoning helps slash through the bad ones almost immediately. The question is how.

Let's suppose that my friend proposes a new thought to me, but I'm skeptical. Something is wrong with her thought, but I'm not certain what. I ask for the rationale behind the thought. If I've chosen my friends well, then I'll get an argument, a set of premises with at least one conclusion, in response. Suppose it went something like this.
  1. If it's raining outside, then we should go out to dance.
  2. It's not raining outside.
  3. Thus, we shouldn't go outside to dance.
When we speak like normal people, we usually state our conclusion first and then give the premises for why it is true. When we do analysis in philosophy, however, it's more useful to arrange the matter like this, with the premises listed first and the conclusion last (with a word to demarcate it, too.) Now, hopefully my friend's argument has struck you as a bit queer. Certainly the fact that it isn't raining doesn't necessarily mean that we shouldn't dance. Right? People in Tucson, Arizona would live quite dull lives if that were the case. Actually, that's generally true (partially why I'm writing this blog.) The major point here is that there are other reasons we might go outside to dance. The key word is 'necessarily.' Deductive reasoning is all about things whose negation we can't conceive, as Hume put it. In this case we definitely can conceive of the negation. In other words, even if the premises are true, they do not guarantee the truth of the conclusion. This is the notion of validity, and it's one of the most important concepts you can learn - a concept with a lot of cutting power.

A valid argument is one whose premises, if true, guarantee the truth of its conclusion. An invalid argument is one whose premises fail to give this type of support. Emphasis on "if true": when checking for validity, we don't really care whether the premises are actually true - that comes later. Instead, we can just pretend that they are. In fact, we don't even need to know what the premises say specifically; we can replace the details with symbols. My friend's argument can be represented like this.
  1. If P, then Q
  2. It is not the case that P
  3. Thus, it is not the case that Q
With P standing in place for 'it is raining outside' and Q for 'we should go out to dance.' And there you have it, the generalized form of my friend's argument. The same problem with the specific argument my friend gave applies here as well. Just because P isn't the case doesn't mean that the same has to be true of Q. There could be other things that yield Q! Thus, this argument (a form known as denying the antecedent) is invalid, because the premises don't guarantee the truth of the conclusion. Now, of course not every idea in the thinkspace is going to be supported by arguments of this form. There are plenty of others, some valid and some not. Here are a few valid forms; you would do well to become familiar with them.
Modus ponens:
  1. If P, then Q.
  2. P.
  3. Thus, Q.
Modus tollens:
  1. If P, then Q.
  2. Not Q.
  3. Thus, not P.
Disjunctive syllogism:
  1. Either P or Q.
  2. Not P.
  3. Thus, Q.
Hypothetical syllogism:
  1. If P, then Q.
  2. If Q, then R.
  3. Thus, if P, then R.
And a couple of invalid forms. First, affirming the consequent:
  1. If P, then Q.
  2. Q.
  3. Thus, P.
And affirming a disjunct:
  1. Either P or Q.
  2. Q.
  3. Thus, not P.
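Because validity ignores what P and Q actually say, it can be checked mechanically: just try every assignment of true and false to the letters and look for a case where the premises are all true while the conclusion is false. Here's a minimal sketch in Python (not from the post itself; the function name `valid` and the encoding of sentences as lambdas are my own choices for illustration):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff no truth assignment makes every
    premise true while making the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # found a counterexample assignment
    return True

# 'If P, then Q' is true whenever P is false or Q is true.
implies = lambda p, q: (not p) or q

# Modus ponens: If P, then Q; P; thus Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))        # True

# My friend's form (denying the antecedent): If P, then Q; not P; thus not Q.
print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False
```

The second call reports False because the assignment P = false, Q = true makes both premises true and the conclusion false - exactly the "other reasons to dance" counterexample from before.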
So there you go. That's a bit about validity. It doesn't exactly get the blood flowing, but it's an important tool in crushing other ideas. If an argument turns out to be invalid, then not much else can be said. It's inferring too much and is just a bad argument. There is plenty more to be said about deductive reasoning, but this is a good start. Now, whenever you hear a politician speak, be sure to catch their conclusion and premises. See if you can formulate their argument as we did here. Check for validity. And above all else, remember that a day without valid arguments is like a day without sunshine - dark, hopeless, bleak, and probably attracts hipsters.

Thursday, April 14, 2011

This question may seem innocent...

But it actually packs quite the philosophical punch. The question is: what is the rational thing to believe, given that a person you judge to be an epistemic peer (someone with the same inferential skills and evidence) disagrees with you? Like I said, that sounds simple enough, but attempting to answer the question can instill a good bit of humility in a person. This post will be an exposition of the discussion and will draw heavily from an article by Adam Elga. I'm planning on writing a little series of posts applying the issues discussed here to certain beliefs we have, such as political beliefs (isn't that exciting!)

Alright, so first things first - we need to break down the question to understand what it's asking, because it sets out a very specific situation. First of all, the question asks what the rational thing is to believe. Emphasis on 'rational' and 'believe'. The word 'rational' implies that we are going to be dealing with some normative rules ("you ought to...") of inference. What these rules are exactly, I'm not sure, but we'll see. 'Believe', on the other hand, is more obvious: we'll be dealing with forming beliefs, and not acquiring knowledge or the like (as is seen in epistemology). Okay, so far we have "What ought you to believe, given X?" Simple!

The next part of the question is key, "given that a person you judge to be an epistemic peer disagrees with you?" An explanation will be easier to give if I talk about epistemic peers first. Don't worry about the word epistemic, it just tells us we're dealing with beliefs (in this instance).

As Elga lays out in his article, there are people we know as advisors, people who affect our beliefs given their own. There are those who are superior to us: experts and gurus. Experts are superior to us both in their inferential skills (belief-forming skills) and evidence; we adopt their beliefs as our own. A good example is the weatherperson. Whatever she believes the weather will be like, I'll agree wholeheartedly. Why? She is schooled (I hope) in the methods of meteorology and has access to evidence I don't. Thus, I judge her to be an expert and defer to her judgments unconditionally (assuming her head is on straight). A guru, on the other hand, is superior to us, but there is a catch. We still defer to their beliefs entirely, but conditionally. We treat someone as a guru when we believe we have some evidence that they don't. Then, instead of adopting their judgment on a given issue right away, we ask ourselves what they would believe if they had our relevant information, and defer to that opinion instead.

Then we have those who are equal to us, our epistemic peers. Roughly, this means that a person has the same inferential skills and evidence as we do. A good note on evidence: this means more than just whatever physical evidence we can collect. We're also talking about pre-held beliefs and such. That's all we really need to know regarding advisors, except that there are epistemic inferiors.
An epistemic inferior
You've probably realized something queer by now. Experts and gurus are easy to come by, most definitely, but what about peers? There can't possibly be an exact peer; most people will be off from us by a bit - either slightly superior or inferior to us, right? When I was learning about this, and in subsequent discussions with friends about the matter, this was a key point. Sure, maybe there is no such thing as an actual peer, and that makes this question pointless. Well, not quite! The objection assumes the question asks what to do when someone who is in fact a peer disagrees, but the question actually asks what to do when someone you judge to be a peer disagrees! This is an important and simple difference; take a moment to understand it. Lastly, note the word "disagrees". I just want to make the point that the disagreement here is rational - it isn't traceable to a difference in evidence or inferential skills, because you judge the other to have similar enough evidence and skills!

Now we understand the question - and now to see how it's as powerful as I say it is! Elga frames the answer as a choice among three options: the rational thing is either to suspend judgment (basically say "I don't know"), give your own view more weight, or give the peer's view more weight. After criticizing the latter two and defending and expanding on the equal-weight view, he concludes that the equal-weight view is what leads to rational belief in peer disagreement.

I'll spare you all but the gist of the criticisms. For the most part, the greatest criticism of the other views is that giving extra weight to either person's judgment is quite unjustified; it leads to what Elga refers to as 'bootstrapping'. The idea is that in an initial disagreement with a peer, you might give your judgment extra weight simply because it's yours. This diminishes your judgment of your peer's standing slightly. In the next disagreement you are now more inclined to give your view extra weight, because part of your evidence is that your peer is at least slightly worse at making judgments than you. Repeat this process with every disagreement you have, and eventually you'll no longer judge your peer to be a peer - simply because you gave your view extra weight in the first place! That's not good reasoning; good reasoning would demote your peer on epistemic grounds instead. All of this could be said, but in reverse, for giving your peer's judgment extra weight. This leaves us with the equal-weight view.

This view basically says that if a person you judge to be a peer disagrees with you on a matter, you should suspend judgment on the issue. Why? Because you have no way of telling who has the correct judgment here, if either of you does! Was there bias involved? On whose part? Miscalculations? Who made them? Several questions like these give you plenty of reason to suspend judgment. Here's an important point, though. Surely, you and your peer will talk things over, maybe review evidence and reasoning, and come to a single conclusion together. Obviously, you don't suspend judgment then. You only do so prior to figuring all of that stuff out! This is a point that people usually miss in discussion.
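One common way the disagreement literature makes the equal-weight view precise (this formalization isn't part of my summary of Elga above) is in terms of credences, i.e. degrees of belief between 0 and 1: on learning that a judged peer disagrees, you give their credence the same weight as your own and move to the average. A minimal sketch, with the function name and numbers being my own invention for illustration:

```python
def equal_weight(credences):
    """Equal-weight view as credence averaging: each judged peer's
    degree of belief counts exactly as much as your own."""
    return sum(credences) / len(credences)

# I'm 90% confident the restaurant is on 5th Street; my judged peer,
# having seen the same map, is only 30% confident.
print(equal_weight([0.9, 0.3]))  # 0.6
```

Notice how this captures the suspension-of-judgment idea: when two peers hold opposite, equally confident views (say 0.9 and 0.1), the average lands at 0.5 - no opinion either way.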

An answer I often get to the question is, "The rational thing to do would be to discuss evidence, experiment, etc." Notice how the reply is about the "...thing to do." That is the rational action to take, but not the rational thing to believe!


SO! Given that the rational thing to believe, when a peer disagrees with you and prior to figuring out who reasoned incorrectly, is actually a suspension of judgment, I'll leave you with some questions.

  • There is a high probability that a person you would judge to be a peer exists and that this person disagrees with you on abortion (or any other issue). What should you believe in this case?
  • Our future selves are often better informed than our present selves (unless they're drunk). Given that and the equal-weight view, how should that affect our current beliefs?
  • Does the equal-weight view apply to all issues? What if you and your peer disagree on what "2+2" is?
Hopefully these questions will help you realize this stuff has actual implications for our everyday beliefs. I'll be referencing this material in the future - another good reason to know this stuff! Also, I'm attaching Elga's article. Obviously, he explains things much more thoroughly than I do, considering he is an expert on the subject (i.e., an actual philosopher). It's a good read, so enjoy!


Wednesday, April 13, 2011

Are people who drive big trucks assholes?


Anyone who knows me already knows my answer to this, and probably already knows that this will be a somewhat pretentious post. Oh no, I’m becoming what I hate! But let’s face it, plenty of actions (if not all) are subject to moral scrutiny – your driving behaviors included. In this post I’ll be applying utilitarianism and Kant’s Categorical Imperative to driving a big truck. Now I don’t mean any big ol’ truck. I mean the type of truck the good ol’ boys drive. I know some are used for work purposes. 18-wheelers come to mind and I’m sure plenty of people own a huge V-12 whatever for their small business. The people I’ll be referring to here are those who have a big truck for no reason other than they find some gratification in it. Such as this:
Learning from octopuses, the truck leaves a smokescreen in its trail for a quick escape.
So to start, we have to know what those two ethical theories I mentioned are. We’ll start with utilitarianism. This won’t be an exhaustive exposition of the theory, just a mere introduction for the purposes of this post. What’s the big idea in utilitarianism? Well, the big idea is to perform actions which bring about the greatest good, usually measured in utility. That sounds simple enough, but what is utility? It’s the satisfaction of preferences. Utility is similar to happiness, I would venture to say; you might even be able to say that it’s the promotion of someone’s or some group’s interests. Okay, so we want to bring about the greatest good, measured in utility. This means that utilitarianism is a rather economic moral theory: we attribute a moral property (right, wrong) to an action based on how its cost-benefit analysis comes out. The goal then is to maximize the good by performing an action up to the point where the costs equal the benefits. Let’s apply this to driving a big truck. To do so, we’ll have to think of what the benefits and costs are of people owning big trucks.
Clearly, there is going to be some benefit to driving big trucks. People need to get to work, and their trucks get them there. That’s good for the economy. Yet, there are other means of commute – car pool, bike, bus, subway, and the like. This makes me consider the commuting benefit of big trucks as minimal. Another benefit is the utility an individual gets from driving a truck. I imagine the satisfaction from controlling a big, heavy thing is pretty high. Since there are a lot of big truck owners, and I imagine they’re all pretty happy with their trucks, I’ll place the benefit here as significant. I think it’s also safe to say that their spouses or partners get a kick out of seeing them be so manly with the trucks. The spouses don’t have as much at stake, but their utility counts also. I deem it somewhat significant. Big trucks can also pull heavy things, such as cars out of mud. In other words, they come in handy every now and then. Yet, that’s why we can rent tow-trucks, U-Hauls, and the like. A moderate amount of utility there. I think this exhausts all of the benefits unique to big trucks.
The costs are more plentiful, however. The elephant in the room is pollution and gas consumption. Moving a two-ton vehicle requires energy, and a lot of it. Unfortunately, this means it requires a lot of fuel, which, once processed in the truck’s engine, turns (at least in part) into pollution. This is a significant negative externality which affects not only the truck owner’s contemporaries, but even future generations. The pollution harms the environment, and the gas consumed is a valuable resource. That’s a shame, really. Trucks can also impose negative externalities in other ways. For one, they’re big. Should I get into an accident with a truck, the damage to my car and the probability of my death are significantly higher than in an accident with a sedan. However, I suppose this could be partly offset by the added safety for the truck driver. The size issue doesn’t stop there, however. There have been plenty of occasions where my view of oncoming traffic has been blocked by a giant truck, whether it was making a left or right turn. That is horrifically dangerous! Other costs are less certain, so I won’t bother writing about them.
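The tallying in the last two paragraphs can be sketched as a toy cost-benefit calculation. To be clear, the numbers below are entirely invented for illustration (utilitarianism only requires that benefits and costs be comparable on a single scale); what matters is the structure of the calculation, not the figures:

```python
# Hypothetical utilities on an arbitrary scale, loosely tracking the
# qualitative weights in the text (minimal / significant / moderate).
benefits = {
    "commuting":          1,  # minimal: other means of commute exist
    "owner satisfaction": 5,  # significant
    "spouse satisfaction": 3, # somewhat significant
    "towing and hauling": 2,  # moderate: rentals can substitute
}
costs = {
    "pollution and fuel": 8,  # significant externality, hits future generations
    "accident severity":  4,  # bigger crashes imposed on others
    "blocked visibility": 3,  # obscured view of oncoming traffic
}

net_utility = sum(benefits.values()) - sum(costs.values())
print(net_utility)  # -4: benefits (11) fall short of costs (15)
```

On any assignment in this ballpark the sum comes out negative, which is just the arithmetic form of the verdict in the next paragraph: the costs outweigh the benefits.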
That’s an adequate look at costs and benefits of people driving big trucks. Needless to say, the costs certainly outweigh the benefits. However, not all hope is lost! Could it be morally permissible to drive a big truck under Kant’s Categorical Imperative?
Well, you probably don’t know, because you probably don’t know what that is. The general idea is that the Categorical Imperative is a rule that applies to everyone at every time. It says that you ought to act in a certain way; otherwise, your actions are immoral. Kant felt the need to share it with us through three different formulations, two of which are discussed the most in the philosophical community. One formulation reads, “Always act in such a way that you could will your maxims to be universal law.” Maxims? No, not scantily-clad ladies in a magazine, but rather motivations – that’s what maxims are here. In other words, unlike utilitarianism, where the outcomes of our actions determined their moral status, here our motivations will do that instead. The other formulation says, “Never use someone as a mere means to your own end.” The emphasis here is on “mere means”. Before we explain this, however, let’s look at the universal-law formulation.
With this formulation, you want to view every action as a person making an argument. Their argument is that everyone in the world should behave the way they’re behaving (or, more technically, have the same motivations). A good example of this is killing. Clearly, if everyone shared the same motivation in regards to killing – say, “I should kill whomever I please” – then the world would be in bad shape: who would be left alive? Thus killing is deemed wrong (in general) on Kantian grounds. Of course, a motivation such as “I should kill anyone trying to kill me” would be morally permissible. Now what about big trucks? What is the motivation behind driving a big truck? I’m not sure; I don’t drive one. My best guess would lead me to suspect it includes manliness or ruggedness. After all, who could deny the testosterone rush after revving the engine and leaving that lame little sedan at the light? Regardless, we can universalize the action. Of course, should everyone drive a big truck, many of the costs mentioned before would be exacerbated greatly. Not good.
There is also the other formulation, regarding not using others as a mere means. Kant believed that each person is intrinsically and unconditionally valuable, that each person has a certain dignity about them that ought always to be respected. Hence, even though he was a proponent of the death penalty, he approved of it only if performed in an appropriate and dignified manner. Using someone as a mere means violates this dignity, and thus the action in question would be immoral. I have a hard time seeing how this could apply to driving big trucks. I don’t think any person intends to use others by driving their truck… The only way I could justify this one would be if a truck driver were aware of the tragedy of the commons. The tragedy is that if each person acts in their own self-interest, society won’t be very well off at all. If everyone acts in the interest of the community, then society fares pretty well. Unfortunately, some people can free-ride and act in their own self-interest while others don’t. They get away with it while society remains okay. Applied to the truck scenario, we could imagine that a truck owner is aware of this: that the tragedy will be a polluted environment. However, he can get away with his Hummer if most other people drive sedans, carpool, etc. In other words, he doesn’t work towards the good of the society while others do. In this way he may be using others – he uses the social responsibility of others as cover for his own actions.
Actually, I think that’s a good justification.
Hopefully, I’ve convinced you that driving a big truck unnecessarily is immoral. In review, here is my argument:
  1. An action is moral only if it is morally permissible by the lights of utilitarianism or Kantian ethics.
  2. Big truck driving is not permissible by utilitarianism.
  3. Nor is it permissible by either of Kant’s formulations of the Categorical Imperative.
  4. Thus, big truck driving is immoral.
In other words, driving a big truck is selfish and I think those who do are assholes.