Expanding on our fourth discussion, we'll now look at the kinds of moves—rhetorical or otherwise—that can be made when setting out or defending an idea and countering others. We'll also consider some common errors in reasoning that come up in philosophical arguments from time to time, like anywhere else. The purpose of this piece is to provide a toolbox of concepts to use or refer back to when reading through and evaluating pieces of philosophy.
Making an argument
Although often we make arguments to try to learn about and understand the world around us, sometimes we hope to persuade others of our ideas and convince them to try or believe them, just as they might want to do likewise with us. To achieve this we might use a good measure of rhetoric, knowingly or otherwise. The term itself dates back to Plato, who used it to differentiate philosophy from the kind of speech and writing that politicians and others used to persuade or influence opinion. Probably the most famous study of rhetoric was by Aristotle, Plato's pupil, and over the years philosophers have investigated it to try to discover the answer to questions like:
- What is the best (or most effective) way to persuade people of something?
- Is the most convincing argument also the best choice to make? Is there any link between the two?
- What are the ethical implications of rhetoric?
Although we might take a dim view of some of the attempts by contemporary politicians to talk their way out of difficult situations with verbal manoeuvrings that stretch the meaning of words beyond recognition, hoping we'll forget what the original question was, nevertheless there are times when we need to make a decision and get others to agree with it. Since we don't always have the luxury of sitting down to discuss matters, we might have to be less than philosophical in our arguments to get what we want. This use of rhetoric comes with the instruction manual for any relationship and is par for the course in discussions of the relative merits of sporting teams.
In a philosophical context, then, we need to bear in mind that arguments may be flawed and that rhetorical excesses can be used to make us overlook that fact. When trying to understand, strengthen or critique an idea, we can use a knowledge of common errors, deliberate or not, found in reasoning. We call these fallacies: arguments that come up frequently, go wrong in specific ways and are typically used to mislead someone into accepting a false conclusion (although sometimes they are just honest mistakes). Fallacies have been studied since antiquity but, as was said previously, there has been something of a revival in recent times: today people speak of critical thinking, whereby we approach arguments and thinking in general in a critical fashion (hence the name), looking to evaluate steps in reasoning and test conclusions for ourselves. Hopefully this guide will help in a small way.
As we discussed above, some mistakes in reasoning occur often enough that we now have almost a catalogue of them to consider. Here we'll look at those that turn up in everyday situations, whether uttered by politicians hoping to win our votes or the guy at the bar selling his theory as to why his team lost again as a result of poor refereeing. We already looked at a sample in our fourth discussion, so some of the content should be familiar.
There are two kinds of fallacy: formal and informal. If we look back to the introduction to Logic, a formal fallacy is an argument wherein the structure reveals the flaw, while an informal fallacy is one wherein the structure may seem fine but the content is somewhere in error.
The plan of this treatment will be as follows:
- An example of the fallacy
- An explanation as to what's wrong
- Another example
- A more technical explanation, where possible
Hopefully by the end of the discussion these fallacies will be easier to spot, and you will probably start finding them all over the place. Although there is a certain amount of skill in noticing and countering them, doing so may also give us a grudging respect for those master rhetoricians who employ them with such cunning.
Argumentum ad hominem
This is a fallacy we studied before but it bears repeating, not least because it's perhaps the most frequently charged and least understood, in spite of its relative simplicity. Consider the following example:
You say that the conservatives' tax plans would leave the health service under-funded, but you're a liberal and would get rid of health care altogether.
Now, whether or not the characterization of the so-called liberal's beliefs is accurate (that question will be asked when we look at another fallacy to come), the point is that it isn't relevant: either the plans really will leave the health service under-funded or they won't (or, perhaps, the situation may be considerably more complex), but the political persuasion of the person making that criticism doesn't impact on the claim itself. That means that the complaint against the liberal is against him or her, not the claim; and that is what the Latin phrase means: an argument against the man (or woman—more accurately, "argument to the person"), rather than an actual counter-argument. In general, there are three kinds of ad hominem:
- Abusive—the person is attacked instead of their argument
- Circumstantial—the person's circumstances in making the argument are discussed instead of the argument itself
- Tu Quoque—the person is said to not practice what he or she preaches
Notice what the ad hominem is not: it doesn't say that the political beliefs of the liberal don't motivate his or her criticism in the first place, or that he or she wouldn't want to remove health care altogether (although it doesn't seem likely), but only that these things are not relevant to the point at issue. For this reason it is usually grouped as one of the fallacies of relevance. It also is not equivalent to an insult, as many people seem to suppose.
Consider now some other examples:
Some politicians claim we should raise taxes, but they are just greedy opportunists trying to gain more of our money to spend on themselves.
This is an ad hominem abusive, since it attacks a (perceived) quality of the claimant(s) instead of the claim itself. It has the form:
- P1: A claims B;
- P2: A is a C;
- C: Therefore, B is false.
You say we should lower taxes, but you are living beyond your means and so you would be expected to say that.
This is an ad hominem circumstantial, since it brings in the circumstances of the claimant when they are not relevant to the claim at issue (even if they might explain his or her interest). It has the form:
- P1: A claims B;
- P2: A is in circumstances C;
- C: Therefore, B is false.
You say people should learn to live within their means, but you are in debt yourself and make no effort to get out of it.
This is an ad hominem tu quoque, since it draws to our attention an inconsistency in the argument: if the claim is true, then the claimant should either change his or her ways or admit that the claim doesn't have to apply to everyone after all. It has the form:
- P1: A claims B;
- P2: A practices not-B;
- C: Therefore, B is inconsistent with A's actions.
Note that this differs from the first two examples: they are always fallacious, while the third is sometimes an acceptable move to make in an argument. Pointing out an inconsistency in someone's thinking does not show their position to be mistaken, but it may show their advocacy of it to be hypocritical. If we change the form slightly, it becomes fallacious:
- P1: A claims B;
- P2: A practices not-B;
- C: Therefore, B is false.
That someone may be a hypocrite, of course, does not show their ideas to be false. The first form of tu quoque is acceptable, but the second is fallacious.
In summary, then, the ad hominem fallacy brings irrelevancies to a discussion and distracts from the real point at issue.
Argumentum ad populum
This is another instance of a fallacy of relevance. For example:
You say that raising taxes will result in better public services, but hardly anyone believes that these days.
The problem here is that the number of people believing in an idea has no impact on its truth. Another example shows this nicely: a common presumption, it seems, is that people in the past almost universally believed the earth to be flat, while we now know that it isn't. The fact that so many people allegedly believed it was flat didn't change the shape of the earth accordingly, and if someone in those days had asserted that "everyone says the earth is flat" in defence of that claim, we would say that this didn't make it so: no amount of belief in a false idea can make it true. The irony is that historical inquiry teaches us that this example is itself false, even though plenty of people seem to believe it: belief in a flat earth was not in fact widespread, and historians have overturned this myth, even though many still hold to it.
The general form is as follows:
- P1: A is claimed;
- P2: x many people believe that A is false, where x is large;
- C: Therefore, A is false.
Reading beyond this argument, we can see that there are hidden assumptions to do with the ability of people to determine the truth of such questions on their own. For example:
- P1: A is claimed;
- P2: A majority of people is able to judge questions outside their area of expertise or knowledge with a high degree of validity;
- P3: It is possible to accurately gauge the collective opinion of people on such matters;
- P4: x many people believe that A is false, where x is a majority;
- C: Therefore, A is false.
Even here there are still presuppositions that remain implicit and could be drawn out by further analysis. Appealing to the masses—which is what the Latin term means—is irrelevant to the truth or otherwise of the claim. There are more complicated examples we could consider, like this one:
You shouldn't use racist language because almost everyone thinks it's wrong to do so.
Here a normative moral claim ("you shouldn't use racist language") is justified by appealing to the number of people who agree with it. Is this an argumentum ad populum, though? As we saw in our discussion of ethics, some moral thinkers suggest that issues of right and wrong are decided by intersubjective agreement; in that case, the claim would actually read something like this:
- P1: Moral issues are decided by intersubjective agreement;
- P2: Intersubjective agreement suggests that racist language is wrong;
- C: Therefore, it is wrong to use racist language.
Put in this form, it seems like a reasonable argument to make. For those who disagree about intersubjective agreement, however, P1 would be disputed and the attempt to justify the conclusion by appealing to P2 would be regarded as fallacious.
A slightly different version of this fallacy is the appeal to tradition, where reference is made not to the number of people who hold a belief but the (alleged) fact that it has been believed for so long (or that the belief is an integral part of a society or culture) that to question it is folly. For example:
You say we should no longer have a nuclear deterrent, but there has always been war and always will be and we need to be able to defend ourselves.
Here the traditional belief among a significant number of people that war is a reality of life is used to justify a claim about defence requirements. However, this is not obvious and needs to be argued in turn; the fact (even if true) that people have always believed war to be an inevitability of life does not make it so, nor does the number of people who might believe it now or in the future. Once again, though, the matter is more subtle: this could be a self-fulfilling prophecy, since if a majority of people feel war to be inevitable then they may be less likely to avoid it than those who are convinced just as surely that there is always a peaceful solution to any potential conflict. Appealing to tradition may be a reasonable move if the traditional belief is itself true.
In summary, the argumentum ad populum uses the weight of numbers to support claims that popularity alone cannot prove.
Argumentum ad verecundiam
This is a move in argument that may or may not be fallacious, depending on the circumstances. It means an appeal to authority, an example of which could be thus:
You say philosophy is important, but Professor X says it's a waste of time.
Here the speaker refers to the authority of the professor to counter the claim that philosophy is important. The problem is that the presumed authority may or may not be relevant: if the professor is (or was) a lifelong student of philosophy and decided after years working in the field that it really is a waste of time, then perhaps we should look into his reasons for saying so? On the other hand, if he is a professor of mineralogy, say, then—on the face of it—his opinion bears no more or less weight than anyone else's. It may be that additional factors are important: perhaps this professor has also studied philosophy or is known to us to be a particularly trustworthy and astute individual whose opinion we have come to value?
In short, appealing to authority where the authority does know (or is expected to know) what he or she is talking about is a legitimate move in argument, but when the authority's expertise is not relevant then it is fallacious—indeed, a fallacy of relevance, as before.
Matters are not always so clear-cut, though. Even if the authority in question really is an authority in the field, it may be that the question under consideration is one of much controversy among his or her fellow academics. In our example, other philosophy professors may be found who say that philosophy is important, so that appealing to authorities on one or other side of an argument does no more than apprise us of what they think. Take another instance:
Professor Y, a highly respected biologist at a prestigious university, says that the likelihood of life evolving on Mars is so small that, for practical purposes, we can assume it didn't; therefore spending money on searching for life on the red planet is a waste of valuable resources.
Here the implicit idea behind the criticism is that with only a finite amount of money to go around and other deserving causes in need of support, why should we support a quest that academics like Professor Y agree is very likely to fail? Is this argument fallacious? It depends: we would need to know more information, such as whether the professor is an expert in the appropriate area of biology and if there is any controversy among similar experts. If the professor's opinion is indicative of the relevant biological community, then perhaps this is information we should keep in mind when forming an opinion on the issue? On the other hand, if the professor is something of a maverick and the weight of biological opinion goes against him or her, then appealing to him or her as an authority could be seen as fallacious, distracting us from the point at issue. In general, we need to be careful in assessing the value of expert testimony, as well as its relevance.
Argumentum ad baculum
Consider the following argument:
You had better vote for an increase in taxes or the country will fall apart.
Here an appeal is being made to the consequences of not accepting the argument for raising taxes. The Latin term means an appeal to force (although here we also consider the argumentum ad consequentiam, the appeal to consequences, since they are so similar), and the claimant is implying that a consideration of what will (allegedly) follow from not raising taxes ought to force us to accept the proposal. That means the general form is thus:
- P1: Not doing A will result in B;
- P2: B is undesirable;
- C: Therefore, we should do A.
The fallacy occurs when the threat is in fact not related to the proposed action; in this formulation, that would mean challenging P1. In our example, perhaps not increasing taxes really would lead directly to the country falling apart (whatever that means), but it isn't obvious. Indeed, it sounds more like a rhetorical tactic to discount all the alternatives. What we want to know is whether P1 is true; if not, then the argument is fallacious.
Take another instance:
If you don't exercise your right to vote, extremist parties will take advantage of your apathy and gain more power. Is that what you want?
Here, once again, the force of the undesirable consequences is intended to make us accept the argument that we should vote. Is this fallacious, though? If we were to put it into syllogistic form, this time P1 would seem much more plausible. The important point is that the threat appealed to must be relevant to the issue at hand.
Argumentum ad misericordiam
This fallacy is concerned with an appeal to pity, usually for the circumstances of the claimant. Consider this example:
How can you reject my thesis? I worked on it for three years.
The problem here is that a bad idea is bad whether it is the result of five minutes or five decades of effort; the fact that someone may have spent a great deal of time coming up with it says nothing at all about its truth or otherwise, so asking someone to take account of the particular factors that went into it, and the discouraging thought of so much time wasted, is simply irrelevant. One way we could set this out is as follows:
- P1: If A is false, all the work put into it would have been wasted;
- P2: Wasted effort is to be avoided;
- C: Therefore, A is not false.
When we look at it this starkly, it seems obvious that the conclusion does not follow.
Now take another example:
How can we not donate aid to those countries less fortunate than our own?
Although this is close to another fallacy we'll consider later, we can see that here an appeal to pity (for the less fortunate countries) is intended to distract from the fact that there are other ways to help people, some or all of which may be better than donating aid. That some people may be in unfortunate circumstances does not imply that aid is the best way to help them; indeed, the fact that people elsewhere are in need of help is irrelevant to the question of whether aid is a good strategy, except insofar as it provides the problem in the first place. It may seem heartless to note this, but that is precisely what the appeal to pity intends: it hopes that we will not want to appear more concerned with the logic of the argument than with the people affected, and so the existence of alternatives is ignored.
In general, then, we once again have a fallacy of relevance.
Argumentum ad ignorantiam
The argument from ignorance usually involves assuming that something is true because it has not yet been proven false. For example:
You say that faeries don't exist, but you can't prove that they don't.
The implicit idea at work is that since the existence of faeries has (allegedly) not been disproved, it follows that they do exist. This is not relevant, however: that this disproof has not been forthcoming says nothing about actual existence or otherwise. Even if nothing disproving faeries ever comes about, this cannot form the basis of a proof of their reality.
To see some of the issues involved in the argument from ignorance, we can also look at a more involved example:
Evolution is false because it can't explain how life could evolve from non-life.
Here the assumption is made that for evolution to be a successful theory it must be able to explain how life itself came about in the first place; since it is supposed that no one can do this at the moment, it follows (allegedly) that evolution fails. We can try to put this in syllogistic form:
- P1: A successful explanation of life must be able to account for the development of life itself;
- P2: Evolutionary theory cannot do so;
- C: Therefore, evolution is not a successful explanation.
We can agree that P1 seems reasonable, but the problems lie with P2. It may be that evolutionary theory can provide an explanation, but that this is insufficiently understood by the person making the argument and hence thought to be unsuccessful. However, even if we suppose for the purpose of discussion that P2 does hold, the conclusion still need not follow. What we require is an additional premise, to the effect that evolutionary theory currently cannot provide an explanation and, moreover, that we have good reason to believe that it never will be able to.
Here we arrive at the crux of the matter: even if evolutionary theory cannot help us at the present time, it may be that tomorrow, next week or in several years with more research and study that the hoped-for explanation can be found. That we are ignorant of such an explanation now is no reason to suppose that we always will. In the syllogism, then, we might have:
- P1: A successful explanation of life must be able to account for the development of life itself;
- P2: Evolutionary theory currently cannot do so;
- C1: Therefore, evolutionary theory can never do so.
- C2: Therefore, evolution is not a successful explanation.
Viewed like this, we can readily see that C1 does not follow from P2. We would require another premise, such as:
- P3: There are strong reasons to suppose that evolutionary theory can never do so.
This, of course, is just the kind of premise that would be disputed, and it would require a good argument of its own. Without this expansion to show what is going on, the argument relies on current ignorance to justify a conclusion about the future.
Post hoc, ergo propter hoc
This Latin term means "after this, therefore because of this" and the fallacy involves mistaking a subsequent event for a consequent event. For example:
I lost my lucky hat and my team started a losing streak. When I found it again their fortunes improved. It just goes to show that my lucky hat works after all.

There are plenty of other sporting superstitions like this one we could look at. Although one concern here is that if the lucky hat didn't "work" we might attribute the run of losses to something else, the main issue runs thus: after I found my lucky hat the losing streak stopped; therefore, it was because of it that the team started doing well again—post hoc, ergo propter hoc. We have two subsequent events—the finding of the hat and the ending of the losing streak—that are assumed to be consequent, the former causing the latter. There are plenty of other ways to account for events, though: perhaps the team was missing several key players, or playing away from home? The objection is to note that it need not follow that two subsequent events mean that one caused the other.
Take another example:
The government increased the amount of benefits it provides and the level of immigration went up. This proves that people come here for the free hand-outs.

The argument here is that people are motivated to migrate to one country rather than another because of the assistance it can provide them with; the fact that the number of immigrants went up after the amount was increased is supposed to prove this theory. If we set it out clearly, we can see what is going on:
- P1: Benefit levels went up;
- P2: Immigration levels then increased;
- C: Therefore, immigrants chose which country to migrate to on the basis of benefit levels.
In fact, we would expect the matter to be far more complex, with potential migrants—both those who choose to leave their home country and those who are forced to by circumstances—weighing up many factors. What is missing, then, is another premise—something like:
- P3: All other factors remained the same.
If we take P1 and P2 as given, P3 still requires a strong argument of its own, especially since—on first inspection—it's hard to see how such dynamic factors could remain constant long enough to make this assessment.
In general, the picture we have is as follows:
- P: B follows A;
- C: Therefore, A caused B.
We could replace A and B with all manner of instances to see how plainly this argument fails; we would need that crucial additional premise that all other factors remained the same if we want to talk about causation. Since it assumes too much, this fallacy is usually called one of presumption.
False dilemma

This fallacy typically involves asking a question and providing only two possible answers when there are actually far more. It seems to be a favourite of politicians, especially when trying to win support for a none-too-plausible policy. Take this classic example:
You're either with us or against us.
The implicit argument here is that two possible positions exist with regard to the matter at hand: in favour or opposed. If we are not in favour, then, it follows that we must be opposed; and vice versa. The use of such tactics often gives us the opportunity to appreciate some fine—if overblown—rhetoric, too, like "do you support this war to defend our way of life or are you a cowardly, treasonous blackguard?" To expose the question as a false dilemma, all we need do is show that an alternative response exists. Other names for the same thing are the black and white fallacy, which immediately calls our attention to the shades of grey that are ignored, or the bifurcation fallacy.
Take another example:
Either you support lowering taxes or you're content to see this country go to hell in short order.
The person presenting such a choice presumably advocates the lowering of taxes and is offering us a choice of two options. Since the second one seems unpalatable, he or she assumes we will lend our support to the policy. Taking the best possible reading of this situation, we might have the following:
- P1: We can lower taxes or the country can go to the devil;
- P2: No other options exist;
- C: Therefore, a person not agreeing with lowering taxes is content to see the country fall apart.
Even this does not precisely address the statement as given; for instance, we could hold no opinion at all on the matter, or be insufficiently informed to do so sensibly. These are alternatives, so the choice given is a false dilemma. In the above formulation we could challenge P2, since it seems unlikely that only one policy has been proposed. A single alternative would again make the choice a false dilemma. As before, this is a fallacy of presumption.
Slippery slope

This fallacy occurs when a person is too quick with what they suppose to follow from various stages in their argument. Take this example:
If we accept restrictions on free speech then opponents of freedom will soon be asking for more restrictions elsewhere and before we know it we'll be living under a totalitarian regime.
The slippery slope is supposed to run from the acceptance of restrictions on free speech to the arrival of a totalitarian regime, so that once we start on this road there is (allegedly) no turning back—totalitarianism would be inevitable.
To check if the argument is fallacious we need to look at the initial premise and the conclusion and see if the latter follows. In our example this would give:
- P: Freedom of speech is to be restricted;
- C: Therefore, totalitarianism is inevitable.
Put so starkly, it doesn't seem very convincing. Moreover, it is by no means obvious that the premise need lead to anything other than what it states; to show otherwise, the person making the argument would need to add more detail in the form of additional premises, explaining why the conclusion necessarily follows. Without that, the fallacy lies in claiming that a slippery slope exists where it doesn't.
Complex question

This fallacy occurs when two or more questions are asked at the same time as though they are related, when in fact they need not be. For example:
Do you agree that we should lower taxes and increase prosperity?
Here we are asked two questions ("do you agree that we should lower taxes?" and "do you agree that we should increase prosperity?"), but they are linked together as though reducing taxes and increasing prosperity are the same thing. Sometimes, of course, that is the point: the questioner wants to say that lowering taxes will lead to increased prosperity, so the question is actually asking if we agree that one follows the other. Instead, we can separate the two and perhaps agree with one and not the other. For instance, we might want to increase prosperity but disagree that lowering taxes is the way to go about it.
Often the rhetorical purpose of a complex question is to associate a proposed course of action that might be rejected with a desirable consequence, suggesting that the latter depends on the former. This challenges the reader/listener to reject both, which would be hard to do without accepting the loss of the desirable part. The way around this strategy is to separate them. Take another example:
Do you want to study philosophy and waste your time?
There are again two questions being asked here: "do you want to study philosophy?" and "do you want to waste your time?" The implication we are supposed to draw is that studying philosophy is a waste of time, but we can ask if it is possible to answer "yes" to one question and "no" to the other. In this case, we can: we might think that studying philosophy is not a waste of time, but agree that wasting time is something to be avoided. In that case, we can give the "yes" and "no" answers and hence we have a fallacy of complex question.
In general, then, a complex question involves being asked something in the form "do you believe/agree with/disagree with A and B (and C, etc...)?" and realizing that the question can be separated into "do you agree with A?"; "do you agree with B?"; and so on. If A and B are related, then there may be no fallacy; but if it is possible to answer the separate questions with different answers, then a complex question has been used fallaciously.
Fallacy of accident

The fallacy of accident is sometimes also called a sweeping generalization, and this latter name gives an indication of what is going on. It occurs when a general rule is misapplied to a particular situation. Take an example:
The Bible says, "thou shalt not kill", but every time you eat you're killing something.
Here the argument is intended to show that the Biblical injunction is mistaken, since killing is unavoidable if we hope to survive. To untangle it and find where the error lies, we look for the general rule and try to see if it has been correctly applied or not. In this case, the rule is easy to spot: "thou shalt not kill". Next we need to ask where (or to whom) the rule is supposed to apply, and here we find the error: it is clear from the context that the rule is for humans and prevents them from killing other humans. Since it's possible to survive without needing to kill other people (although much of world history tends to suggest otherwise), to extend the rule to animals or plants, say, is to misapply it—to make a sweeping generalization that goes far beyond the original intent in an effort to defeat it.
If we fill in the implicit suggestion and put the argument into a syllogism it immediately becomes clear:
- P1: Thou shalt not kill (other humans);
- P2: We need to kill other animals and/or plants to survive;
- C: Therefore, following the rule "thou shalt not kill" would prevent our survival.
The conclusion simply does not follow.
Consider now another example:
You say that we should not kill others, but that means you wouldn't raise a hand while someone tried to murder you.
Here the person is taking the same general rule and applying it to the particular situation in which (it is implied) we must "kill or be killed". Thus we have the same rule and the application seems reasonable, but this time the sweeping generalization lies in supposing the rule "thou shalt not kill" to read something like "thou shalt never kill, under any circumstances". By taking an uncharitable reading of the principle, the person has over-generalized the rule and applied it to areas not included in its original formulation.
In summary, the fallacy of accident usually involves trying to disprove a generalization by finding a particular exception to it and assuming that the rule was supposed to apply universally. It occurs when we move too quickly from the general to the specific.
Hasty generalization
This fallacy is often called the converse accident because it is the opposite of the fallacy of accident above; that is, it involves moving too quickly from the specific to the general. For example:
Some murders are committed by men, so if we locked away all males there would be no more murders.
If we replace "murder" with any other social ill and "men" with a minority group, we can see that we have the kind of argument that has historically been used to justify organized or individual violence against such groups. The fallacy lies in making a general rule out of a few particular cases, hence the hasty generalization. In this case, we need only find a single counter-example to show that the general claim is false, such as a murder committed by a woman.
Another example could be as follows:
My friend lied to me, so it just goes to show that you can never trust anyone.
As before, the single specific instance of a friend lying has been used to justify a general rule that all friends (or indeed anyone at all) are liars. One or more friends who are not liars would serve as counter-examples to defeat the claim. To avoid the hasty generalization we have to be careful not to come up with a general rule from too few particular cases.
Red herring
This is not an obscure delicacy but a fallacy that involves bringing irrelevant ideas to a discussion as though they can add to it. For example:
You say that prisons are ineffective, but what about those who thought the streets were safe for them now? How will they feel when they see the person who robbed them going unpunished?
We might respond that to suggest prisons are currently ineffective is not to say that they should simply be closed down and everyone inside released (that would be a different fallacy, the straw man). The more important point, though, is that none of this is relevant to the issue at hand: if prisons do not work as they are, then that is so whether we have some improvements in mind, a better idea, or are just criticizing an imperfect system. By introducing this objection, attention is drawn away from the prison question and onto something entirely different.
In general, if a claim about A is countered by referring to B, the important question is to ask whether B is relevant to A. If so, it may be an objection worth considering; if not, the objection is a red herring.
Straw man
This fallacy takes its name from the image of someone stuffing some clothes with straw and then beating seven bells out of the resultant opponent, supposing thereby that they have somehow won a fight. The fallacy occurs when an argument is countered by taking a weaker form of it and showing where it fails, assuming that this means the original argument has also been defeated.
Take an example:
You say we should invest more in public health services, but taking everyone's money off them and deciding what they should spend it on for them is nothing less than totalitarianism.
We could render this as a syllogism as follows:
- P1: Investing more in public services is equivalent to taking everyone's money and deciding how it should be spent for them;
- P2: This is equivalent to totalitarianism;
- P3: Totalitarianism has been refuted previously;
- C: Therefore, the idea of investing more in public services is refuted.
Even if we accept P2 and P3, which we needn't, the important point is that P1 is false and does not accurately describe what was originally claimed. By making two different ideas equivalent the argument becomes easier to address but, since the refutation deals with one idea and the argument with another, nothing is actually accomplished. The argument is mischaracterized or misrepresented in order to make it easier to tackle, but by doing so it isn't tackled at all.
Another example could be this:
You advocate the death penalty but I doubt that anyone will accept televised hanging of people on meat hooks.
Here the idea of what the death penalty involves is mischaracterized (we would hope) by supposing that anyone advocating it is actually asking that people be publicly hung on meat hooks. Since (again, we would hope) this measure would not be accepted, the argument is considered defeated. A simplistic and deliberately repugnant version of the death penalty is used to discredit the idea when the person suggesting it probably said nothing of the sort; as a result, the refutation is unsuccessful.
This fallacy is unfortunately very common and some politicians tend to be adept at its use. It can be used in humour but perhaps the most important lesson to learn from it is not to unwittingly or otherwise make straw men of other people's ideas ourselves.
Equivocation
The fallacy of equivocation occurs when an important term in an argument is used in two (or sometimes more) senses. An example might be:
Why is it okay to kill time but not to kill people?
Here the word "kill" is being used in two different ways: in the first it is a figure of speech, where "killing time" means to use up some spare moments in one way or another; in the second it takes on the more specific meaning we normally associate with it. The person asking the question has run these two senses together, so that the same question using the word would mean different things depending on which sense we adopted. For instance, we could ask "how did you kill time?" and "how did you kill the person?" The first could be answered by describing all manner of activities; the second would have to be specifically about the way in which someone was killed. Asking the original question, then, trades on a confusion in the use of the word.
In general, we can tell if someone has equivocated by finding a term used in two or more contexts, such that its meaning in one is different than in the other(s). Take another instance:
My school is supposed to provide free tuition but I've seen restrictions in the lessons I've attended.
This time the word "free" has been implicitly equivocated, with it meaning "free of charge" in the first instance but "free of restrictions" in the second, resulting in a confused argument. If we set it out again, this time removing the problematic term and replacing it with synonyms, we might get the following:
- P1: Tuition at my school does not cost students any money;
- P2: There are restrictions on course content, etc;
- C: Therefore, the tuition does cost money after all.
The conclusion does not follow and the error is plain to see. Rewriting an argument in this way is sometimes the best way to note (or to demonstrate) that an equivocation has occurred.
Affirming the consequent
This is a fallacy we looked at in our sixth discussion, an example of which might be:
If it is raining then I will get wet; I am wet, so it must be raining.
The problem here is the implicit assumption that the only way to have gotten wet is via the rain, when in fact we can think of many other possibilities. For instance, suppose I had fallen into a swimming pool on a sunny day and, in order to give the impression that I was not embarrassed at all, I decided to start musing philosophically by making the above claim. We can immediately see that there is another reason for being wet, so the argument fails.
The general form taken by affirming the consequent is as follows:
- P1: If A then B;
- P2: B;
- C: Therefore, A.
This fails because, as with the example, we might have another possibility:
- P1: If A then B;
- P2: If C then B;
- P3: B;
- C: Therefore, A.
The fact that we have B fails to tell us if we should suppose that we have A or C also, so we cannot make the decision either way on the basis of the information available. There could be more than two possibilities, of course. When someone makes an argument that seems to suffer from affirming the consequent (assuming they are not doing so deliberately) they are assuming an extra step, namely that there is only one possibility:
- P1: If A then B;
- P2: Only A can cause B;
- P3: B;
- C: Therefore, A.
Unless P2 is true, though, the fallacy of affirming the consequent has occurred. A typical example from politics might be someone taking the credit for some positive news:
Since my policies were implemented, unemployment has gone down; therefore, my policies were a good idea.
The apparent claim here is that the policies were responsible for the lowering of unemployment, so we have:
- P1: If my policies are an effective measure for tackling unemployment, unemployment should go down;
- P2: Unemployment went down;
- C: Therefore, my policies were effective.
As we know from experience, however, there are many factors at work in the economy and there could be several possible reasons for the change in employment figures; but a quick-thinking politician can perhaps hope that we are not paying attention and use the fallacy of affirming the consequent to take the plaudits.
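Since affirming the consequent is a failure of form, we can expose it mechanically. The following Python sketch (our own illustration, not part of the original discussion) enumerates every truth assignment and looks for one where the premises "if A then B" and "B" hold while the conclusion "A" fails:

```python
from itertools import product

def implies(a, b):
    # The material conditional "if a then b" is false only when
    # a is true and b is false.
    return (not a) or b

# Affirming the consequent: P1 "if A then B"; P2 "B"; C "therefore A".
# A form is invalid if some assignment makes the premises true
# while the conclusion is false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and b and not a  # premises hold, conclusion fails
]

print(counterexamples)  # [(False, True)]
```

The single counterexample, A false and B true, is exactly the swimming-pool case: I am wet, but not because of rain.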
The opposite of this fallacy is affirming the antecedent, which is a valid form of argument. It takes the form:
- P1: If A then B;
- P2: A;
- C: Therefore, B.
In the context of our example, this would be like saying "if my policies are effective, unemployment will come down. My policies are effective, so they will lead to a lowering of unemployment." In Latin, it is known as modus ponens.
Denying the antecedent
This fallacy looks similar to affirming the consequent. An example might be:
All tomatoes are red; but that isn't a tomato so it can't be red.
The error here is immediate: the "thing" under discussion could be anything at all and is perhaps red; the fact that it isn't a tomato doesn't tell us anything about its colour, but only about one thing that it cannot be. We have:
- P1: All tomatoes are red;
- P2: This isn't a tomato;
- C: Therefore, it isn't red.
The item being considered could be a UK postbox, say: the premises would both be true but the conclusion false, which shows that the form is invalid and we have a formal fallacy. In general:
- P1: If A then B;
- P2: Not A;
- C: Therefore, not B.
To use the political example above again, we could have another instance of the same thing:
If my policies are effective then unemployment will go down; but my policies are not effective, so unemployment won't go down.
As we discussed, there could be several other reasons why unemployment does go down in spite of the bad policies, so the argument fails and is an example of denying the antecedent.
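As with affirming the consequent, we can test this form by brute force. The following Python sketch (again, only an illustration of ours) searches for an assignment where the premises "if A then B" and "not A" hold while the conclusion "not B" fails:

```python
from itertools import product

def implies(a, b):
    # "If a then b" fails only when a holds and b does not.
    return (not a) or b

# Denying the antecedent: P1 "if A then B"; P2 "not A"; C "therefore not B".
# Look for assignments where the premises hold but B is nevertheless true.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not a and b  # premises hold, yet B is true
]

print(counterexamples)  # [(False, True)]
```

The counterexample is the postbox: not a tomato (A is false), yet still red (B is true), so the conclusion "not B" fails.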
The opposite of this is denying the consequent, a valid form of argument that takes the form:
- P1: If A then B;
- P2: Not B;
- C: Therefore, not A.
For our example, this would give us something like "my policies will lead to a lowering of unemployment, but unemployment didn't go down so my policies were not effective." In Latin this is called modus tollens.
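By contrast, the two valid forms survive the same brute-force test: no assignment of truth values makes their premises true and their conclusion false. A short Python sketch (our own, for illustration only) checks both:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b".
    return (not a) or b

def valid(premises, conclusion):
    # A form is valid when every assignment that satisfies all the
    # premises also satisfies the conclusion.
    return all(
        conclusion(a, b)
        for a, b in product([True, False], repeat=2)
        if all(p(a, b) for p in premises)
    )

# Modus ponens: if A then B; A; therefore B.
modus_ponens = valid([implies, lambda a, b: a], lambda a, b: b)
# Modus tollens: if A then B; not B; therefore not A.
modus_tollens = valid([implies, lambda a, b: not b], lambda a, b: not a)

print(modus_ponens, modus_tollens)  # True True
```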
Begging the question
Sometimes people use the conclusion of their argument to prove it, whether accidentally or not. For example:
Theft is illegal because if it wasn't then it wouldn't be against the law.
This is called begging the question, or assuming what is to be proven in order to prove it. In Latin the fallacy is known as petitio principii. For this example, the question we could suppose was asked might be "why is theft illegal?" The person inquiring could be wondering why it is wrong to steal a loaf of bread to feed him- or herself, for instance. The reply states that theft is against the law, and hence illegal, which amounts to saying, "it's against the law because it's against the law"; so the conclusion (that theft is illegal) is used to answer the question ("why is theft illegal?").
Another example could be:
I know my friend is reliable because I trust him.
Here, once again, the conclusion (that my friend is reliable) is assumed beforehand (I trust him). There is no attempt to show why my friend is reliable, other than—ultimately—to say that he is reliable, so we end up with "my friend is reliable because he is reliable". In general, if we can recast an argument in the form "A is so because A is so" then we have reasoning that goes around in a circle and hence begs the question.
Unfortunately the phrase "begging the question" is frequently misused, particularly to mean "but this raises the question that..." This is something to be aware of and hopefully avoid.
Composition
The fallacy of composition occurs when the whole is assumed to have the same qualities as a part. For example:
My favourite team has bought great players, so we will win the league next year.
As many sports fans know, a team full of world class players does not make a world class team; often they simply cannot play together, or don't get along. The mistake lies in supposing that the qualities of the individual players will be carried over to the team composed of them. Another example could be:
We cannot drink hydrogen or oxygen, so we cannot drink a combination of them.
As we all know, we can drink water, so this argument fails. It does so because it assumes that a quality shared by the two separate elements will be retained by their combination. Sometimes such qualities do carry over from the individuals to the group (for instance, since each racehorse owner owns more horses than a typical non-owner, we would expect a group of racehorse owners to own more horses in total than a similar group of non-owners), but there needs to be a convincing reason why the step can be made. Without that justification we find the fallacy of composition.
To conclude, there are many pitfalls to be on the lookout for when reading, writing or discussing philosophy, politics and other subjects. As we learn to recognise them and realise that they share a structure or form we can understand, however, they become easier to notice and address.