Demonstrating Invalidity with Formal Logic

Many people are introduced to formal logic through propositional logic and are familiar with truth tables. One of the wonderful features of truth tables is that they serve as both a validity test and an invalidity test. But in the world of predicate logic, we cannot construct truth tables; natural deduction proof is our only validity test. Successful construction of a proof guarantees that an argument is valid, but an inability to find a proof does not establish that it is invalid. So, we’ll need a different way of demonstrating invalidity. We will formulate two ways of demonstrating that an argument in first-order predicate logic is invalid: the method of counterexample and the method of expansion. Both show that an argument is invalid by introducing semantic elements into our purely syntactic approach.

 

SEMANTICS, SYNTAX, AND PRAGMATICS

There are three levels to language: syntax, semantics, and pragmatics. Syntax deals with form and structure. Questions of grammar are syntactic, as are word order and internal structure.

Semantics looks at meaning. Questions about what a word refers to are semantic concerns. Semantic issues are important, complex, and philosophically interesting.

Syntax looks at what we say, semantics looks at what it means, and pragmatics deals with how we say it and how that can change the meaning. Issues of tone, or inferences drawn from what is left unsaid, are pragmatic matters.

Our logical languages are thin because they are purely syntactic languages. In a deep sense, they are languages with no meaning. When we translate spoken language arguments into truth-functional or first-order predicate logic, we remove the semantic content.

If semantics is so important, why do our logical languages ignore it? Because our interest is in determining deductive validity, which is a purely syntactic concept.

Whether a deductive argument is valid has nothing to do with what the argument is about. It only matters whether the conclusion follows from the premises. As a result, our artificial languages remove the flesh from the spoken language arguments and rigorously display the inner skeleton, because that is sufficient to answer our questions about validity.

But in demonstrating invalidity, we are going to restore semantic elements. The idea is that, in one sense, these semantic aspects are irrelevant. Consider the invalid argument affirming the consequent: If a, then b; b, therefore a.

This is a flawed argument form, regardless of what we fill in for a and b. We could say: “If you are a dog, you have a nose. You have a nose. Therefore, you are a dog.” Or we could say, “If you won the lottery, you would be happy. You are happy. Therefore, you won the lottery.”

Whether you are a dog or have won the lottery is irrelevant here. The content does not matter because it is the underlying form of the arguments that is to blame. But the flaw in the structure becomes clearer to us when we add some semantic content to that structure.

We are not declaring individual arguments to be invalid, but argument forms. When we add semantic content to an argument form, we are not saying merely that the filled-in version of the argument is invalid; rather, we are saying that the skeleton itself is flawed. We are saying that any argument with that same form is invalid, because we are showing that it is possible to construct a bad argument with that form.

 

USING COUNTEREXAMPLES

An argument is invalid if it is possible for its premises to be true while its conclusion is false. In truth-functional logic, we could construct a truth table in which every possible combination of truth-values for the constituent sentences is considered.

But another way is to simply come up with an example of an argument that has the same form but has true premises and a false conclusion. This is called the method of counterexample, and because we don’t have the ability to construct truth tables for first-order predicate logic, we will have to use it.

In order to give semantic content to an argument form in first-order predicate logic, there are three things we need to do.

  1. We need to give meaning to the quantifiers. (When we say “all x” or “some x,” all what’s or some what’s are we talking about?)
  2. We need to say which specific elements of this universe are picked out by our constants.
  3. We need to give meaning to the properties.

These combine to give what logicians call an interpretation. An interpreted argument has the semantic flesh put back on the syntactic skeleton, and thus we can talk about the truth or falsity of the interpreted sentences.

The goal is to generate an interpretation of our argument that results in the premises all being true and the conclusion false. This will conclusively show that the argument form is invalid.

If we want to construct an interpretation, we need first to give meaning to the quantifier. We do this by specifying what is called a domain of discourse, or a universe.

When we use a universal quantifier to say “all y,” what is the “all” we are talking about? It can be any set. It could be the set of all politicians or the set of all brown things. It could be the set of all things whatsoever—what is called an unrestricted domain.

Often, we will pick sets of numbers to be our domain of discourse because their properties are well defined and well behaved.

Next, we assign a specific member of the universe to each constant. If the domain of discourse is the set of positive integers, each constant becomes a particular number. If the domain is brown things, then each constant becomes a specific brown thing—not, for example, mud, but rather a particular mud stain on a particular shirt.

Each constant gets mapped onto a particular member of the universe, but two different constants can get mapped onto the same thing. People have multiple names. Some might call you by your name, while others might call you “mother” or “father,” for example. The same can be true for constants.

Finally, we take each property and assign it a meaning by selecting a subset of our domain of discourse. This can be done in two ways. We can simply choose members and put them together to form a subset. The property is then being a member of that subset. Or—and this is the more common way—we specify an actual property.
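
To pull the three steps together with a simple illustration (the particulars here are just an example): let the domain of discourse be the set of positive integers, let the constant a name the number 3, and let Fx mean “x is even.” On this interpretation, Fa says that 3 is even, which is false, while ∃xFx says that some positive integer is even, which is true.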

Note that the empty set, the set with no members, is a subset of every set. That means that it is acceptable to have empty categories. Indeed, sometimes it will be quite advantageous. For example, we can use “being a unicorn” as a property.

Once we have specified a domain of discourse, chosen a member for each constant, and assigned subsets to our properties, we can then translate our formal propositions into spoken language and determine if the premises of our argument are true and if our conclusion is false on this interpretation. If so, we have found a counterexample, and our argument is invalid.
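
To see how such a check might go in practice, here is a minimal sketch in Python. The argument form (premises ∀x(Fx → Gx) and ∃xGx, conclusion ∃xFx) and the interpretation chosen are assumptions made purely for illustration, not an argument from the text. The universal quantifier becomes all and the existential quantifier becomes any over a finite slice of the domain.

    # Illustrative counterexample check (an assumed example, not from the text).
    # Domain of discourse: positive integers (a finite slice is enough to evaluate).
    domain = range(1, 101)

    def F(x):             # interpret F as "x is negative" -- an empty category here
        return x < 0

    def G(x):             # interpret G as "x is positive"
        return x > 0

    premise_1 = all((not F(x)) or G(x) for x in domain)    # ∀x(Fx → Gx)
    premise_2 = any(G(x) for x in domain)                  # ∃xGx
    conclusion = any(F(x) for x in domain)                 # ∃xFx

    print(premise_1, premise_2, conclusion)                # prints: True True False

On this interpretation, the premises come out true and the conclusion false, so any argument with that form is invalid. Notice that F is an empty category, the very move licensed by the remark about the empty set above.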

 

USING THE METHOD OF EXPANSION

In addition to the method of counterexample, the other invalidity test is the method of expansion. Coming up with counterexamples requires that we fully endow our argument with semantic content. Expansions require only some semantic content, not full meaning.

We do this by starting with a small universe that contains only two things: a and b, for example. We then see if we can make the premises true and the conclusion false for this small universe.

We do this by eliminating the quantifiers, reducing the problem to truth-functional logic. If there are only two things, then our universal quantifier becomes a conjunction.

The sentence ∀xFx says that everything is F, but everything in our small universe is just a and b. So, ∀xFx is equivalent to Fa & Fb. Similarly, ∃xFx says that something is F. That thing has to be either a, b, or both, because that is all there is. So, ∃xFx is equivalent to Fa ∨ Fb. This holds for any sentence in our first-order predicate language.

Consider the sentence ∀x[(Dx ∨ Nx) → Jx]. We see that we have a universal quantifier, so we start by writing down a conjunction. On the left side, we replace every instance of the quantified variable—in this case, x—with a. On the right side, we replace every instance of x with b. This gives us [(Da ∨ Na) → Ja] & [(Db ∨ Nb) → Jb].

Suppose that we have a complex existential quantification—for example, ∃y[(Gy & −Fy) ∨ (−Gy & Fy)]. It is an existential quantification, so we start by writing down a disjunction, and on the left side, we replace every instance of the quantified variable—in this case, y—with a. On the right side, we replace every instance of y with b. This gives us the following:

[(Ga & −Fa) ∨ (−Ga & Fa)] ∨ [(Gb & −Fb) ∨ (−Gb & Fb)]

But what if it is not a quantification, but a truth-functional combination of quantified propositions? We can handle that, too. Recall that the scope of a quantifier ends at a truth-functional connective that is not shielded within parentheses. So, what we do is keep all of the truth-functional connectives in place and individually expand the quantifications.
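
For example (the particular sentence here is just an illustration), ∀xFx → ∃xGx is a conditional whose antecedent and consequent are each quantifications. We leave the arrow in place and expand each side separately for the universe containing just a and b, which gives (Fa & Fb) → (Ga ∨ Gb).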

How is this an invalidity test? When we expand the sentences, the quantifiers are gone. All we have are sentences that can be true or false and truth-functional connectives.

We have successfully reduced a first-order predicate question to a truth-functional question, and we know how to answer the truth-functional question of determining whether an argument is invalid.

We could construct a truth table and find a row in which all the premises are true and the conclusion is false. But truth tables quickly become unwieldy, so the preferred approach is to plug truth-values into the atomic sentences to see if we can arrange them to give us true premises and a false conclusion.
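
Here is a minimal sketch of that search in Python, assuming an illustrative argument rather than one from the text: a single premise ∃xFx and a conclusion ∀xFx, which expand in the two-member universe to Fa ∨ Fb and Fa & Fb. The search simply tries every assignment of truth-values to the atomic sentences Fa and Fb.

    from itertools import product

    def premise(Fa, Fb):       # expansion of ∃xFx: Fa ∨ Fb
        return Fa or Fb

    def conclusion(Fa, Fb):    # expansion of ∀xFx: Fa & Fb
        return Fa and Fb

    # Look for an assignment with a true premise and a false conclusion.
    for Fa, Fb in product([True, False], repeat=2):
        if premise(Fa, Fb) and not conclusion(Fa, Fb):
            print("Invalid form: Fa =", Fa, "and Fb =", Fb)
            break

Setting Fa true and Fb false makes the expanded premise true and the expanded conclusion false, which is exactly the kind of row we would otherwise have hunted for in a truth table.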