Sometimes you find yourself needing to derive a conditional. When this is the case, it is convenient to use a technique called a "conditional proof" (CP). CP allows you to derive a conditional (hence the name) that you need in a proof, either as the conclusion or as an intermediate step. The technique works by assuming a proposition, deriving something from it (and any other available propositions), and then linking the assumed proposition and the derived proposition together in a conditional statement. The resulting conditional could usually have been arrived at in some other way, but a CP is often more convenient.
The theory is simple. If a proposition (P) is assumed, and from P together with the given premises another proposition (Q) can be derived by applying the inference and equivalence rules, then the conditional 'if P then Q' is demonstrated. A conditional proof must be bracketed from the assumed premise down to the last line within the assumption, and the first line outside the bracket is always a material implication. In a conditional proof, only the final line beyond the bracket is proven, and that line must have the horseshoe as its dominant operator. Schematically, here is the form of a conditional proof:
┌→2. Φ          AP
|    ⋮
|  n. Ψ
n+1. Φ → Ψ      2-n CP
CP often makes proof construction much more straightforward than if you limit yourself to direct proofs. For instance, you may have the conditional 'p ⊃ (q · r)' as a line in your proof, but need the conditional 'p ⊃ q'. Since 'p ⊃ q' does follow from 'p ⊃ (q · r)' (as a truth table would show), you could construct a direct proof for 'p ⊃ q'. Such a proof, however, is rather tricky, as it involves Distribution:
1. p ⊃ (q · r)            pr   ∴ p ⊃ q
2. ~p v (q · r)           1 Impl
3. (~p v q) · (~p v r)    2 Dist
4. ~p v q                 3 Simp
5. p ⊃ q                  4 Impl
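The claim that 'p ⊃ q' follows from 'p ⊃ (q · r)' can be checked mechanically, just as a truth table would. Here is a minimal sketch in Python (the helper name `implies` is ours, introduced for illustration):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a ⊃ b is false only when a is true and b is false."""
    return (not a) or b

# Run through every assignment of truth values to p, q, r.
# The argument is valid if the conclusion 'p ⊃ q' is true in
# every row where the premise 'p ⊃ (q · r)' is true.
valid = all(
    implies(p, q)                      # conclusion
    for p, q, r in product([True, False], repeat=3)
    if implies(p, q and r)             # keep only rows where the premise holds
)
print(valid)  # True: the entailment holds
```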
Conditional proof (CP) offers a simpler and more direct route to establishing the desired conditional. When using CP, begin by ASSUMING the antecedent of the conditional that you want, in this case, 'p'. Then, using the standard rules, derive the consequent. You then discharge the assumption (close it off) by deriving the desired conditional, justifying it by the sequence of lines beginning with the assumed antecedent and ending with the consequent.
Here is how it works:
1. p ⊃ (q · r)    pr   ∴ p ⊃ q
┌→2. p            AP
|  3. q · r       1,2 MP
|  4. q           3 Simp
5. p ⊃ q          2-4 CP
The vertical line beginning at line 2 and ending in a horizontal line at line 4 marks the scope of the assumption. Lines 2-4 serve to justify line 5, but they cannot be used in any subsequent line of the proof; they are closed off from the rest of the proof. You are free, however, to use line 5 as you need it. To be sure, the CP proof takes just as many lines as the direct proof in this case, but it does not require two steps of Impl and one step of Dist.
Many people are initially uncomfortable with CP; they feel like it is some sort of trick. It is not. When you make an assumption in a CP, you are in effect asking, "What would be the case IF this assumption were true?" A certain conclusion would follow, because we can derive it from the assumption plus our premises. That is, IF the assumption is true, THEN something else follows. But that is all that the conditional says: if P, then Q.
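The point can be put semantically: deriving Q from an assumption P (plus whatever premises are on hand) amounts to the same thing as deriving 'P ⊃ Q' from the premises alone. For material implication this is a tautology, which a quick Python sketch can confirm (here `x` stands in for the other premises, a simplification we introduce for illustration):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a ⊃ b."""
    return (not a) or b

# '(x and p) implies q' has the same truth value as
# 'x implies (p implies q)' in every row, so assuming p to get q
# is equivalent to proving 'p implies q' outright.
tautology = all(
    implies(x and p, q) == implies(x, implies(p, q))
    for x, p, q in product([True, False], repeat=3)
)
print(tautology)  # True
```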
When using conditional proof, you are allowed to make multiple assumptions. You may nest your assumptions, that is, make one assumption within the scope of another, and you may make a second assumption after having discharged your first.
For example, nesting assumptions makes it rather easy to show that F ∴ D → (E → F) is a valid argument.
1. F pr ∴ D → (E → F)
┌→2. D AP
| ┌→ 3. E AP
| | 4. E ● F 3,1 conj
| | 5. F 4 simp
| 6. (E → F) 3-5 CP
7. D → (E → F) 2-6 CP
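The validity of F ∴ D → (E → F) can also be confirmed by brute force over the truth table, as a sanity check on the nested proof above (again, `implies` is a helper name of our own):

```python
from itertools import product

def implies(a, b):
    """Material conditional."""
    return (not a) or b

# The argument is valid if the conclusion D → (E → F) is true in
# every row where the premise F is true.
valid = all(
    implies(d, implies(e, f))
    for d, e, f in product([True, False], repeat=3)
    if f  # rows where the premise F holds
)
print(valid)  # True
```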
We can use CP to show that some other rules are valid. For instance, we can use CP as a strategy to show that Transposition is a legitimate equivalence. Here's how we would do it (without relying on Transposition, of course, since that is the thing to be proved):
1. p → q pr ∴ ~q → ~p
┌→2. ~q AP
| 3. ~p 1,2 MT
4. ~q → ~p 2-3 CP
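Since Transposition is an equivalence rather than a one-way entailment, the two sides should agree in every row of the truth table, which the following sketch confirms:

```python
from itertools import product

def implies(a, b):
    """Material conditional."""
    return (not a) or b

# 'p → q' and '~q → ~p' have the same truth value under every
# assignment, so Transposition is a genuine equivalence.
equivalent = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)
print(equivalent)  # True
```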