Some things make sense. Others do not. Some things that do make sense make more sense than other things that make sense, and vice versa. The study of logic is devoted to finding out which things make the most sense to the most people and putting those things on paper. In 300 BC, the Stoics of Ancient Greece developed a pretty sensible system of formal logic.
It sounds obvious, but the key concept of propositional logic is the proposition. Simply put, a proposition is something that is exclusively either true or false. As creatures with limited mental capacity and language ability, we often represent propositions by sentences in human language: for example, 'All men are mortal', 'Some men are women', 'Man is not a gender-specific term'. However, not all statements expressible in language represent propositions. They may contain ambiguous terms or contradictions, e.g. 'Logic is fun' or 'This sentence is false'. Some sentences aren't even statements at all, e.g. 'What is the meaning of life?' or 'Stop using stupid example sentences'. Such things are not useful as examples of propositions.
Much oxygen and many calories have been spent arguing over the strict definitions of true and false, but when it comes down to it, it's simply difficult to define something with strict terms when the very thing you're defining is supposed to be the basis of all strict terms. True propositions are meant to represent things of actuality, whereas false propositions are propositions that aren't true. Logic isn't really involved with the knowledge of precisely which propositions are true, but with the relationships between the truth-values of propositions. The actual content of a proposition usually doesn't matter to a logician. In fact, most logicians would rather not know whether a given proposition is true. So formal symbolic logic begins with throwing away the messy natural languages and just giving all the propositions letters. 'P', 'Q' and 'R' are favourite choices.
A man named George Boole is generally given credit for designing what is now called 'symbolic logic' way back in the 19th Century. Symbolic logic is a nifty way to communicate the concepts of propositional logic without having to use a natural language. The basis of symbolic logic is writing down what's true. Writing down what's not true is the realm of scratch paper and margins. When we want to say that the proposition represented by 'P' is true, we simply write 'P'. The act of writing down 'P' is often referred to as saying that 'P' is a theorem of your logic. If we want to say that the proposition 'P' is not true, we use a symbol of some sort, which is often the tilde, '~'. Some people like using minus signs, '-', or exclamation points, '!'. The proposition named '~P' is called 'the negation of P', and it's defined along these lines: '~P' is true exactly when 'P' is not true, and vice versa.
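The negation rule above can be sketched in a few lines of code. Python is my choice here, not the original's; a proposition becomes a boolean, and the tilde becomes the built-in 'not'.

```python
# A minimal sketch (the language and the name 'neg' are assumptions,
# not from the original): a proposition is a boolean value.
def neg(p):
    # '~P' is true exactly when 'P' is not true
    return not p

# A proposition and its negation never share a truth-value.
for P in (True, False):
    print(P, neg(P))
```

Running this prints the two-row truth table for negation: each row pairs a truth-value with its opposite.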
This is all you really need to have a complete logical system. Other logical systems can and have been created off this basis. New symbols can be introduced, provided they come with a strict method for determining the truth-values of phrases using these symbols. This will assure that you have a well-formed logical system. But well-formed isn't good enough. A consistent system makes much more sense. By 'consistent' we mean 'contains no contradictions'. In symbolic logic terms, this means that you shouldn't be able to write down 'p' and '~p' as theorems on the same piece of paper, where the lowercase 'p' can be any proposition at all, even if it's composed of other propositions. That might sound complicated, but it really does make sense. Either 2 + 2 = 5 or it doesn't. The presence of both truth and falsity is not an option for a proposition. We designed them to work that way.

There are a standard bunch of binary symbols used in propositional logic, and they correspond to other sensible facets of human thought. Humans often like to talk about more than one thing as being true at the same time, so we've got a symbol for that too. The symbol is often an upside-down 'v', '^', although some systems use an ampersand, '&'. All these different symbols are intentionally designed to confuse people and make them think that logic is something fancy and only fit for philosophers and mathematicians, but it isn't complicated at all. 'p^q' is true only when the propositions represented by both 'p' and 'q' are true. It's false at all other times. Another commonly used symbol is the opposite of the conjunction we just described. The disjunction is usually symbolised by a normal 'v' and roughly translates into English as 'or'. 'pvq' is true when either 'p', 'q', or both are true. It's only false when both 'p' and 'q' fail to be true.
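The truth conditions for conjunction and disjunction can be tabulated mechanically. Here's a sketch in Python (the function names 'conj' and 'disj' are mine, not standard notation); each connective becomes a small function on booleans, and looping over all four combinations prints its full truth table.

```python
from itertools import product

def conj(p, q):
    # 'p^q' is true only when both 'p' and 'q' are true
    return p and q

def disj(p, q):
    # 'pvq' is only false when both 'p' and 'q' fail to be true
    return p or q

# One row per combination of truth-values: p, q, p^q, pvq.
for p, q in product((True, False), repeat=2):
    print(p, q, conj(p, q), disj(p, q))
```

Four rows suffice because two propositions have exactly four possible combinations of truth-values; conjunction comes out true on only one of them, disjunction false on only one.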
The biconditional is another simply defined symbol which is supposed to stand for the concept of equivalency, that two propositions basically mean the same thing. Usually, this is represented by three horizontal lines or by a two-directional arrow, '<->'. When both 'p' and 'q' are true, 'p<->q' is true. Also, if both 'p' and 'q' are false, 'p<->q' is true. If 'p' and 'q' have different truth-values, then 'p<->q' is not true. Still makes sense, right?
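Since the biconditional is true exactly when the two truth-values match, it can be sketched as plain equality of booleans (again in Python, with a made-up name 'iff'):

```python
from itertools import product

def iff(p, q):
    # 'p<->q' is true when 'p' and 'q' have the same truth-value
    return p == q

# True on the two matching rows, false on the two mismatched ones.
for p, q in product((True, False), repeat=2):
    print(p, q, iff(p, q))
```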
Well here's where it gets a little stickier. One of the most useful concepts in rational thought is that of implication. We say that proposition 'P' implies proposition 'Q'. This is most often expressed in English via the 'if... then' construction, sometimes without the 'if'. Unfortunately, there are a lot of other things we express using the same construction. The counterfactual implication represented by the sentence 'If I were born in the USSR, I would be a communist.' deals with a very different concept than the conditional implication 'If you clean your room, I'll take you to the movies.' Standard propositional logic deals with this second type of conditional, which is usually symbolised by a sideways horseshoe symbol or an arrow ('->'). So when is 'p->q' true? It's pretty easy to see using the example above with 'p' as 'You clean your room' and 'q' as 'I'll take you to the movies.' If you clean your room, and I take you to the movies, then clearly the statement is true. So it's true when both 'p' and 'q' are true. It's obviously a false statement when 'p' is true and 'q' is false (if you clean your room and I don't take you to the movies). But if you don't clean your room and I take you to the movies, I might be a sucker, but I certainly haven't contradicted that statement. The same goes for the case when both propositions are false. So the only time 'p->q' is false is when 'p' is true and 'q' is false.
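The room-cleaning reasoning above pins down the conditional's whole truth table: false only when 'p' is true and 'q' is false. That condition is equivalent to '(not p) or q', which gives a one-line sketch in Python (the name 'implies' is my own):

```python
from itertools import product

def implies(p, q):
    # 'p->q' is false only when 'p' is true and 'q' is false,
    # i.e. it's equivalent to '(not p) or q'
    return (not p) or q

# Only the row where p is true and q is false comes out false.
for p, q in product((True, False), repeat=2):
    print(p, q, implies(p, q))
```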
So this is the basic set-up of propositional logic. Working with propositional logic and proving things in it is the realm of natural deduction proofs. And this is by far the least complicated of the logics. All sorts of quantifier and modal logic systems exist which better mimic the full complexity with which we make sense. Some of these make less sense to certain people, but in general, we can all rely upon the sensibility of propositional logic.