Talk:Logical graph

Revision as of 14:00, 11 December 2008 by Jon Awbrey (talk | contribs) (→‎Bridges And Barriers: + section + notes)

Notes & Queries


\(\cdots\)

Place For Discussion


\(\cdots\)

Logical Equivalence Problem

Problem

Problem posted by Mike1234 on the Discrete Math List at the Math Forum.

  • Required to show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

Solution

Solution posted by Jon Awbrey, using the calculus of logical graphs.

In logical graphs, the required equivalence looks like this:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ / 
          @         =         @

We have a theorem that says:

        y o                xy o
          |                   |
        x @        =        x @

See Logical Graph : C2. Generation Theorem.

Applying this twice to the left hand side of the required equation, we get:

      q o   o p          pq o   o pq
        |   |               |   |
      p o   o q           p o   o q
         \ /                 \ /
          o                   o
          |                   |
          @         =         @

By collection, the reverse of distribution, we get:

          p   q
          o   o
       pq  \ / 
        o   o
         \ /
          @

But this is the same result that we get from one application of double negation to the right hand side of the required equation.

QED
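
As a cross-check on the graphical proof, the equivalence can also be verified by brute force over the four rows of the truth table. Here is a minimal Python sketch (my own scaffolding, not part of the graph calculus):

```python
from itertools import product

# Brute-force check that not(p <=> q) agrees with (not q) <=> p
# on every assignment of truth values to p and q.
for p, q in product([False, True], repeat=2):
    lhs = not (p == q)        # not(p <=> q)
    rhs = (not q) == p        # (not q) <=> p
    assert lhs == rhs, (p, q)

print("not(p <=> q) = ((not q) <=> p) holds on all four rows")
```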

Discussion

Back to the initial problem:

  • Show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

We can translate this into logical graphs by supposing that we have to express everything in terms of negation and conjunction, using parentheses for negation and simple concatenation for conjunction. In this way of assigning logical meaning to graphical forms — for historical reasons called the "existential interpretation" of logical graphs — basic logical forms are given the following expressions:

The constant \(\operatorname{true}\) is written as a null character or a space.

This corresponds to an unlabeled terminal node in a logical graph. When we are thinking of it by itself, we draw it as a rooted node:

          @

The constant \(\operatorname{false}\) is written as an empty parenthesis \((~).\!\)

This corresponds to an unlabeled terminal edge in a logical graph. When we are thinking of it by itself, we draw it as a rooted edge:

          o
          |
          @

The negation \(\lnot x\) is written \((x).\!\)

This corresponds to the logical graph:

          x
          o
          |
          @

The conjunction \(x \land y\) is written \(x y.\!\)

This corresponds to the logical graph:

         x y
          @

The conjunction \(x \land y \land z\) is written \(x y z.\!\)

This corresponds to the logical graph:

        x y z
          @

And so on.

The disjunction \(x \lor y\) is written \(((x)(y)).\!\)

This corresponds to the logical graph:

        x   y
        o   o
         \ /
          o
          |
          @

The disjunction \(x \lor y \lor z\) is written \(((x)(y)(z)).\!\)

This corresponds to the logical graph:

        x y z
        o o o
         \|/
          o
          |
          @

And so on.

The implication \(x \Rightarrow y\) is written \((x (y)).\!\) Reading the latter as "not \(x\!\) without \(y\!\)" helps to recall its implicational sense.

This corresponds to the logical graph:

        y o
          |
        x o
          |
          @

Thus, the equivalence \(x \Leftrightarrow y\) has to be written somewhat inefficiently as a conjunction of two implications \((x (y)) (y (x)).\!\)

This corresponds to the logical graph:

      y o   o x
        |   |
      x o   o y
         \ /
          @

Putting all the pieces together, showing that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p\) amounts to proving the following equation, expressed in the forms of logical graphs and parse strings, respectively:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ /
          @         =         @

( (p (q)) (q (p)) ) = (p ( (q) )) ((p)(q))

That expresses the proposed equation in the language of logical graphs. To test whether the equation holds we need to use the rest of the formal system that comes with this formal language, namely, a set of axioms taken for granted and a set of inference rules that allow us to derive the consequences of these axioms.
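
One mechanical way to test such an equation is to interpret the parse strings directly: treat concatenation as conjunction and a parenthesis pair as negation, then compare the two sides on every assignment. The Python sketch below does this; the evaluator `eval_graph` is a hypothetical helper of my own devising, not a standard routine:

```python
from itertools import product

def eval_graph(s, env):
    """Evaluate a logical graph parse string under an assignment:
    concatenation is conjunction, (x) is negation, and an empty
    run of factors (a blank) counts as true."""
    pos = 0
    def term():
        nonlocal pos
        vals = []
        while pos < len(s):
            c = s[pos]
            if c == ' ':
                pos += 1
            elif c == '(':
                pos += 1
                inner = term()
                assert s[pos] == ')', "unbalanced parse string"
                pos += 1
                vals.append(not inner)
            elif c == ')':
                break
            else:
                vals.append(env[c])
                pos += 1
        return all(vals)
    return term()

# Compare both sides of the proposed equation on all assignments.
for p, q in product([False, True], repeat=2):
    env = {'p': p, 'q': q}
    lhs = eval_graph("( (p (q)) (q (p)) )", env)
    rhs = eval_graph("(p ( (q) )) ((p)(q))", env)
    assert lhs == rhs, env

print("the equation holds on all assignments")
```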

The formal system that we use for logical graphs has just four axioms, the ones presented in the main Logical Graph article.

Proceeding from these axioms is a handful of very simple theorems that we tend to use over and over in deriving more complex theorems; a sample of the most frequently used ones is presented in the main Logical Graph article.

In my experience with a number of different propositional calculi, the logical graph picture is almost always the best way to see why a theorem is true. In the example at hand, most of the work was already done by the time we wrote down the problem in logical graph form. All that remained was to see the application of the generation and double negation theorems to the left and right hand sides of the equation, respectively.

Reflection


\(\cdots\)

Inquiry Into Intuitionism

Notes on a discussion with "Gribskoff" (Manuel S. Lourenço) about his article on Intuitionistic Logic at PlanetMath.


\(\cdots\)

Bridges And Barriers

Notes on a couple of discussions that I found in the Foundations Of Mathematics Archives (FOMA) about building bridges between classical-apagogical and constructive-intuitionistic mathematics.

Background --

AM = A Mani
HF = Harvey Friedman
NT = Neil Tennant
SS = Stephen G Simpson
TF = Torkel Franzen
VP = Vaughan Pratt

Feb 1998, Intuitionistic Mathematics and Building Bridges
http://www.cs.nyu.edu/pipermail/fom/1998-February/thread.html#1160
NT: http://www.cs.nyu.edu/pipermail/fom/1998-February/001160.html
TF: http://www.cs.nyu.edu/pipermail/fom/1998-February/001162.html
SS: http://www.cs.nyu.edu/pipermail/fom/1998-February/001246.html
VP: http://www.cs.nyu.edu/pipermail/fom/1998-February/001248.html

Oct 2008, Classical/Constructive Mathematics
http://www.cs.nyu.edu/pipermail/fom/2008-October/thread.html#13127
HF: http://www.cs.nyu.edu/pipermail/fom/2008-October/013127.html
AM: http://www.cs.nyu.edu/pipermail/fom/2008-October/013142.html

Foreground --

Re: Classical/Constructive Mathematics
    Harvey Friedman (15 Oct 2008, 00:36:36 EDT)

HF: There seems to be a resurgence of interest in comparisons between
    classical and constructive (foundations of) mathematics.  This is
    a topic that has been discussed quite a lot previously on the FOM.
    I have been an active participant in prior discussions.

HF: There was a lot of basic information presented earlier, and I think
    that it would be best to restate some of this, so that the discussion
    can go forward with its benefit.

HF: In this message, I would like to focus on some important ways in which
    classical and constructive foundations are alike or closely related.

HF: For many formal systems for fragments of classical mathematics, T,
    there is a corresponding system T' obtained by merely restricting
    the classical logical axioms to constructive logical axioms - where
    the resulting system is readily acceptable as a formal system for
    a "corresponding" fragment of constructive mathematics. Of course,
    there may be good ways of restating the axioms in the classical system,
    which do NOT lead to any reasonable fragment of constructive mathematics
    in this way.

HF: The most well known example of this is PA = Peano Arithmetic.  Suppose
    we formalize PA in the most common way, with the axioms for successor,
    the defining axioms for addition and multiplication, and the axiom
    scheme of induction, with the usual axioms and rules of classical
    logic. Then HA = Heyting Arithmetic, is simply PA with the axioms
    and rules of classical logic weakened to the axioms and rules of
    constructive logic.

HF: Why do we consider HA as being a reasonable constructive system?
    A common answer is simply that a constructivist reads the axioms
    as "true" or "valid".

HF: An apparently closely related fact about HA is purely formal.
    HA possesses a great number of properties that are commonly
    associated with "constructivism".  The early pioneering work
    along these lines is, if I remember correctly, due to S.C. Kleene.
    Members of this list should be able to supply really good references
    for this work, better than I can.  PA possesses NONE of these properties.

HF: RESEARCH PROBLEM: Is there such a thing as a complete list of such
    formal properties? Is there a completeness theorem along these lines?
    I.e., can we state and prove that HA obeys all such (good from the
    constructive viewpoint) properties?

HF: On the other hand, we can formalize PA, equivalently, using the
    *least number principle scheme* instead of the induction scheme.
    If a property holds of n, then that property holds of a least n.
    Then, when we convert to constructive logic, we get a system PA#
    that is equivalent to PA - thus possessing none of these properties!

HF: For many of these T,T' pairs, some very interesting relationships
    obtain between the T and T'. Here are three important ones.

HF: 1.  It can be proved that T is consistent if and only if T'
        is consistent.

HF: 2.  Every A...A sentence, whose matrix has only bounded quantifiers,
        that is provable in T, is already provable in T'.

HF: 3.  More strongly, every A...AE...E sentence, whose matrix has only
        bounded quantifiers, that is provable in T, is already provable
        in T'.

HF: The issue arises as to just where these proofs are carried out - e.g.,
    constructively or classically. This is particularly critical in the
    case of 1. The situation is about as "convincing" as possible:

HF: Specifically, for each of these results, one can use weak quantifier
    free systems K of arithmetic, where constructive and classical amount
    to the same. E.g., for 1, there is a primitive operation in K which,
    provably in K, converts any inconsistency in T to a corresponding
    inconsistency in T'.

HF: Results like 1 point in the direction of there being no difference
    between the "safety" of classical and constructive mathematics.

HF: Results like 2,3 point in the direction of there being no difference
    between the "applicability" of classical and constructive mathematics,
    in many contexts.

HF: CAUTION:  For AEA sentences, PA and HA differ. There are some
    celebrated A...AE...EA...A theorems of PA which are not known
    to be provable in HA. Some examples were discussed previously
    on the FOM.

HF: RESEARCH PROBLEM: Determine, in some readily intelligible terms
    (perhaps classical), necessary and sufficient conditions for
    a sentence of a given form to be provable in HA and PA.  Matters
    get delicate when there are several quantifiers and arrows (-->)
    present.

HF: I will continue with this if sufficient responses are generated.

I, too, find myself returning to questions about classical v. constructive logic
lately, partly in connection with Peirce's Law, the Propositions As Types (PAT)
analogy, the question of a PAT analogy for classical propositional calculus,
and the eternal project of integrating functional, relational, and logical
styles of programming as much as possible.

I am still in the phase of chasing down links between the various questions
and I don't have any news or conclusions to offer, but my web searches keep
bringing me back to this old discussion on the FOM list:

http://www.cs.nyu.edu/pipermail/fom/1998-February/thread.html#1160

I find one comment by Vaughan Pratt to be especially e-&/or-pro-vocative:

VP: It has been my impression from having dealt with a lot of lawyers over the
    last twenty years that the logic of the legal profession is rarely Boolean,
    with a few isolated exceptions such as jury verdicts which permit only
    guilty or not guilty, no middle verdict allowed.  Often legal logic
    is not even intuitionistic, with conjunction failing commutativity
    and sometimes even idempotence.  But that aside, excluded middle
    and double negation are the exception rather than the rule.

VP: Lawyers aren't alone in this.  The permitted rules of reasoning
    that go along with whichever scientific method is currently in
    vogue seem to have the same non-Boolean character in general.

VP: The very *thought* of a lawyer or scientist appealing to Peirce's law,
    ((P->Q)->P)->P, to prove a point boggles the mind.  And imagine them
    trying to defend their use of that law by actually proving it:  the
    audience would simply assume this was one of those bits of logical
    sleight-of-hand where the wool is pulled over one's eyes by some
    sophistry that goes against common sense.

Anyway, to make a long story elliptic,
here is one of my current write-ups on
Peirce's Law that led me back into this
old briar patch:

http://www.mywikibiz.com/Peirce's_law

More to say on this later, but I just wanted to get
a good chunk of the background set out in one place.

Logical Graph Sandbox : Very Rough Sand Reckoning

More thoughts on Peirce's law

1-way version: \(((a \Rightarrow b) \Rightarrow a) \Rightarrow a\)

        a o--o b
          |
          o--o a
          |
          o--o a
          |
          @         =         @

2-way version: \(((a \Rightarrow b) \Rightarrow a) \Leftrightarrow a\)

        a o--o b
          |
          o--o a
          |                   a
          @         =         @
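
Both versions can be confirmed by a four-row truth table. Here is a minimal Python sketch, using an ordinary material `implies` helper of my own:

```python
def implies(x, y):
    """Material implication: x => y."""
    return (not x) or y

for a in (False, True):
    for b in (False, True):
        # 1-way version: ((a => b) => a) => a is a tautology.
        assert implies(implies(implies(a, b), a), a)
        # 2-way version: ((a => b) => a) has the same value as a.
        assert implies(implies(a, b), a) == a

print("Peirce's law checks out on all four rows")
```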

Compare with:

        a o b
          |
          o--o a
          |                   a
          @         =         @

That is:

       ab   a
        o   o
         \ /
          o
          |                   a
          @         =         @

This is the so-called absorption law, commonly written in the following ways:

\[(a \land b) \lor a \iff a\]

\[ab \lor a = a\]
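
The absorption law is likewise easy to confirm by exhaustion over \(\mathbb{B}^2;\) a one-loop Python sketch:

```python
# Absorption: (a and b) or a has the same value as a,
# for every a, b in the boolean domain.
for a in (False, True):
    for b in (False, True):
        assert ((a and b) or a) == a

print("(a and b) or a = a on all of B x B")
```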

Reports of my counter-intuitiveness are greatly exaggerated

We have the following theorem of classical propositional calculus:

\[(p \Rightarrow q) \lor (q \Rightarrow p)\]

The proposition may appear counter-intuitive on some ways of reading it, and it is usually excluded from the theorems of intuitionistic propositional calculi.

Read as a statement about the values of propositions — where the values \(p, q\!\) are drawn from the boolean domain \(\mathbb{B} = \{0, 1 \}\) — and written as an order law, its sense may become more sensible:

\[(p \le q) \lor (q \le p)\]

Here it is in logical graphs:

      q o   o p
        |   |
      p o   o q
        |   |
        o   o
         \ /
          o
          |
          @         =         @

Proof

      q o   o p
        |   |
      p o   o q
        |   |       q   p
        o   o       o   o       o   o         o           o
         \ /         \ /         \ /          |           |
          o        pq o        pq o        pq o           o
          |           |           |           |           |
          @     =     @     =     @     =     @     =     @     =     @
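
The graphical proof can be double-checked by reading the theorem as the totality of the order on \(\mathbb{B}\) and exhausting the four cases in Python:

```python
# Totality of the implication order on B = {0, 1}:
# for every p, q, either p <= q or q <= p.
for p in (0, 1):
    for q in (0, 1):
        assert (p <= q) or (q <= p)

print("(p <= q) or (q <= p) holds on all of B x B")
```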

My guess as to what's going on here — why the classical and intuitionistic reasoners appear to be talking past each other on this score — is that they are really talking about two different domains of mathematical objects. That is, the variables \(p, q\!\) range over \(\mathbb{B}\) in the classical reading while they range over a space of propositions, say, \(p, q : X \to \mathbb{B}\) in the intuitionistic reading of the formulas. Just my initial guess.

On the reading \(P, Q : X \to \mathbb{B},\) another guess at what's gone awry here might be the difference between the following two statements:

\((\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))\)

\((\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)\)

But the latter is not a theorem in anyone's philosophy, so there is really no disagreement here.
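
The contrast between the two quantified statements can be made concrete with a two-point universe. In the Python sketch below, `X`, `P`, `Q` are hypothetical data chosen to witness the difference: the pointwise disjunction holds while the disjunction of universals fails.

```python
# A two-point universe where P and Q are incomparable as predicates.
X = [0, 1]
P = {0: True, 1: False}
Q = {0: False, 1: True}

def implies(x, y):
    return (not x) or y

# (for all x)((Px => Qx) or (Qx => Px)) -- holds pointwise.
pointwise = all(implies(P[x], Q[x]) or implies(Q[x], P[x]) for x in X)

# (for all x)(Px => Qx) or (for all x)(Qx => Px) -- fails here.
uniform = (all(implies(P[x], Q[x]) for x in X)
           or all(implies(Q[x], P[x]) for x in X))

assert pointwise and not uniform
print("pointwise disjunction holds; disjunction of universals fails")
```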

Functional quantifiers

Tables

The auxiliary notations:

\[\alpha_i f = \Upsilon (f_i, f),\!\]

\[\beta_i f = \Upsilon (f, f_i),\!\]

define two series of measures:

\[\alpha_i, \beta_i : (\mathbb{B}^2 \to \mathbb{B}) \to \mathbb{B},\]

incidentally providing compact names for the column headings of the next two Tables.
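
The Table entries can be recomputed mechanically. In the Python sketch below, each \(f_i\) is coded by the four bits of \(i\) over the cells \((p, q) = (1,1), (1,0), (0,1), (0,0),\) matching the \(p{:}~1100\) and \(q{:}~1010\) headers, and \(\Upsilon\) is taken as validity (truth on every cell); the helper names are my own, not established notation.

```python
# Cells of B^2, listed to match the column convention p: 1100, q: 1010.
CELLS = [(1, 1), (1, 0), (0, 1), (0, 0)]

def f(i):
    """The boolean function f_i : B^2 -> B whose truth table is i in binary."""
    bits = [(i >> k) & 1 for k in (3, 2, 1, 0)]
    return dict(zip(CELLS, bits))

def upsilon(g):
    """Upsilon: 1 if g is true on every cell of B^2, else 0."""
    return int(all(g[c] for c in CELLS))

def alpha(i, g):
    """alpha_i g = Upsilon(f_i => g)."""
    fi = f(i)
    return upsilon({c: int((not fi[c]) or g[c]) for c in CELLS})

def beta(i, g):
    """beta_i g = Upsilon(g => f_i)."""
    fi = f(i)
    return upsilon({c: int((not g[c]) or fi[c]) for c in CELLS})

# For example, the row of Table 1 for f_14 = ((p)(q)), printed
# in the Table's column order alpha_15, ..., alpha_0.
print([alpha(i, f(14)) for i in range(15, -1, -1)])
```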

Table 1. Qualifiers of Implication Ordering:  \(\alpha_i f = \Upsilon (f_i \Rightarrow f)\)
\(p:\)
\(q:\)
1100
1010
\(f\!\) \(\alpha_{15}\) \(\alpha_{14}\) \(\alpha_{13}\) \(\alpha_{12}\) \(\alpha_{11}\) \(\alpha_{10}\) \(\alpha_9\) \(\alpha_8\) \(\alpha_7\) \(\alpha_6\) \(\alpha_5\) \(\alpha_4\) \(\alpha_3\) \(\alpha_2\) \(\alpha_1\) \(\alpha_0\)
\(f_0\) 0000 \((~)\)                               1
\(f_1\) 0001 \((p)(q)\!\)                             1 1
\(f_2\) 0010 \((p) q\!\)                           1   1
\(f_3\) 0011 \((p)\!\)                         1 1 1 1
\(f_4\) 0100 \(p (q)\!\)                       1       1
\(f_5\) 0101 \((q)\!\)                     1 1     1 1
\(f_6\) 0110 \((p, q)\!\)                   1   1   1   1
\(f_7\) 0111 \((p q)\!\)                 1 1 1 1 1 1 1 1
\(f_8\) 1000 \(p q\!\)               1               1
\(f_9\) 1001 \(((p, q))\!\)             1 1             1 1
\(f_{10}\) 1010 \(q\!\)           1   1           1   1
\(f_{11}\) 1011 \((p (q))\!\)         1 1 1 1         1 1 1 1
\(f_{12}\) 1100 \(p\!\)       1       1       1       1
\(f_{13}\) 1101 \(((p) q)\!\)     1 1     1 1     1 1     1 1
\(f_{14}\) 1110 \(((p)(q))\!\)   1   1   1   1   1   1   1   1
\(f_{15}\) 1111 \(((~))\) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1


Table 2. Qualifiers of Implication Ordering:  \(\beta_i f = \Upsilon (f \Rightarrow f_i)\)
\(p:\)
\(q:\)
1100
1010
\(f\!\) \(\beta_0\) \(\beta_1\) \(\beta_2\) \(\beta_3\) \(\beta_4\) \(\beta_5\) \(\beta_6\) \(\beta_7\) \(\beta_8\) \(\beta_9\) \(\beta_{10}\) \(\beta_{11}\) \(\beta_{12}\) \(\beta_{13}\) \(\beta_{14}\) \(\beta_{15}\)
\(f_0\) 0000 \((~)\) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
\(f_1\) 0001 \((p)(q)\!\)   1   1   1   1   1   1   1   1
\(f_2\) 0010 \((p) q\!\)     1 1     1 1     1 1     1 1
\(f_3\) 0011 \((p)\!\)       1       1       1       1
\(f_4\) 0100 \(p (q)\!\)         1 1 1 1         1 1 1 1
\(f_5\) 0101 \((q)\!\)           1   1           1   1
\(f_6\) 0110 \((p, q)\!\)             1 1             1 1
\(f_7\) 0111 \((p q)\!\)               1               1
\(f_8\) 1000 \(p q\!\)                 1 1 1 1 1 1 1 1
\(f_9\) 1001 \(((p, q))\!\)                   1   1   1   1
\(f_{10}\) 1010 \(q\!\)                     1 1     1 1
\(f_{11}\) 1011 \((p (q))\!\)                       1       1
\(f_{12}\) 1100 \(p\!\)                         1 1 1 1
\(f_{13}\) 1101 \(((p) q)\!\)                           1   1
\(f_{14}\) 1110 \(((p)(q))\!\)                             1 1
\(f_{15}\) 1111 \(((~))\!\)                               1


Table 3. Simple Qualifiers of Propositions (n = 2)
\(p:\)
\(q:\)
1100
1010
\(f\!\) \((\ell_{11})\)
\(\text{No } p \)
\(\text{is } q \)
\((\ell_{10})\)
\(\text{No } p \)
\(\text{is }(q)\)
\((\ell_{01})\)
\(\text{No }(p)\)
\(\text{is } q \)
\((\ell_{00})\)
\(\text{No }(p)\)
\(\text{is }(q)\)
\( \ell_{00} \)
\(\text{Some }(p)\)
\(\text{is }(q)\)
\( \ell_{01} \)
\(\text{Some }(p)\)
\(\text{is } q \)
\( \ell_{10} \)
\(\text{Some } p \)
\(\text{is }(q)\)
\( \ell_{11} \)
\(\text{Some } p \)
\(\text{is } q \)
\(f_0\) 0000 \((~)\) 1 1 1 1 0 0 0 0
\(f_1\) 0001 \((p)(q)\!\) 1 1 1 0 1 0 0 0
\(f_2\) 0010 \((p) q\!\) 1 1 0 1 0 1 0 0
\(f_3\) 0011 \((p)\!\) 1 1 0 0 1 1 0 0
\(f_4\) 0100 \(p (q)\!\) 1 0 1 1 0 0 1 0
\(f_5\) 0101 \((q)\!\) 1 0 1 0 1 0 1 0
\(f_6\) 0110 \((p, q)\!\) 1 0 0 1 0 1 1 0
\(f_7\) 0111 \((p q)\!\) 1 0 0 0 1 1 1 0
\(f_8\) 1000 \(p q\!\) 0 1 1 1 0 0 0 1
\(f_9\) 1001 \(((p, q))\!\) 0 1 1 0 1 0 0 1
\(f_{10}\) 1010 \(q\!\) 0 1 0 1 0 1 0 1
\(f_{11}\) 1011 \((p (q))\!\) 0 1 0 0 1 1 0 1
\(f_{12}\) 1100 \(p\!\) 0 0 1 1 0 0 1 1
\(f_{13}\) 1101 \(((p) q)\!\) 0 0 1 0 1 0 1 1
\(f_{14}\) 1110 \(((p)(q))\!\) 0 0 0 1 0 1 1 1
\(f_{15}\) 1111 \(((~))\) 0 0 0 0 1 1 1 1


Table 4. Relation of Quantifiers to Higher Order Propositions
\(\text{Mnemonic}\) \(\text{Category}\) \(\text{Classical Form}\) \(\text{Alternate Form}\) \(\text{Symmetric Form}\) \(\text{Operator}\)
\(\text{E}\!\)
\(\text{Exclusive}\)
\(\text{Universal}\)
\(\text{Negative}\)
\(\text{All}\ p\ \text{is}\ (q)\)   \(\text{No}\ p\ \text{is}\ q \) \((\ell_{11})\)
\(\text{A}\!\)
\(\text{Absolute}\)
\(\text{Universal}\)
\(\text{Affirmative}\)
\(\text{All}\ p\ \text{is}\ q \)   \(\text{No}\ p\ \text{is}\ (q)\) \((\ell_{10})\)
    \(\text{All}\ q\ \text{is}\ p \) \(\text{No}\ q\ \text{is}\ (p)\) \(\text{No}\ (p)\ \text{is}\ q \) \((\ell_{01})\)
    \(\text{All}\ (q)\ \text{is}\ p \) \(\text{No}\ (q)\ \text{is}\ (p)\) \(\text{No}\ (p)\ \text{is}\ (q)\) \((\ell_{00})\)
    \(\text{Some}\ (p)\ \text{is}\ (q)\)   \(\text{Some}\ (p)\ \text{is}\ (q)\) \(\ell_{00}\!\)
    \(\text{Some}\ (p)\ \text{is}\ q\)   \(\text{Some}\ (p)\ \text{is}\ q\) \(\ell_{01}\!\)
\(\text{O}\!\)
\(\text{Obtrusive}\)
\(\text{Particular}\)
\(\text{Negative}\)
\(\text{Some}\ p\ \text{is}\ (q)\)   \(\text{Some}\ p\ \text{is}\ (q)\) \(\ell_{10}\!\)
\(\text{I}\!\)
\(\text{Indefinite}\)
\(\text{Particular}\)
\(\text{Affirmative}\)
\(\text{Some}\ p\ \text{is}\ q\)   \(\text{Some}\ p\ \text{is}\ q\) \(\ell_{11}\!\)


Exercises

Express the following formulas in functional terms.

Exercise 1

\((\forall x \in X)(p(x) \Rightarrow q(x))\)

\(\prod_{x \in X} (p_x (q_x)) = 1\)

This is just the form \(\operatorname{All}\ p\ \operatorname{are}\ q,\) already covered here:

Application of Higher Order Propositions to Quantification Theory

Need to think a little more about the proposition \(p \Rightarrow q\) as a boolean function of type \(\mathbb{B}^2 \to \mathbb{B}\) and the corresponding higher order proposition of type \((\mathbb{B}^2 \to \mathbb{B}) \to \mathbb{B}.\)
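
For a numerical check of the product formula, the Python sketch below uses hypothetical data `X`, `p`, `q` of my own choosing: the product over \(X\) of the implication values is 1 exactly when every element satisfying \(p\) also satisfies \(q.\)

```python
from math import prod

X = range(5)
p = lambda x: x % 2 == 0      # hypothetical predicate: x is even
q = lambda x: x < 10          # hypothetical predicate: x is small

def implies(a, b):
    return (not a) or b

# "All p are q": the product of implication values over X is 1.
assert prod(int(implies(p(x), q(x))) for x in X) == 1

# A failing case: shrink q so that p(2) holds but q2(2) fails.
q2 = lambda x: x < 2
assert prod(int(implies(p(x), q2(x))) for x in X) == 0
```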

Exercise 2

\((\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))\)

Exercise 3

\((\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)\)