Directory:Jon Awbrey/Papers/Syntactic Transformations
1.3.12. Syntactic Transformations
We have been examining several distinct but closely related notions of indication. To discuss the import of these ideas in greater depth, it serves to establish a number of logical relations and set-theoretic identities that can be found to hold among their roughly parallel arrays of conceptions and constructions. Facilitating this task requires in turn a number of auxiliary concepts and notations. The notions of indication in question are expressed in a variety of different notations, enumerated as follows:
- The functional language of propositions
- The logical language of sentences
- The geometric language of sets
Thus, one way to explain the relationships that exist among several concepts of indication is to describe the translations that must consequently hold between the associated families of notation.
Syntactic Transformation Rules
A good way to summarize these translations and to organize their use in practice is by means of the syntactic transformation rules (STRs) that partially formalize them. A rudimentary example of an STR is readily mined from the raw materials that are already available in this area of discussion. To begin, let the definition of an indicator function be recorded in the following form:
\(\text{Definition 1}\!\)
\(\text{If}\!\) \(Q \subseteq X,\)
\(\text{then}\!\) \(\upharpoonleft Q \upharpoonright ~:~ X \to \underline\mathbb{B}\)
\(\text{such that:}\!\)
\(\text{D1a.}\!\) \(\upharpoonleft Q \upharpoonright (x) ~\Leftrightarrow~ x \in Q, ~\text{for all}~ x \in X.\)
In practice, a definition like this is commonly used to substitute one logically equivalent expression or sentence for another in a context where the conditions of using the definition this way are satisfied and where the change is perceived to advance a proof. This employment of a definition can be expressed in the form of an STR that allows one to exchange two expressions of logically equivalent forms for one another in every context where their logical values are the only consideration. To be specific, the logical value of an expression is the value in the boolean domain \(\underline\mathbb{B} = \{ \underline{0}, \underline{1} \} = \{ \operatorname{false}, \operatorname{true} \}\) that the expression stands for in its context or represents to its interpreter.
In the case of Definition 1, the corresponding STR permits one to exchange a sentence of the form \(x \in Q\) with an expression of the form \(\upharpoonleft Q \upharpoonright (x)\) in any context that satisfies the conditions of its use, namely, the conditions of the definition that lead up to the stated equivalence. The relevant STR is recorded in Rule 1. By way of convention, I list the items that fall under a rule roughly in order of their ascending conceptual subtlety or their increasing syntactic complexity, without regard to their normal or typical orders of exchange, since this can vary widely from case to case.
\(\text{Rule 1}\!\)
\(\text{If}\!\) \(Q \subseteq X\)
\(\text{then}\!\) \(\upharpoonleft Q \upharpoonright ~:~ X \to \underline\mathbb{B}\)
\(\text{and if}\!\) \(x \in X\)
\(\text{then}\!\) \(\text{the following are equivalent:}\!\)
\(\text{R1a.}\!\) \(x \in Q\)
\(\text{R1b.}\!\) \(\upharpoonleft Q \upharpoonright (x)\)
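As a concrete, if informal, reading of Rule 1, the following Python sketch models \(\underline\mathbb{B}\) by the integers 0 and 1 and checks that the sentence \(x \in Q\) and the expression \(\upharpoonleft Q \upharpoonright (x)\) agree in logical value at every point of the universe. The universe \(X,\) the subset \(Q,\) and the helper name `indicator` are assumptions of the example, not notations taken from the text.

```python
# A minimal sketch of Rule 1, with B modeled by the integers {0, 1}.
X = set(range(10))            # a hypothetical universe of discourse
Q = {2, 3, 5, 7}              # a hypothetical subset Q of X

def indicator(Q):
    """Return the indicator function of Q, a map from X into {0, 1}."""
    return lambda x: 1 if x in Q else 0

f_Q = indicator(Q)            # plays the role of the indicator of Q

# R1a (x in Q) and R1b (the indicator applied to x, read as a logical value)
# agree at every point of the universe X.
assert all((x in Q) == bool(f_Q(x)) for x in X)
```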
Conversely, any rule of this sort, properly qualified by the conditions under which it applies, can be turned back into a summary statement of the logical equivalence that is involved in its application. This mode of conversion between a static principle and a transformational rule, that is, between a statement of equivalence and an equivalence of statements, is so automatic that it is usually not necessary to make a separate note of the "horizontal" versus the "vertical" versions.
As another example of an STR, consider the following logical equivalence, which holds for any \(Q \subseteq X\) and for all \(x \in X.\)
\(\upharpoonleft Q \upharpoonright (x) ~\Leftrightarrow~ \upharpoonleft Q \upharpoonright (x) = \underline{1}.\)
In practice, this logical equivalence is used to exchange an expression of the form \(\upharpoonleft Q \upharpoonright (x)\) with a sentence of the form \(\upharpoonleft Q \upharpoonright (x) = \underline{1}\) in any context where one has a relatively fixed \(Q \subseteq X\) in mind and where one is conceiving \(x \in X\) to vary over its whole domain, namely, the universe \(X.\!\) This leads to the STR that is given in Rule 2.
\(\text{Rule 2}\!\)
\(\text{If}\!\) \(f : X \to \underline\mathbb{B}\)
\(\text{and}\!\) \(x \in X\)
\(\text{then}\!\) \(\text{the following are equivalent:}\!\)
\(\text{R2a.}\!\) \(f(x)\!\)
\(\text{R2b.}\!\) \(f(x) = \underline{1}\)
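In the same spirit, a minimal check of Rule 2: with \(\underline\mathbb{B}\) modeled by 0 and 1, asserting \(f(x)\!\) as a logical value comes to the same thing as asserting the equation \(f(x) = \underline{1}.\) The particular proposition \(f\!\) below is only an illustrative assumption.

```python
# A minimal sketch of Rule 2: for f : X -> B, asserting f(x) as a
# logical value comes to the same thing as asserting the equation f(x) = 1.
X = range(-3, 4)                    # a hypothetical universe
f = lambda x: 1 if x > 0 else 0     # a hypothetical proposition on X

assert all(bool(f(x)) == (f(x) == 1) for x in X)
```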
Rules like these can be chained together to establish extended rules, just so long as their antecedent conditions are compatible. For example, Rules 1 and 2 combine to give the equivalents that are listed in Rule 3. This follows from a recognition that the function \(\upharpoonleft Q \upharpoonright ~:~ X \to \underline\mathbb{B}\) that is introduced in Rule 1 is an instance of the function \(f : X \to \underline\mathbb{B}\) that is mentioned in Rule 2. By the time one arrives in the "consequence box" of either Rule, then, one has in mind a comparatively fixed \(Q \subseteq X,\) a proposition \(f\!\) or \(\upharpoonleft Q \upharpoonright\) about things in \(X,\!\) and a variable argument \(x \in X.\)
\(\text{Rule 3}\!\)
\(\text{If}\!\) \(Q \subseteq X\)
\(\text{and}\!\) \(x \in X\)
\(\text{then}\!\) \(\text{the following are equivalent:}\!\)
\(\text{R3a.}\!\) \(x \in Q\)
\(\text{R3b.}\!\) \(\upharpoonleft Q \upharpoonright (x)\)
\(\text{R3c.}\!\) \(\upharpoonleft Q \upharpoonright (x) = \underline{1}\)
A large stock of rules can be derived in this way, by chaining together segments that are selected from a stock of previous rules, with perhaps the whole process of derivation leading back to an axial body or a core stock of rules, with all of them recurring to and relying on an axiomatic basis. In order to keep track of their derivations, since their pedigrees help one remember the reasons for trusting their use in the first place, derived rules can be annotated by citing the rules from which they are derived.
In the present discussion, I am using a particular style of annotation for rule derivations, one that is called proof by grammatical paradigm or proof by syntactic analogy. The annotations in the right hand margin of the Rule Box interweave the numerators and the denominators of the paradigm being employed, in other words, the alternating terms of comparison in a sequence of analogies. Taking the syntactic transformations marked in the Rule Box one at a time, each step is licensed by its formal analogy to a previously established rule.
For example, the annotation \(X_1 : A_1 :: X_2 : A_2\!\) may be read to say that \(X_1\!\) is to \(A_1\!\) as \(X_2\!\) is to \(A_2,\!\) where the step from \(A_1\!\) to \(A_2\!\) is permitted by a previously accepted rule.
This can be illustrated by considering the derivation of Rule 3 in the augmented form that follows:
\(\begin{array}{lcclc} \text{R3a.} & x \in Q & \text{is to} & \text{R1a.} & x \in Q \\[6pt] & & \text{as} & & \\[6pt] \text{R3b.} & \upharpoonleft Q \upharpoonright (x) & \text{is to} & \text{R1b.} & \upharpoonleft Q \upharpoonright (x) \\[6pt] & & \text{and} & & \\[6pt] \text{R3b.} & \upharpoonleft Q \upharpoonright (x) & \text{is to} & \text{R2a.} & f(x) \\[6pt] & & \text{as} & & \\[6pt] \text{R3c.} & \upharpoonleft Q \upharpoonright (x) = \underline{1} & \text{is to} & \text{R2b.} & f(x) = \underline{1} \end{array}\)
Notice how the sequence of analogies pivots on the term \(\text{R3b},\!\) viewing it first under the aegis of \(\text{R1b},\!\) as the second term of the first analogy, and then turning to view it again under the guise of \(\text{R2a},\!\) as the first term of the second analogy.
By way of convention, I frequently refer to rules that are tailored to a particular application, case, or subject, or that are adapted to a particular goal, object, or purpose, as Facts.
Besides linking rules together into extended sequences of equivalents, there is one other way that is commonly used to get new rules from old. Novel starting points for rules can be obtained by extracting pairs of equivalent expressions from a sequence that falls under an established rule and then stating their equality in the appropriate form of equation.
For example, extracting the expressions \(\text{R3a}\!\) and \(\text{R3c}\!\) that are given as equivalents in Rule 3 and explicitly stating their equivalence produces the equation recorded in Corollary 1.
\(\text{Corollary 1}\!\)
\(\text{If}\!\) \(Q \subseteq X\)
\(\text{and}\!\) \(x \in X\)
\(\text{then}\!\) \(\text{the following statement is true:}\!\)
\(\text{C1a.}\!\) \(x \in Q ~\Leftrightarrow~ \upharpoonleft Q \upharpoonright (x) = \underline{1}\) \(\quad (\text{R3a} \Leftrightarrow \text{R3c})\)
There are a number of issues that arise especially in establishing the proper use of STRs and that are appropriate to discuss at this juncture. The notation \(\downharpoonleft s \downharpoonright\) is intended to represent the proposition denoted by the sentence \(s.\!\) There is only one problem with the use of this form. There is, in general, no such thing as "the" proposition denoted by \(s.\!\) Generally speaking, if a sentence is taken out of context and considered across a variety of different contexts, there is no unique proposition that it can be said to denote. But one is seldom ever speaking at the maximum level of generality, or even found to be thinking of it, and so this notation is usually meaningful and readily understandable whenever it is read in the proper frame of mind. Still, once the issue is raised, the question of how these meanings and understandings are possible has to be addressed, especially if one desires to express the regulations of their syntax in a partially computational form. This requires a closer examination of the very notion of context, and it involves engaging in enough reflection on the contextual evaluation of sentences that the relevant principles of its successful operation can be discerned and rationalized in explicit terms.
A sentence that is written in a context where it represents a value of \(\underline{1}\) or \(\underline{0}\) as a function of things in the universe \(X,\!\) where it stands for a value of \(\operatorname{truth}\) or \(\operatorname{falsehood},\) depending on how the signs that constitute its proper syntactic arguments are interpreted as denoting objects in \(X,\!\) in other words, where it is bound to lead its interpreter to view its own truth or falsity as determined by a choice of objects in \(X,\!\) is a sentence that might as well be written in the context \(\downharpoonleft \ldots \downharpoonright,\) whether this frame is explicitly marked around it or not.
More often than not, the context of interpretation fixes the denotations of most of the signs that make up a sentence, and so it is safe to adopt the convention that only those signs whose objects are not already fixed are free to vary in their denotations. Thus, only the signs that remain in default of prior specification are subject to treatment as variables, with a decree of functional abstraction hanging over all of their heads.
\(\downharpoonleft x \in Q \downharpoonright ~=~ \lambda (x, \in, Q).(x \in Q).\)
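Read computationally, the down-spar abstraction is just a lambda expression in the ordinary programming sense: under the convention that the context fixes \(Q\!\) while \(x\!\) is left to vary, only \(x\!\) becomes an argument. A rough sketch, with all of the particular names assumed for illustration:

```python
# With Q fixed by the context and x left to vary, the proposition denoted
# by the sentence "x in Q" is the one-argument abstraction over x.
Q = {2, 3, 5, 7}                      # hypothetical, fixed by the context
proposition = lambda x: x in Q        # roughly, the down-spar of "x in Q"

# If nothing is fixed in advance, the abstraction ranges over all three
# signs, in the spirit of lambda (x, membership, Q).(x in Q).
full_abstraction = lambda x, rel, Q: rel(x, Q)
assert full_abstraction(3, lambda a, b: a in b, Q) == proposition(3) == True
```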
Going back to Rule 1, we see that it lists a pair of concrete sentences and authorizes exchanges in either direction between the syntactic structures that have these two forms. But a sentence is any sign that denotes a proposition, and so there are any number of less obvious sentences that can be added to this list, extending the number of items that are licensed to be exchanged. For example, a larger collection of equivalent sentences is recorded in Rule 4.
\(\text{Rule 4}\!\)
\(\text{If}\!\) \(Q \subseteq X ~\text{is fixed}\)
\(\text{and}\!\) \(x \in X ~\text{is varied}\)
\(\text{then}\!\) \(\text{the following are equivalent:}\!\)
\(\text{R4a.}\!\) \(x \in Q\)
\(\text{R4b.}\!\) \(\downharpoonleft x \in Q \downharpoonright\)
\(\text{R4c.}\!\) \(\downharpoonleft x \in Q \downharpoonright (x)\)
\(\text{R4d.}\!\) \(\upharpoonleft Q \upharpoonright (x)\)
\(\text{R4e.}\!\) \(\upharpoonleft Q \upharpoonright (x) = \underline{1}\)
The first and last items on this list, namely, the sentence \(\text{R4a}\!\) stating \(x \in Q\) and the sentence \(\text{R4e}\!\) stating \(\upharpoonleft Q \upharpoonright (x) = \underline{1},\) are just the pair of sentences from Rule 3 whose equivalence for all \(x \in X\) is usually taken to define the idea of an indicator function \(\upharpoonleft Q \upharpoonright ~:~ X \to \underline\mathbb{B}.\) At first sight, the inclusion of the other items appears to involve a category confusion, in other words, to mix the modes of interpretation and to create an array of mismatches between their ostensible types and the ruling type of a sentence. On reflection, and taken in context, these problems are not as serious as they initially seem. For example, the expression \(^{\backprime\backprime} \downharpoonleft x \in Q \downharpoonright \, ^{\prime\prime}\) ostensibly denotes a proposition, but if it does, then it evidently can be recognized, by virtue of this very fact, to be a genuine sentence. As a general rule, if one can see it on the page, then it cannot be a proposition but can at most be a sign of one.
The use of the basic logical connectives can be expressed in the form of an STR as follows:
(Table not rendered in this revision.)
As a general rule, the application of an STR involves the recognition of an antecedent condition and the facilitation of a consequent condition. The antecedent condition is a state whose initial expression presents a match, in a formal sense, to one of the sentences that are listed in the STR, and the consequent condition is achieved by taking its suggestions seriously, in other words, by following its sequence of equivalents and implicants to some other link in its chain.
Generally speaking, the application of a rule involves the recognition of an antecedent condition as a case that falls under a clause of the rule. This means that the antecedent condition is able to be captured in the form, conceived in the guise, expressed in the manner, grasped in the pattern, or recognized in the shape of one of the sentences in a list of equivalents or a chain of implicants.
A condition is amenable to a rule if any of its conceivable expressions formally match any of the expressions that are enumerated by the rule. Applying the rule then requires the relegation of the other expressions to the production of a result. Thus, there is the choice of an initial expression that needs to be checked on input for whether it fits the antecedent condition and there are several types of output that are generated as a consequence, only a few of which are usually needed at any given time.
Editing Note. Need a transition here. Give a brief description of the Tables of Translation Rules that have now been moved to the Appendices, and then move on to the rest of the Definitions and Proof Schemata.
A rule that allows one to turn equivalent sentences into identical propositions:
\((S \Leftrightarrow T) \quad \Leftrightarrow \quad (\downharpoonleft S \downharpoonright = \downharpoonleft T \downharpoonright)\)

Compare:

\(\downharpoonleft v = w \downharpoonright (v, w)\)

\(\downharpoonleft v(u) = w(u) \downharpoonright (u)\)
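The two displays differ only in which signs are left unfixed: the first abstracts over the values \(v\!\) and \(w\!\) themselves, while the second holds the functions \(v\!\) and \(w\!\) fixed and abstracts over their common argument \(u.\!\) A small sketch, with the particular functions chosen purely for illustration:

```python
# First form: abstract over the values v and w themselves.
eq_vw = lambda v, w: v == w            # down-spar of "v = w", applied to (v, w)

# Second form: hold the functions v and w fixed, abstract over the argument u.
v = lambda u: u * u                    # a hypothetical function v
w = lambda u: abs(u) ** 2              # a hypothetical function w
eq_u = lambda u: v(u) == w(u)          # down-spar of "v(u) = w(u)", applied to u

assert eq_vw(4, 4) and all(eq_u(u) for u in range(-5, 6))
```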
Editing Note. The last draft I can find has 5 variants for the next box, "Value Rule 1", and I can't tell right off which I meant to use. Until I can get back to this, here's a link to the collection of variants:
(Value Rule 1 and its variant forms: tables not rendered in this revision.)
Given an indexed set of sentences, \(s_j\!\) for \(j \in J,\) it is possible to consider the logical conjunction of the corresponding propositions. Various notations for this concept can be useful in various contexts, a sufficient sample of which is recorded in Definition 6.
(Definition 6 and the definition and rule tables that immediately follow it are not rendered in this revision.)
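One concrete rendering of the conjunction of an indexed family of propositions, in the sense just described, evaluates all of the propositions at a common argument and conjoins the results. The index set \(J\!\) and the indexed sentences below are assumptions of the example:

```python
# Conjunction of an indexed family of propositions s_j, j in J,
# evaluated at a common argument x: Conj_{j in J} s_j(x).
J = range(3)                               # a hypothetical index set
s = {0: lambda x: x > 0,                   # hypothetical sentences, read here
     1: lambda x: x % 2 == 0,              # as propositions on a common domain
     2: lambda x: x < 100}

conj = lambda x: all(s[j](x) for j in J)   # the indexed conjunction
assert conj(4) and not conj(3)
```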
Editing Note. Check earlier and later drafts to see where \(\text{P1a, P1b, P1c}~\) came from. Are these just placeholders for the Value or Evaluation Rules?
(Rule table not rendered in this revision.)
For instance, the observation that expresses the equality of sets in terms of their indicator functions can be formalized according to the pattern in Rule 9, namely, at lines R9a, R9b, and R9c, and these components of Rule 9 can be cited in future uses by their indices in this list. Using Rule 7, annotated as R7, to adduce a few properties of indicator functions to the account, it is possible to extend Rule 9 by another few steps, referenced as R9d, R9e, R9f, and R9g.
(Tables for the extended Rule 9 and the subsequent Rules, including Rule 11, are not rendered in this revision.)
An application of Rule 11 involves the recognition of an antecedent condition as a case under the Rule, that is, as a condition that matches one of the sentences in the Rule's chain of equivalents, and it requires the relegation of the other expressions to the production of a result. Thus, there is the choice of an initial expression that has to be checked on input for whether it fits the antecedent condition, and there is the choice of three types of output that are generated as a consequence, only one of which is generally needed at any given time. More often than not, though, a rule is applied in only a few of its possible ways. The usual antecedent and the usual consequents for Rule 11 can be distinguished in form and specialized in practice as follows:
\(\operatorname{R11a}\) marks the usual starting place for an application of the Rule, that is, the standard form of antecedent condition that is likely to lead to an invocation of the Rule.

\(\operatorname{R11b}\) records the trivial consequence of applying the up-spar operator \(\upharpoonleft \cdots \upharpoonright\) to both sides of the initial equation.

\(\operatorname{R11c}\) gives a version of the indicator function with \(\upharpoonleft X \upharpoonright ~\subseteq~ X \times \underline\mathbb{B},\) called the extensional or relational form of the indicator function.

\(\operatorname{R11d}\) gives a version of the indicator function with \(\upharpoonleft X \upharpoonright ~:~ X \to \underline\mathbb{B},\) called its functional form. The relation between the extensional and the functional forms is illustrated in the sketch below.
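As a concrete illustration of the contrast drawn in R11c and R11d, the following sketch presents one and the same indicator in both its relational form, as a subset of a product with \(\underline\mathbb{B},\) and its functional form, as a map into \(\underline\mathbb{B}.\) The universe \(U,\) the subset \(A,\) and the variable names are assumptions of the example, with \(\underline\mathbb{B}\) modeled by the Python integers 0 and 1.

```python
# Two equivalent presentations of an indicator function, over a hypothetical
# universe U and subset A, with B modeled by the integers {0, 1}.
U = set(range(6))
A = {1, 2, 4}

functional_form = lambda x: 1 if x in A else 0            # a map from U into {0, 1}
relational_form = {(x, functional_form(x)) for x in U}    # the same data as a subset of U x B

# Either form determines the other.
recovered = dict(relational_form)
assert all(functional_form(x) == recovered[x] for x in U)
```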
Applying Rule 9, Rule 8, and the Logical Rules to the special case where \(s \Leftrightarrow (X = Y),\) one obtains the following general Fact:
(Fact 1: table not rendered in this revision.)
Derived Equivalence Relations
One seeks a method of general application for approaching the individual sign relation, a way to select an aspect of its form, to analyze it with regard to its intrinsic structure, and to classify it in comparison with other sign relations. With respect to a particular sign relation, one approach that presents itself is to examine the relation between signs and interpretants that is given directly by its connotative component and to compare it with the various forms of derived, indirect, mediate, or peripheral relationships that can be found to exist among signs and interpretants by way of secondary considerations or subsequent studies. Of especial interest are the relationships among signs and interpretants that can be obtained by working through the collections of objects that they commonly or severally denote.
A classic way of showing that two sets are equal is to show that every element of the first belongs to the second and that every element of the second belongs to the first. The problem with this strategy is that one can exhaust a considerable amount of time trying to prove that two sets are equal before it occurs to one to look for a counterexample, that is, an element of the first that does not belong to the second or an element of the second that does not belong to the first, in cases where that is precisely what one ought to be seeking. It would be nice if there were a more balanced, impartial, or neutral way to go about this task, one that did not require such an undue commitment to either side, a technique that helps to pinpoint the counterexamples when they exist, and a method that keeps in mind the original relation of proving that and showing that to probing, testing, and seeing whether.
A different way of seeing that two sets are equal, or of seeing whether two sets are equal, is based on the following observation:
Two sets are equal as sets

\(\iff\)

The indicator functions of the two sets are equal as functions

\(\iff\)

The values of the two indicator functions are equal to each other on all domain elements.
It is important to notice the hidden quantifier, of a universal kind, that lurks in all three equivalent statements but is only revealed in the last.
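Over a finite universe the observation can be exercised directly, with the hidden universal quantifier made explicit as a check over every element of the domain. The universe and the two descriptions of the same subset are assumptions of the sketch.

```python
# Set equality tested by way of indicator functions over a finite universe X.
X = set(range(10))                     # a hypothetical universe
Q1 = {n for n in X if n % 2 == 0}      # the even numbers in X
Q2 = {0, 2, 4, 6, 8}                   # the same subset, described differently

ind = lambda Q: (lambda x: 1 if x in Q else 0)

# The hidden universal quantifier made explicit: the two indicator
# functions take equal values at every element of the domain.
pointwise_equal = all(ind(Q1)(x) == ind(Q2)(x) for x in X)
assert pointwise_equal == (Q1 == Q2) == True
```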
In making the next set of definitions and in using the corresponding terminology it is taken for granted that all of the references of signs are relative to a particular sign relation \(L \subseteq O \times S \times I\) that either remains to be specified or is already understood. Further, I continue to assume that \(S = I,\!\) in which case this set is called the syntactic domain of \(L.\!\)
In the following definitions, let \(L \subseteq O \times S \times I,\) let \(S = I,\!\) and let \(x, y \in S.\!\)
Recall the definition of \(\operatorname{Con} (L),\) the connotative component of a sign relation \(L,\!\) in the following form:
\(\operatorname{Con} (L) ~=~ L_{SI} ~=~ \{ (s, i) \in S \times I ~:~ (o, s, i) \in L ~\text{for some}~ o \in O \}.\)
Equivalent expressions for this concept are recorded in Definition 8.
(Definition 8: table not rendered in this revision.)
Editing Note. Need a discussion of converse relations here. Perhaps it would work to introduce the operators that Peirce used for the converse of a dyadic relative \(\ell,\) namely, \(K\ell ~=~ k\!\cdot\!\ell ~=~ \breve\ell.\)
The dyadic relation \(L_{IS}\!\) that is the converse of the connotative relation \(L_{SI}\!\) can be defined directly in the following fashion:
\(\overset{\smile}{\operatorname{Con}(L)} ~=~ L_{IS} ~=~ \{ (i, s) \in I \times S ~:~ (o, s, i) \in L ~\text{for some}~ o \in O \}.\)
A few of the many different expressions for this concept are recorded in Definition 9.
(Definition 9: table not rendered in this revision.)
Recall the definition of \(\operatorname{Den} (L),\) the denotative component of \(L,\!\) in the following form:
\(\operatorname{Den} (L) ~=~ L_{OS} ~=~ \{ (o, s) \in O \times S ~:~ (o, s, i) \in L ~\text{for some}~ i \in I \}.\)
Equivalent expressions for this concept are recorded in Definition 10.
(Definition 10: table not rendered in this revision.)
The dyadic relation \(L_{SO}\!\) that is the converse of the denotative relation \(L_{OS}\!\) can be defined directly in the following fashion:
\(\overset{\smile}{\operatorname{Den}(L)} ~=~ L_{SO} ~=~ \{ (s, o) \in S \times O ~:~ (o, s, i) \in L ~\text{for some}~ i \in I \}.\)
A few of the many different expressions for this concept are recorded in Definition 11.
(Definition 11: table not rendered in this revision.)
The denotation of \(x\!\) in \(L,\!\) written \(\operatorname{Den}(L, x),\) is defined as follows:
\(\operatorname{Den}(L, x) ~=~ \{ o \in O ~:~ (o, x) \in \operatorname{Den}(L) \}.\)

In other words:

\(\operatorname{Den}(L, x) ~=~ \{ o \in O ~:~ (o, x, i) \in L ~\text{for some}~ i \in I \}.\)
Equivalent expressions for this concept are recorded in Definition 12.
(Definition 12: table not rendered in this revision.)
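A small computational reading of \(\operatorname{Den}(L)\) and \(\operatorname{Den}(L, x)\) is given below, using a made-up sign relation whose objects, signs, and interpretants are invented solely for illustration and are not drawn from the text.

```python
# The denotative component of a triadic sign relation L, and the denotation
# of a single sign x, computed from a small invented example.
L = {("A", "a", "i"), ("A", "i", "a"),     # triples (object, sign, interpretant),
     ("B", "b", "u"), ("B", "u", "b")}     # invented solely for illustration

def Den(L):
    """Project L onto O x S: the pairs (o, s) with (o, s, i) in L for some i."""
    return {(o, s) for (o, s, i) in L}

def Den_at(L, x):
    """The denotation of x in L: the objects o with (o, x, i) in L for some i."""
    return {o for (o, s, i) in L if s == x}

assert Den(L) == {("A", "a"), ("A", "i"), ("B", "b"), ("B", "u")}
assert Den_at(L, "a") == {"A"} and Den_at(L, "u") == {"B"}
```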
Signs are equiferent if they refer to all and only the same objects, that is, if they have exactly the same denotations. In other language for the same relation, signs are said to be denotatively equivalent or referentially equivalent, but it is probably best to check whether the extension of this concept over the syntactic domain is really a genuine equivalence relation before jumping to the conclusions that are implied by these latter terms.
To define the equiference of signs in terms of their denotations, one says that \(x\!\) is equiferent to \(y\!\) under \(L,\!\) and writes \(x ~\overset{L}{=}~ y,\!\) to mean that \(\operatorname{Den}(L, x) = \operatorname{Den}(L, y).\) Taken in extension, this notion of a relation between signs induces an equiference relation on the syntactic domain.
For each sign relation \(L,\!\) this yields a binary relation \(\operatorname{Der}(L) \subseteq S \times I\) that is defined as follows:
\(\operatorname{Der}(L) ~=~ \operatorname{Der}^L ~=~ \{ (x, y) \in S \times I ~:~ \operatorname{Den}(L, x) = \operatorname{Den}(L, y) \}.\)
These definitions and notations are recorded in the following display.
(Display not rendered in this revision.)
The relation \(\operatorname{Der}(L)\) is defined and the notation \(x ~\overset{L}{=}~ y\) is meaningful in every situation where the corresponding denotation operator \(\operatorname{Den}(-,-)\) makes sense, but it remains to check whether this relation enjoys the properties of an equivalence relation.
- Reflexive property. Is it true that \(x ~\overset{L}{=}~ x\) for every \(x \in S = I\)? By definition, \(x ~\overset{L}{=}~ x\) if and only if \(\operatorname{Den}(L, x) = \operatorname{Den}(L, x).\) Thus, the reflexive property holds in any setting where the denotations \(\operatorname{Den}(L, x)\) are defined for all signs \(x\!\) in the syntactic domain of \(L.\!\)

- Symmetric property. Does \(x ~\overset{L}{=}~ y\) imply \(y ~\overset{L}{=}~ x\) for all \(x, y \in S\)? In effect, does \(\operatorname{Den}(L, x) = \operatorname{Den}(L, y)\) imply \(\operatorname{Den}(L, y) = \operatorname{Den}(L, x)\) for all signs \(x\!\) and \(y\!\) in the syntactic domain \(S\!\)? Yes, so long as the sets \(\operatorname{Den}(L, x)\) and \(\operatorname{Den}(L, y)\) are well-defined, a fact which is already being assumed.

- Transitive property. Does \(x ~\overset{L}{=}~ y\) and \(y ~\overset{L}{=}~ z\) imply \(x ~\overset{L}{=}~ z\) for all \(x, y, z \in S\)? To belabor the point, does \(\operatorname{Den}(L, x) = \operatorname{Den}(L, y)\) and \(\operatorname{Den}(L, y) = \operatorname{Den}(L, z)\) imply \(\operatorname{Den}(L, x) = \operatorname{Den}(L, z)\) for all \(x, y, z \in S\)? Yes, once again, under the stated conditions.
It should be clear at this point that any question about the equiference of signs reduces to a question about the equality of sets, specifically, the sets that are indexed by these signs. As a result, so long as these sets are well-defined, the issue of whether equiference relations induce equivalence relations on their syntactic domains is almost as trivial as it initially appears.
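The reduction to set equality can be checked mechanically. The following sketch builds \(\operatorname{Der}(L)\) from the denotations of a small, invented sign relation and verifies the three properties, each of which reduces to an equality of sets; all of the particular names are assumptions of the example.

```python
from itertools import product

# A small invented sign relation L, as a set of (object, sign, interpretant) triples.
L = {("A", "a", "i"), ("A", "i", "a"), ("B", "b", "u"), ("B", "u", "b")}
S = {s for (o, s, i) in L} | {i for (o, s, i) in L}     # syntactic domain, with S = I

den = lambda x: frozenset(o for (o, s, i) in L if s == x)
Der = {(x, y) for x, y in product(S, S) if den(x) == den(y)}

# Reflexivity, symmetry, and transitivity each reduce to an equality of sets.
assert all((x, x) in Der for x in S)
assert all((y, x) in Der for (x, y) in Der)
assert all((x, z) in Der for (x, y) in Der for (w, z) in Der if y == w)
```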
Taken in its set-theoretic extension, a relation of equiference induces a denotative equivalence relation (DER) on its syntactic domain \(S = I.\!\) This leads to the formation of denotative equivalence classes (DECs), denotative partitions (DEPs), and denotative equations (DEQs) on the syntactic domain. But what does it mean for signs to be equiferent?
Notice that this is not the same thing as being semiotically equivalent, in the sense of belonging to a single semiotic equivalence class (SEC), falling into the same part of a semiotic partition (SEP), or having a semiotic equation (SEQ) between them. It is only when very felicitous conditions obtain, establishing a concord between the denotative and the connotative components of a sign relation, that these two ideas coalesce.
In general, there is no necessity that the equiference of signs, that is, their denotational equivalence or their referential equivalence, induces the same equivalence relation on the syntactic domain as that defined by their semiotic equivalence, even though this state of accord seems like an especially desirable situation. This makes it necessary to find a distinctive nomenclature for these structures, for which I adopt the term denotative equivalence relations (DERs). In their train they bring the allied structures of denotative equivalence classes (DECs) and denotative partitions (DEPs), while the corresponding statements of denotative equations (DEQs) are expressible in the form \(x ~\overset{L}{=}~ y.\)
The uses of the equal sign for denoting equations or equivalences are recalled and extended in the following ways:
- If \(E\!\) is an arbitrary equivalence relation, then the equation \(x =_E y\!\) means that \((x, y) \in E.\)
- If \(L\!\) is a sign relation such that \(L_{SI}\!\) is a semiotic equivalence relation (SER) on \(S = I,\!\) then the semiotic equation \(x =_L y\!\) means that \((x, y) \in L_{SI}.\)
- If \(L\!\) is a sign relation such that \(F\!\) is its DER on \(S = I,\!\) then the denotative equation \(x ~\overset{L}{=}~ y\) means that \((x, y) \in F,\) in other words, that \(\operatorname{Den}(L, x) = \operatorname{Den}(L, y).\)
The use of square brackets for denoting equivalence classes is recalled and extended in the following ways:
- If \(E\!\) is an arbitrary equivalence relation, then \([x]_E\!\) is the equivalence class of \(x\!\) under \(E.\!\)
- If \(L\!\) is a sign relation such that \(\operatorname{Con}(L)\) is a SER on \(S = I,\!\) then \([x]_L\!\) is the SEC of \(x\!\) under \(\operatorname{Con}(L).\)
- If \(L\!\) is a sign relation such that \(\operatorname{Der}(L)\) is a DER on \(S = I,\!\) then \([x]^L\!\) is the DEC of \(x\!\) under \(\operatorname{Der}(L).\)
By applying the form of Fact 1 to the special case where \(X = \operatorname{Den}(L, x)\) and \(Y = \operatorname{Den}(L, y),\) one obtains the following facts.
(Tables of the resulting facts are not rendered in this revision.)
Digression on Derived Relations
A better understanding of derived equivalence relations (DERs) can be achieved by placing their constructions within a more general context and thus comparing the associated type of derivation operation, namely, the one that takes a triadic relation \(L\!\) into a dyadic relation \(\operatorname{Der}(L),\) with other types of operations on triadic relations. The proper setting would permit a comparative study of all their constructions from a basic set of projections and a full array of compositions on dyadic relations.
To that end, let the derivation \(\operatorname{Der}(L)\) be expressed in the following way:
\(\upharpoonleft \operatorname{Der}(L) \upharpoonright (x, y) \quad = \quad \underset{o \in O}{\operatorname{Conj}} ~\underline{((}~ \upharpoonleft L_{SO} \upharpoonright (x, o) ~,~ \upharpoonleft L_{OS} \upharpoonright (o, y) ~\underline{))}~.\)
From this may be abstracted a way of composing two dyadic relations that have a domain in common. For example, let \(P \subseteq X \times M\) and \(Q \subseteq M \times Y\) be dyadic relations that have the middle domain \(M\!\) in common. Then we may define a form of composition, notated \(P \circeq Q,\) where \(P \circeq Q ~\subseteq~ X \times Y\) is defined as follows:
\(\upharpoonleft P \circeq Q \upharpoonright (x, y) \quad = \quad \underset{m \in M}{\operatorname{Conj}} ~\underline{((}~ \upharpoonleft P \upharpoonright (x, m) ~,~ \upharpoonleft Q \upharpoonright (m, y) ~\underline{))}~.\)
Compare this with the usual form of composition, typically notated \(P \circ Q\) and defined as follows:
\(\upharpoonleft P \circ Q \upharpoonright (x, y) \quad = \quad \underset{m \in M}{\operatorname{Disj}} ~\upharpoonleft P \upharpoonright (x, m) ~\cdot~ \upharpoonleft Q \upharpoonright (m, y)~.\)
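The two forms of composition can be compared on small finite examples, with dyadic relations represented as sets of pairs. The \(\circeq\) form demands, for every middle element, that membership on the left agree with membership on the right, while the usual \(\circ\) form asks only for some witnessing middle element. The domains and relations below are assumptions of the sketch.

```python
# Two ways to compose dyadic relations P ⊆ X x M and Q ⊆ M x Y, given here
# as sets of pairs over small hypothetical domains.
X, M, Y = {1, 2}, {"p", "q"}, {"u", "v"}
P = {(1, "p"), (2, "p"), (2, "q")}
Q = {("p", "u"), ("q", "u"), ("q", "v")}

def compose_usual(P, Q):
    """P o Q: pairs (x, y) such that (x, m) in P and (m, y) in Q for SOME m."""
    return {(x, y) for (x, m1) in P for (m2, y) in Q if m1 == m2}

def compose_conj(P, Q, X, M, Y):
    """The conjunctive form: pairs (x, y) such that, for EVERY m in M,
    (x, m) in P holds exactly when (m, y) in Q holds."""
    return {(x, y) for x in X for y in Y
            if all(((x, m) in P) == ((m, y) in Q) for m in M)}

print(compose_usual(P, Q))           # {(1, 'u'), (2, 'u'), (2, 'v')}
print(compose_conj(P, Q, X, M, Y))   # {(2, 'u')}
```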
Appendices
Logical Translation Rule 1

(Table not rendered in this revision.)

Geometric Translation Rule 1

(Table not rendered in this revision.)

Logical Translation Rule 2

(Table not rendered in this revision.)

Geometric Translation Rule 2

(Table not rendered in this revision.)
Document History
- Subject: Inquiry Driven Systems : An Inquiry Into Inquiry
- Contact: Jon Awbrey
- Version: Draft 8.70
- Created: 23 Jun 1996
- Revised: 06 Jan 2002
- Advisor: M.A. Zohdy
- Setting: Oakland University, Rochester, Michigan, USA
- Excerpt: Section 1.3.10 (Recurring Themes)
- Excerpt: Subsections 1.3.10.8 - 1.3.10.13