Directory talk:Jon Awbrey/Papers/Syntactic Transformations
Alternate Version : Needs To Be Reconciled
1.3.12. Syntactic Transformations
1.3.12.1. Syntactic Transformation Rules
A good way to summarize the necessary translations between different styles of indication, and along the way to organize their use in practice, is by means of the rules of syntactic transformation (ROSTs) that partially formalize the translations in question.
Rudimentary examples of ROSTs are readily mined from the raw materials that are already available in this area of discussion. To begin as near the beginning as possible, let the definition of an indicator function be recorded in the following form:
Definition 1.  Indicator Function

  If Q c X,
  then -{Q}- : X -> %B%
  such that, for all x in X:

  D1a.  -{Q}-(x)  <=>  x in Q.
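Definition 1 can be sketched in a few lines of Python, with %B% modeled as the integers {0, 1}; the helper name `indicator` and the sample sets are my own illustration, not anything from the text:

```python
def indicator(Q, X):
    """Return the indicator function -{Q}- : X -> {0, 1} of a subset Q of X.

    Per Definition 1, -{Q}-(x) <=> x in Q, for all x in X.
    (Hypothetical helper; the names are illustrative.)
    """
    if not Q <= X:
        raise ValueError("Q must be a subset of X")
    return lambda x: 1 if x in Q else 0

X = {1, 2, 3, 4}
Q = {2, 4}
chi_Q = indicator(Q, X)

# The indicator picks out exactly the members of Q.
assert [chi_Q(x) for x in sorted(X)] == [0, 1, 0, 1]
```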
In practice, a definition like this is commonly used to substitute one of two logically equivalent expressions or sentences for the other in a context where the conditions of using the definition in this way are satisfied and where the change is perceived as potentially advancing a proof. The employment of a definition in this way can be expressed in the form of a ROST that allows one to exchange two expressions of logically equivalent forms for one another in every context where their logical values are the only consideration. To be specific, the logical value of an expression is the value in the boolean domain %B% = {%0%, %1%} that the expression represents to, or stands for in, its context.
In the case of Definition 1, the corresponding ROST permits one to exchange a sentence of the form "x in Q" with an expression of the form "-{Q}-(x)" in any context that satisfies the conditions of its use, namely, the conditions of the definition that lead up to the stated equivalence. The relevant ROST is recorded in Rule 1. By way of convention, I list the items that fall under a rule in rough order of their ascending conceptual subtlety or their increasing syntactic complexity, without regard for the normal or the typical orders of their exchange, since this can vary widely from case to case.
Rule 1

  If Q c X,
  then -{Q}- : X -> %B%,
  and if x in X,
  then the following are equivalent:

  R1a.  x in Q.

  R1b.  -{Q}-(x).
Conversely, any rule of this sort, properly qualified by the conditions under which it applies, can be turned back into a summary statement of the logical equivalence that is involved in its application. This mode of conversion between a static principle and a transformational rule, in other words, between a statement of equivalence and an equivalence of statements, is so automatic that it is usually not necessary to make a separate note of the "horizontal" versus the "vertical" versions of what amounts to the same abstract principle.
As another example of a ROST, consider the following logical equivalence, which holds for any X c U and for all u in U.
- -{X}-(u) <=> -{X}-(u) = 1.
In practice, this logical equivalence is used to exchange an expression of the form "-{X}-(u)" with a sentence of the form "-{X}-(u) = 1" in any context where one has a relatively fixed X c U in mind and where one is conceiving u in U to vary over its whole domain, namely, the universe U. This leads to the ROST that is given in Rule 2.
Rule 2

  If f : U -> %B%
  and u in U,
  then the following are equivalent:

  R2a.  f(u).

  R2b.  f(u) = 1.
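On this reading, Rule 2 is the observation that asserting f(u) comes to the same thing as asserting f(u) = 1. A minimal check over a finite universe, with B modeled as {0, 1} and f an invented illustrative proposition:

```python
U = {0, 1, 2, 3}
f = lambda u: 1 if u % 2 == 0 else 0   # any proposition f : U -> B

# R2a ("f(u)", read as a truth claim) agrees with R2b ("f(u) = 1") at every u.
for u in U:
    assert bool(f(u)) == (f(u) == 1)
```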
Rules like these can be chained together to establish extended rules, just so long as their antecedent conditions are compatible. For example, Rules 1 and 2 combine to give the equivalents that are listed in Rule 3. This follows from a recognition that the function -{X}- : U -> %B% that is introduced in Rule 1 is an instance of the function f : U -> %B% that is mentioned in Rule 2. By the time one arrives in the "consequence box" of either Rule, then, one has in mind a comparatively fixed X c U, a proposition f or -{X}- about things in U, and a variable argument u in U.
Rule 3

  If X c U
  and u in U,
  then the following are equivalent:

  R3a.  u in X.                                       : R1a
                                                      ::
  R3b.  -{X}-(u).                                     : R1b
                                                      : R2a
                                                      ::
  R3c.  -{X}-(u) = 1.                                 : R2b
A large stock of rules can be derived in this way, by chaining together segments that are selected from a stock of previous rules, with the whole process of derivation perhaps leading back to a core stock of rules that rests on an axiomatic basis. In order to keep track of their derivations, since their pedigrees help one to remember the reasons for trusting their use in the first place, derived rules can be annotated by citing the rules from which they are derived.
In the present discussion, I am using a particular style of annotation for rule derivations, one that is called "proof by grammatical paradigm" or "proof by syntactic analogy". The annotations in the right margin of the Rule box can be read as the "denominators" of the paradigm that is being employed, in other words, as the alternating terms of comparison in a sequence of analogies. This can be illustrated by considering the derivation of Rule 3 in detail. Taking the steps marked in the box one at a time, one can interweave the applications in the central body of the box with the annotations in the right margin of the box, reading "is to" for the ":" sign and "as" for the "::" sign, in the following fashion:
  R3a.  "u in X"       is to  R1a, namely, "u in X",
  as
  R3b.  "{X}(u)"       is to  R1b, namely, "{X}(u)",
  and
        "{X}(u)"       is to  R2a, namely, "f(u)",
  as
  R3c.  "{X}(u) = 1"   is to  R2b, namely, "f(u) = 1".
Notice how the sequence of analogies pivots on the item R3b, viewing it first under the aegis of R1b, as the second term of the first analogy, and then turning to view it again under the guise of R2a, as the first term of the second analogy.
By way of convention, I frequently refer to rules that are tailored to a particular application, case, or subject, or that are adapted to a particular goal, object, or purpose, as "Facts".
Besides linking rules together into extended sequences of equivalents, there is one other way that is commonly used to get new rules from old. Novel starting points for rules can be obtained by extracting pairs of equivalent expressions from a sequence that falls under an established rule, and then by stating their equality in the proper form of an equation. For example, by extracting the equivalent expressions that are annotated as "R3a" and "R3c" in Rule 3 and by explicitly stating their equivalence, one obtains the specialized result that is recorded in Corollary 1.
Corollary 1

  If X c U
  and u C U,
  then the following statement is true:

  C1a.  u C X  <=>  {X}(u) = 1.                       : R3a=R3c
There are a number of issues, that arise especially in establishing the proper use of ROSTs, that are appropriate to discuss at this juncture. The notation "[S]" is intended to represent "the proposition denoted by the sentence S". There is only one problem with the use of this form. There is, in general, no such thing as "the" proposition denoted by S. Generally speaking, if a sentence is taken out of context and considered across a variety of different contexts, there is no unique proposition that it can be said to denote. But one is seldom ever speaking at the maximum level of generality, or even found to be thinking of it, and so this notation is usually meaningful and readily understandable whenever it is read in the proper frame of mind. Still, once the issue is raised, the question of how these meanings and understandings are possible has to be addressed, especially if one desires to express the regulations of their syntax in a partially computational form. This requires a closer examination of the very notion of "context", and it involves engaging in enough reflection on the "contextual evaluation" of sentences that the relevant principles of its successful operation can be discerned and rationalized in explicit terms.
A sentence that is written in a context where it represents a value of 1 or 0 as a function of things in the universe U, where it stands for a value of "true" or "false", depending on how the signs that constitute its proper syntactic arguments are interpreted as denoting objects in U, in other words, where it is bound to lead its interpreter to view its own truth or falsity as determined by a choice of objects in U, is a sentence that might as well be written in the context "[ ... ]", whether or not this frame is explicitly marked around it.
More often than not, the context of interpretation fixes the denotations of most of the signs that make up a sentence, and so it is safe to adopt the convention that only those signs whose objects are not already fixed are free to vary in their denotations. Thus, only the signs that remain in default of prior specification are subject to treatment as variables, with a decree of functional abstraction hanging over all of their heads.
- [u C X] = Lambda (u, C, X).(u C X).
As it is presently stated, Rule 1 lists a couple of manifest sentences, and it authorizes one to make exchanges in either direction between the syntactic items that have these two forms. But a sentence is any sign that denotes a proposition, and thus there are a number of less obvious sentences that can be added to this list, extending the number of items that are licensed to be exchanged. Consider the sense of equivalence among sentences that is recorded in Rule 4.
Rule 4

  If X c U is fixed
  and u C U is varied,
  then the following are equivalent:

  R4a.  u C X.

  R4b.  [u C X].

  R4c.  [u C X](u).

  R4d.  {X}(u).

  R4e.  {X}(u) = 1.
The first and last items on this list, namely, the sentences "u C X" and "{X}(u) = 1" that are annotated as "R4a" and "R4e", respectively, are just the pair of sentences from Rule 3 whose equivalence for all u C U is usually taken to define the idea of an indicator function {X} : U -> B. At first sight, the inclusion of the other items appears to involve a category confusion, in other words, to mix the modes of interpretation and to create an array of mismatches between their own ostensible types and the ruling type of a sentence. On reflection, and taken in context, these problems are not as serious as they initially seem. For instance, the expression "[u C X]" ostensibly denotes a proposition, but if it does, then it evidently can be recognized, by virtue of this very fact, to be a genuine sentence. As a general rule, if one can see it on the page, then it cannot be a proposition, but can be, at best, a sign of one.
The use of the basic connectives can be expressed in the form of a ROST as follows:
Logical Translation Rule 0

  If Sj is a sentence about things in the universe U
  and Pj is a proposition about things in the universe U,
  such that:

  L0a.  [Sj] = Pj, for all j C J,

  then the following equations are true:

  L0b.  [ConcJj Sj]  =  ConjJj [Sj]  =  ConjJj Pj.

  L0c.  [SurcJj Sj]  =  SurjJj [Sj]  =  SurjJj Pj.
As a general rule, the application of a ROST involves the recognition of an antecedent condition and the facilitation of a consequent condition. The antecedent condition is a state whose initial expression presents a match, in a formal sense, to one of the sentences that are listed in the ROST, and the consequent condition is achieved by taking its suggestions seriously, in other words, by following its sequence of equivalents and implicants to some other link in its chain.
Generally speaking, the application of a rule involves the recognition of an antecedent condition as a case that falls under a clause of the rule. This means that the antecedent condition is able to be captured in the form, conceived in the guise, expressed in the manner, grasped in the pattern, or recognized in the shape of one of the sentences in a list of equivalents or a chain of implicants.
A condition is "amenable" to a rule if any of its conceivable expressions formally match any of the expressions that are enumerated by the rule. Applying the rule then relegates the remaining expressions to the production of a result. Thus, there is the choice of an initial expression that needs to be checked on input for whether it fits the antecedent condition, and there are several types of output that are generated as a consequence, only a few of which are usually needed at any given time.
Logical Translation Rule 1

  If S is a sentence about things in the universe U
  and P is a proposition : U -> B,
  such that:

  L1a.  [S] = P,

  then the following equations hold:

  L1b00.  [False]  =  ()     =  0 : U->B.

  L1b01.  [Not S]  =  ([S])  =  (P) : U->B.

  L1b10.  [S]      =  [S]    =  P : U->B.

  L1b11.  [True]   =  (())   =  1 : U->B.
Geometric Translation Rule 1

  If X c U
  and P : U -> B,
  such that:

  G1a.  {X} = P,

  then the following equations hold:

  G1b00.  {{}}   =  ()     =  0 : U->B.

  G1b10.  {~X}   =  ({X})  =  (P) : U->B.

  G1b01.  {X}    =  {X}    =  P : U->B.

  G1b11.  {U}    =  (())   =  1 : U->B.
Logical Translation Rule 2

  If S, T are sentences about things in the universe U
  and P, Q are propositions : U -> B,
  such that:

  L2a.  [S] = P and [T] = Q,

  then the following equations hold:

  L2b00.  [False]              =  ()            =  0 : U->B.
  L2b01.  [Neither S nor T]    =  ([S])([T])    =  (P)(Q).
  L2b02.  [Not S, but T]       =  ([S])[T]      =  (P) Q.
  L2b03.  [Not S]              =  ([S])         =  (P).
  L2b04.  [S and not T]        =  [S]([T])      =  P (Q).
  L2b05.  [Not T]              =  ([T])         =  (Q).
  L2b06.  [S or T, not both]   =  ([S], [T])    =  (P, Q).
  L2b07.  [Not both S and T]   =  ([S].[T])     =  (P Q).
  L2b08.  [S and T]            =  [S].[T]       =  P.Q.
  L2b09.  [S <=> T]            =  (([S], [T]))  =  ((P, Q)).
  L2b10.  [T]                  =  [T]           =  Q.
  L2b11.  [S => T]             =  ([S]([T]))    =  (P (Q)).
  L2b12.  [S]                  =  [S]           =  P.
  L2b13.  [S <= T]             =  (([S]) [T])   =  ((P) Q).
  L2b14.  [S or T]             =  (([S])([T]))  =  ((P)(Q)).
  L2b15.  [True]               =  (())          =  1 : U->B.
Geometric Translation Rule 2

  If X, Y c U
  and P, Q : U -> B,
  such that:

  G2a.  {X} = P and {Y} = Q,

  then the following equations hold:

  G2b00.  {{}}         =  ()            =  0 : U->B.
  G2b01.  {~X n ~Y}    =  ({X})({Y})    =  (P)(Q).
  G2b02.  {~X n Y}     =  ({X}){Y}      =  (P) Q.
  G2b03.  {~X}         =  ({X})         =  (P).
  G2b04.  {X n ~Y}     =  {X}({Y})      =  P (Q).
  G2b05.  {~Y}         =  ({Y})         =  (Q).
  G2b06.  {X + Y}      =  ({X}, {Y})    =  (P, Q).
  G2b07.  {~(X n Y)}   =  ({X}.{Y})     =  (P Q).
  G2b08.  {X n Y}      =  {X}.{Y}       =  P.Q.
  G2b09.  {~(X + Y)}   =  (({X}, {Y}))  =  ((P, Q)).
  G2b10.  {Y}          =  {Y}           =  Q.
  G2b11.  {~(X n ~Y)}  =  ({X}({Y}))    =  (P (Q)).
  G2b12.  {X}          =  {X}           =  P.
  G2b13.  {~(~X n Y)}  =  (({X}) {Y})   =  ((P) Q).
  G2b14.  {X u Y}      =  (({X})({Y}))  =  ((P)(Q)).
  G2b15.  {U}          =  (())          =  1 : U->B.
Value Rule 1

  If v, w C B,
  then "v = w" is a sentence about <v, w> C B^2,
  [v = w] is a proposition : B^2 -> B,
  and the following are identical values in B:

  V1a.  [ v = w ](v, w)

  V1b.  [ v <=> w ](v, w)

  V1c.  (( v , w ))
Value Rule 1

  If v, w C B,
  then the following are equivalent:

  V1a.  v = w.

  V1b.  v <=> w.

  V1c.  (( v , w )).
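Reading "(( v , w ))" as the logical equivalence of v and w, in other words as the negation of their exclusive disjunction, Value Rule 1 can be verified by exhausting B x B. The encoding of the form as 1 - (v XOR w) is my gloss on the notation, not something the text states in these terms:

```python
B = (0, 1)

# Gloss of (( v , w )) as NOT-XOR, i.e. the biconditional of v and w.
equiv = lambda v, w: 1 - (v ^ w)

# V1a <=> V1c: "v = w" and "(( v , w ))" agree at every point of B x B.
for v in B:
    for w in B:
        assert (v == w) == bool(equiv(v, w))
```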
A rule that allows one to turn equivalent sentences into identical propositions:
- (S <=> T) <=> ([S] = [T])
Consider [ v = w ](v, w) and [ v(u) = w(u) ](u)
Value Rule 1

  If v, w C B,
  then the following are identical values in B:

  V1a.  [ v = w ]

  V1b.  [ v <=> w ]

  V1c.  (( v , w ))
Value Rule 1

  If f, g : U -> B
  and u C U,
  then the following are identical values in B:

  V1a.  [ f(u) = g(u) ]

  V1b.  [ f(u) <=> g(u) ]

  V1c.  (( f(u) , g(u) ))
Value Rule 1

  If f, g : U -> B,
  then the following are identical propositions on U:

  V1a.  [ f = g ]

  V1b.  [ f <=> g ]

  V1c.  (( f , g ))$
Evaluation Rule 1

  If f, g : U -> B
  and u C U,
  then the following are equivalent:

  E1a.  f(u) = g(u).                                  : V1a
                                                      ::
  E1b.  f(u) <=> g(u).                                : V1b
                                                      ::
  E1c.  (( f(u) , g(u) )).                            : V1c
                                                      : $1a
                                                      ::
  E1d.  (( f , g ))$(u).                              : $1b
Evaluation Rule 1

  If S, T are sentences about things in the universe U,
  f, g are propositions : U -> B,
  and u C U,
  then the following are equivalent:

  E1a.  f(u) = g(u).                                  : V1a
                                                      ::
  E1b.  f(u) <=> g(u).                                : V1b
                                                      ::
  E1c.  (( f(u) , g(u) )).                            : V1c
                                                      : $1a
                                                      ::
  E1d.  (( f , g ))$(u).                              : $1b
Definition 2

  If X, Y c U,
  then the following are equivalent:

  D2a.  X = Y.

  D2b.  u C X  <=>  u C Y, for all u C U.
Definition 3

  If f, g : U -> V,
  then the following are equivalent:

  D3a.  f = g.

  D3b.  f(u) = g(u), for all u C U.
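Definition 3 reduces the equality of functions to the equality of their values at each point, which is directly checkable on a finite domain. The functions f and g below are hypothetical examples that happen to agree everywhere on U:

```python
U = range(5)
f = lambda u: u * u % 3
g = lambda u: (u % 3) * (u % 3) % 3   # a different expression, same values on U

# D3b: f = g as functions on U iff f(u) = g(u) for all u in U.
assert all(f(u) == g(u) for u in U)
```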
Definition 4

  If X c U,
  then the following are identical subsets of UxB:

  D4a.  {X}

  D4b.  {<u, v> C UxB : v = [u C X]}
Definition 5

  If X c U,
  then the following are identical propositions:

  D5a.  {X}.

  D5b.  f : U -> B : f(u) = [u C X], for all u C U.
Given an indexed set of sentences, Sj for j C J, it is possible to consider the logical conjunction of the corresponding propositions. Various notations for this concept can be useful in various contexts, a sufficient sample of which is recorded in Definition 6.
Definition 6

  If Sj is a sentence about things in the universe U,
  for all j C J,
  then the following are equivalent:

  D6a.  Sj, for all j C J.

  D6b.  For all j C J, Sj.

  D6c.  Conj(j C J) Sj.

  D6d.  ConjJ,j Sj.

  D6e.  ConjJj Sj.
Definition 7

  If S, T are sentences about things in the universe U,
  then the following are equivalent:

  D7a.  S <=> T.

  D7b.  [S] = [T].
Rule 5

  If X, Y c U,
  then the following are equivalent:

  R5a.  X = Y.                                        : D2a
                                                      ::
  R5b.  u C X  <=>  u C Y, for all u C U.             : D2b
                                                      : D7a
                                                      ::
  R5c.  [u C X]  =  [u C Y], for all u C U.           : D7b
                                                      : ???
                                                      ::
  R5d.  {<u, v> C UxB : v = [u C X]}
        =
        {<u, v> C UxB : v = [u C Y]}.                 : ???
                                                      : D5b
                                                      ::
  R5e.  {X} = {Y}.                                    : D5a
Rule 6

  If f, g : U -> V,
  then the following are equivalent:

  R6a.  f = g.                                        : D3a
                                                      ::
  R6b.  f(u) = g(u), for all u C U.                   : D3b
                                                      : D6a
                                                      ::
  R6c.  ConjUu (f(u) = g(u)).                         : D6e
Rule 7

  If P, Q : U -> B,
  then the following are equivalent:

  R7a.  P = Q.                                        : R6a
                                                      ::
  R7b.  P(u) = Q(u), for all u C U.                   : R6b
                                                      ::
  R7c.  ConjUu (P(u) = Q(u)).                         : R6c
                                                      : P1a
                                                      ::
  R7d.  ConjUu (P(u) <=> Q(u)).                       : P1b
                                                      ::
  R7e.  ConjUu (( P(u) , Q(u) )).                     : P1c
                                                      : $1a
                                                      ::
  R7f.  ConjUu (( P , Q ))$(u).                       : $1b
Rule 8

  If S, T are sentences about things in the universe U,
  then the following are equivalent:

  R8a.  S <=> T.                                      : D7a
                                                      ::
  R8b.  [S] = [T].                                    : D7b
                                                      : R7a
                                                      ::
  R8c.  [S](u) = [T](u), for all u C U.               : R7b
                                                      ::
  R8d.  ConjUu ( [S](u) = [T](u) ).                   : R7c
                                                      ::
  R8e.  ConjUu ( [S](u) <=> [T](u) ).                 : R7d
                                                      ::
  R8f.  ConjUu (( [S](u) , [T](u) )).                 : R7e
                                                      ::
  R8g.  ConjUu (( [S] , [T] ))$(u).                   : R7f
For instance, the observation that expresses the equality of sets in terms of their indicator functions can be formalized according to the pattern in Rule 9, namely, at lines (a, b, c), and these components of Rule 9 can be cited in future uses as "R9a", "R9b", "R9c", respectively. Using Rule 7, annotated as "R7", to adduce a few properties of indicator functions to the account, it is possible to extend Rule 9 by another few steps, referenced as "R9d", "R9e", "R9f", "R9g".
Rule 9

  If X, Y c U,
  then the following are equivalent:

  R9a.  X = Y.                                        : R5a
                                                      ::
  R9b.  {X} = {Y}.                                    : R5e
                                                      : R7a
                                                      ::
  R9c.  {X}(u) = {Y}(u), for all u C U.               : R7b
                                                      ::
  R9d.  ConjUu ( {X}(u) = {Y}(u) ).                   : R7c
                                                      ::
  R9e.  ConjUu ( {X}(u) <=> {Y}(u) ).                 : R7d
                                                      ::
  R9f.  ConjUu (( {X}(u) , {Y}(u) )).                 : R7e
                                                      ::
  R9g.  ConjUu (( {X} , {Y} ))$(u).                   : R7f
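On a finite universe the chain of Rule 9 collapses into a single computation, with Python's `all` playing the role of the conjunction ConjUu. The sets and the helper `chi` are illustrative assumptions, not anything from the text:

```python
U = {1, 2, 3, 4, 5}
X = {2, 4}
Y = {u for u in U if u % 2 == 0}       # a second description of the same set

chi = lambda S: (lambda u: 1 if u in S else 0)   # the indicator {S} : U -> B

# R9a <=> R9d: X = Y iff ConjUu ( {X}(u) = {Y}(u) ).
assert (X == Y) == all(chi(X)(u) == chi(Y)(u) for u in U)
```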
Rule 10

  If X, Y c U,
  then the following are equivalent:

  R10a.  X = Y.                                       : D2a
                                                      ::
  R10b.  u C X  <=>  u C Y, for all u C U.            : D2b
                                                      : R8a
                                                      ::
  R10c.  [u C X]  =  [u C Y].                         : R8b
                                                      ::
  R10d.  For all u C U, [u C X](u)  =  [u C Y](u).    : R8c
                                                      ::
  R10e.  ConjUu ( [u C X](u)  =  [u C Y](u) ).        : R8d
                                                      ::
  R10f.  ConjUu ( [u C X](u) <=> [u C Y](u) ).        : R8e
                                                      ::
  R10g.  ConjUu (( [u C X](u) , [u C Y](u) )).        : R8f
                                                      ::
  R10h.  ConjUu (( [u C X] , [u C Y] ))$(u).          : R8g
Rule 11

  If X c U,
  then the following are equivalent:

  R11a.  X  =  {u C U : S}.                           : R5a
                                                      ::
  R11b.  {X}  =  { {u C U : S} }.                     : R5e
                                                      ::
  R11c.  {X} c UxB :
         {X}  =  {<u, v> C UxB : v = [S](u)}.         : R
                                                      ::
  R11d.  {X} : U -> B :
         {X}(u)  =  [S](u), for all u C U.            : R
                                                      ::
  R11e.  {X}  =  [S].                                 : R
An application of Rule 11 involves the recognition of an antecedent condition as a case under the Rule, that is, as a condition that matches one of the sentences in the Rule's chain of equivalents, and it requires the relegation of the other expressions to the production of a result. Thus, there is the choice of an initial expression that has to be checked on input for whether it fits the antecedent condition, and there is the choice of three types of output that are generated as a consequence, only one of which is generally needed at any given time. More often than not, though, a rule is applied in only a few of its possible ways. The usual antecedent and the usual consequents for Rule 11 can be distinguished in form and specialized in practice as follows:
a. R11a marks the usual starting place for an application of the Rule, that is, the standard form of antecedent condition that is likely to lead to an invocation of the Rule.
b. R11b records the trivial consequence of applying the spiny braces to both sides of the initial equation.
c. R11c gives a version of the indicator function with {X} c UxB, called its "extensional form".
d. R11d gives a version of the indicator function with {X} : U->B, called its "functional form".
Applying Rule 9, Rule 8, and the Logical Rules to the special case where S <=> (X = Y), one obtains the following general fact.
Fact 1

  If X, Y c U,
  then the following are equivalent:

  F1a.  S  <=>  X = Y.                                : R9a
                                                      ::
  F1b.  S  <=>  {X} = {Y}.                            : R9b
                                                      ::
  F1c.  S  <=>  {X}(u) = {Y}(u), for all u C U.       : R9c
                                                      ::
  F1d.  S  <=>  ConjUu ( {X}(u) = {Y}(u) ).           : R9d
                                                      : R8a
                                                      ::
  F1e.  [S]  =  [ ConjUu ( {X}(u) = {Y}(u) ) ].       : R8b
                                                      : ???
                                                      ::
  F1f.  [S]  =  ConjUu [ {X}(u) = {Y}(u) ].           : ???
                                                      ::
  F1g.  [S]  =  ConjUu (( {X}(u) , {Y}(u) )).         : $1a
                                                      ::
  F1h.  [S]  =  ConjUu (( {X} , {Y} ))$(u).           : $1b
1.3.12.2. Derived Equivalence Relations
One seeks a method of general application for approaching the individual sign relation, a way to select an aspect of its form, to analyze it with regard to its intrinsic structure, and to classify it in comparison with other sign relations. With respect to a particular sign relation, one approach that presents itself is to examine the relation between signs and interpretants that is given directly by its connotative component and to compare it with the various forms of derived, indirect, mediate, or peripheral relationships that can be found to exist among signs and interpretants by way of secondary considerations or subsequent studies. Of especial interest are the relationships among signs and interpretants that can be obtained by working through the collections of objects that they commonly or severally denote.
A classic way of showing that two sets are equal is to show that every element of the first belongs to the second and that every element of the second belongs to the first. The problem with this strategy is that one can spend a considerable amount of time trying to prove that two sets are equal before it occurs to one to look for a counterexample, that is, an element of the first that does not belong to the second or an element of the second that does not belong to the first, in cases where that is precisely what one ought to be seeking. It would be nice if there were a more balanced, impartial, neutral, or nonchalant way to go about this task, one that did not require such an undue commitment to either side, a technique that helps to pinpoint the counterexamples when they exist, and a method that keeps in mind the original relation of "proving that" and "showing that" to probing, testing, and seeing "whether".
A different way of seeing that two sets are equal, or of seeing whether two sets are equal, is based on the following observation:
  Two sets are equal as sets

  <=>  the indicator functions of these sets are equal as functions

  <=>  the values of these functions are equal on all domain elements.
It is important to notice the hidden quantifier, of a universal kind, that lurks in all three equivalent statements but is only revealed in the last.
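One virtue of the indicator-function formulation is that it pinpoints counterexamples: when the sets differ, the first domain element at which the indicators disagree is itself the witness. A sketch, with invented sets:

```python
U = range(10)
X = {u for u in U if u % 2 == 0}       # multiples of 2 in U
Y = {u for u in U if u % 3 == 0}       # multiples of 3 in U

chi = lambda S: (lambda u: 1 if u in S else 0)

# Scan the domain; the first disagreement, if any, refutes X = Y.
witness = next((u for u in U if chi(X)(u) != chi(Y)(u)), None)
assert witness == 2   # 2 is in X but not in Y
```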
In making the next set of definitions and in using the corresponding terminology it is taken for granted that all of the references of signs are relative to a particular sign relation R c OxSxI that either remains to be specified or is already understood. Further, I continue to assume that S = I, in which case this set is called the "syntactic domain" of R.
In the following definitions let R c OxSxI, let S = I, and let x, y C S.
Recall the definition of Con(R), the connotative component of R, in the following form:
- Con(R) = RSI = {<s, i> C SxI : <o, s, i> C R for some o C O}.
Equivalent expressions for this concept are recorded in Definition 8.
Definition 8

  If R c OxSxI,
  then the following are identical subsets of SxI:

  D8a.  RSI

  D8b.  ConR

  D8c.  Con(R)

  D8d.  PrSI(R)

  D8e.  {<s, i> C SxI : <o, s, i> C R for some o C O}
The dyadic relation RIS that constitutes the converse of the connotative relation RSI can be defined directly in the following fashion:
- Con(R)^ = RIS = {<i, s> C IxS : <o, s, i> C R for some o C O}.
A few of the many different expressions for this concept are recorded in Definition 9.
Definition 9

  If R c OxSxI,
  then the following are identical subsets of IxS:

  D9a.  RIS

  D9b.  RSI^

  D9c.  ConR^

  D9d.  Con(R)^

  D9e.  PrIS(R)

  D9f.  Conv(Con(R))

  D9g.  {<i, s> C IxS : <o, s, i> C R for some o C O}
Recall the definition of Den(R), the denotative component of R, in the following form:
- Den(R) = ROS = {<o, s> C OxS : <o, s, i> C R for some i C I}.
Equivalent expressions for this concept are recorded in Definition 10.
Definition 10

  If R c OxSxI,
  then the following are identical subsets of OxS:

  D10a.  ROS

  D10b.  DenR

  D10c.  Den(R)

  D10d.  PrOS(R)

  D10e.  {<o, s> C OxS : <o, s, i> C R for some i C I}
The dyadic relation RSO that constitutes the converse of the denotative relation ROS can be defined directly in the following fashion:
- Den(R)^ = RSO = {<s, o> C SxO : <o, s, i> C R for some i C I}.
A few of the many different expressions for this concept are recorded in Definition 11.
Definition 11

  If R c OxSxI,
  then the following are identical subsets of SxO:

  D11a.  RSO

  D11b.  ROS^

  D11c.  DenR^

  D11d.  Den(R)^

  D11e.  PrSO(R)

  D11f.  Conv(Den(R))

  D11g.  {<s, o> C SxO : <o, s, i> C R for some i C I}
The "denotation of x in R", written "Den(R, x)", is defined as follows:
- Den(R, x) = {o C O : <o, x> C Den(R)}.
In other words:
- Den(R, x) = {o C O : <o, x, i> C R for some i C I}.
Equivalent expressions for this concept are recorded in Definition 12.
Definition 12

  If R c OxSxI
  and x C S,
  then the following are identical subsets of O:

  D12a.  ROS.x

  D12b.  DenR.x

  D12c.  DenR|x

  D12d.  DenR(, x)

  D12e.  Den(R, x)

  D12f.  Den(R).x

  D12g.  {o C O : <o, x> C Den(R)}

  D12h.  {o C O : <o, x, i> C R for some i C I}
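For a finite triadic relation, the comprehension D12h translates directly into code. The toy sign relation below, with objects Ann and Bob and signs "A", "a", "B", is invented purely for illustration:

```python
# A toy sign relation R c O x S x I, given as a set of triples (o, s, i).
R = {("Ann", "A", "i1"), ("Ann", "a", "i1"), ("Bob", "B", "i1")}

def den(R, x):
    """Den(R, x) = {o in O : (o, x, i) in R for some i in I}  (D12h)."""
    return {o for (o, s, i) in R if s == x}

assert den(R, "A") == {"Ann"}
assert den(R, "B") == {"Bob"}
```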
Signs are "equiferent" if they refer to all and only the same objects, that is, if they have exactly the same denotations. In other language for the same relation, signs are said to be "denotatively equivalent" or "referentially equivalent", but it is probably best to check whether the extension of this concept over the syntactic domain is really a genuine equivalence relation before jumping to the conclusions that are implied by these latter terms.
To define the "equiference" of signs in terms of their denotations, one says that "x is equiferent to y under R", and writes "x =R y", to mean that Den(R, x) = Den(R, y). Taken in extension, this notion of a relation between signs induces an "equiference relation" on the syntactic domain.
For each sign relation R, this yields a dyadic relation Der(R) c SxI that is defined as follows:
- Der(R) = DerR = {<x, y> C SxI : Den(R, x) = Den(R, y)}.
These definitions and notations are recorded in the following display.
Definition 13

  If R c OxSxI,
  then the following are identical subsets of SxI:

  D13a.  DerR

  D13b.  Der(R)

  D13c.  {<x, y> C SxI : DenR|x = DenR|y}

  D13d.  {<x, y> C SxI : Den(R, x) = Den(R, y)}
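Definition 13 makes Der(R) a pairwise comparison of denotations over the syntactic domain, which on a finite example is a direct comprehension. The toy sign relation, in which "A" and "a" both denote Ann, is invented for illustration:

```python
R = {("Ann", "A", "i"), ("Ann", "a", "i"), ("Bob", "B", "i")}
S = {s for (o, s, i) in R}                      # signs occurring in R

den = lambda x: {o for (o, s, i) in R if s == x}   # Den(R, x), per D12h

# D13d: Der(R) = {(x, y) : Den(R, x) = Den(R, y)}.
der = {(x, y) for x in S for y in S if den(x) == den(y)}

assert ("A", "a") in der        # "A" and "a" are equiferent: both denote Ann
assert ("A", "B") not in der
```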
The relation Der(R) is defined and the notation "x =R y" is meaningful in every situation where Den(-,-) makes sense, but it remains to check whether this relation enjoys the properties of an equivalence relation.
- Reflexive property. Is it true that x =R x for every x C S = I? By definition, x =R x if and only if Den(R, x) = Den(R, x). Thus, the reflexive property holds in any setting where the denotations Den(R, x) are defined for all signs x in the syntactic domain of R.
- Symmetric property. Does x =R y => y =R x for all x, y C S? In effect, does Den(R, x) = Den(R, y) imply Den(R, y) = Den(R, x) for all signs x and y in the syntactic domain S? Yes, so long as the sets Den(R, x) and Den(R, y) are well-defined, a fact which is already being assumed.
- Transitive property. Does x =R y & y =R z => x =R z for all x, y, z C S? To belabor the point, does Den(R, x) = Den(R, y) and Den(R, y) = Den(R, z) imply Den(R, x) = Den(R, z) for all x, y, z in S? Yes, again, under the stated conditions.
It should be clear at this point that any question about the equiference of signs reduces to a question about the equality of sets, specifically, the sets that are indexed by these signs. As a result, so long as these sets are well-defined, the issue of whether equiference relations induce equivalence relations on their syntactic domains is almost as trivial as it initially appears.
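The reflexive, symmetric, and transitive checks of the preceding paragraphs can be run mechanically on any finite case. A sketch over an invented sign relation, with Der(R) computed per Definition 13:

```python
R = {("Ann", "A", "i"), ("Ann", "a", "i"), ("Bob", "B", "i")}
S = {s for (o, s, i) in R}
den = lambda x: {o for (o, s, i) in R if s == x}
der = {(x, y) for x in S for y in S if den(x) == den(y)}

assert all((x, x) in der for x in S)                         # reflexive
assert all((y, x) in der for (x, y) in der)                  # symmetric
assert all((x, z) in der
           for (x, y) in der for (w, z) in der if y == w)    # transitive
```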
Taken in its set-theoretic extension, a relation of equiference induces a "denotative equivalence relation" (DER) on its syntactic domain S = I. This leads to the formation of "denotative equivalence classes" (DEC's), "denotative partitions" (DEP's), and "denotative equations" (DEQ's) on the syntactic domain. But what does it mean for signs to be equiferent?
Notice that this is not the same thing as being "semiotically equivalent", in the sense of belonging to a single "semiotic equivalence class" (SEC), falling into the same part of a "semiotic partition" (SEP), or having a "semiotic equation" (SEQ) between them. It is only when very felicitous conditions obtain, establishing a concord between the denotative and the connotative components of a sign relation, that these two ideas coalesce.
In general, there is no necessity that the equiference of signs, that is, their denotational equivalence or their referential equivalence, induces the same equivalence relation on the syntactic domain as that defined by their semiotic equivalence, even though this state of accord seems like an especially desirable situation. This makes it necessary to find a distinctive nomenclature for these structures, for which I adopt the term "denotative equivalence relations" (DER's). In their train they bring the allied structures of "denotative equivalence classes" (DEC's) and "denotative partitions" (DEP's), while the corresponding statements of "denotative equations" (DEQ's) are expressible in the form "x =R y".
The uses of the equal sign for denoting equations or equivalences are recalled and extended in the following ways:
1. If E is an arbitrary equivalence relation,
then the equation "x =E y" means that <x, y> C E.
2. If R is a sign relation such that RSI is a SER on S = I,
then the semiotic equation "x =R y" means that <x, y> C RSI.
3. If R is a sign relation such that F is its DER on S = I,
then the denotative equation "x =R y" means that <x, y> C F,
in other words, that Den(R, x) = Den(R, y).
The uses of square brackets for denoting equivalence classes are recalled and extended in the following ways:
1. If E is an arbitrary equivalence relation,
then "[x]E" denotes the equivalence class of x under E.
2. If R is a sign relation such that Con(R) is a SER on S = I,
then "[x]R" denotes the SEC of x under Con(R).
3. If R is a sign relation such that Der(R) is a DER on S = I,
then "[x]R" denotes the DEC of x under Der(R).
By applying the form of Fact 1 to the special case where X = Den(R, x) and Y = Den(R, y), one obtains the following facts.
Fact 2.1

  If R c OxSxI,
  then the following are identical subsets of SxI:

  F2.1a.  DerR                                        : D13a
                                                      ::
  F2.1b.  Der(R)                                      : D13b
                                                      ::
  F2.1c.  {<x, y> C SxI :
           Den(R, x) = Den(R, y)}                     : D13c
                                                      : R9a
                                                      ::
  F2.1d.  {<x, y> C SxI :
           {Den(R, x)} = {Den(R, y)}}                 : R9b
                                                      ::
  F2.1e.  {<x, y> C SxI : for all o C O,
           {Den(R, x)}(o) = {Den(R, y)}(o)}           : R9c
                                                      ::
  F2.1f.  {<x, y> C SxI : Conj(o C O)
           {Den(R, x)}(o) = {Den(R, y)}(o)}           : R9d
                                                      ::
  F2.1g.  {<x, y> C SxI : Conj(o C O)
           (( {Den(R, x)}(o) , {Den(R, y)}(o) ))}     : R9e
                                                      ::
  F2.1h.  {<x, y> C SxI : Conj(o C O)
           (( {Den(R, x)} , {Den(R, y)} ))$(o)}       : R9f
                                                      : D12e
                                                      ::
  F2.1i.  {<x, y> C SxI : Conj(o C O)
           (( {ROS.x} , {ROS.y} ))$(o)}               : D12a
Fact 2.2

  If R c OxSxI,
  then the following are equivalent:

  F2.2a.  DerR  =  {<x, y> C SxI : Conj(o C O)
          {Den(R, x)}(o) = {Den(R, y)}(o)}            : R11a
                                                      ::
  F2.2b.  {DerR}  =  { {<x, y> C SxI : Conj(o C O)
          {Den(R, x)}(o) = {Den(R, y)}(o)} }          : R11b
                                                      ::
  F2.2c.  {DerR} c SxIxB :
          {DerR}  =  {<x, y, v> C SxIxB : v =
          [ Conj(o C O)
            {Den(R, x)}(o) = {Den(R, y)}(o) ]}        : R11c
                                                      ::
  F2.2d.  {DerR}  =  {<x, y, v> C SxIxB : v =
          Conj(o C O)
          [ {Den(R, x)}(o) = {Den(R, y)}(o) ]}        : Log
                                                      ::
  F2.2e.  {DerR}  =  {<x, y, v> C SxIxB : v =
          Conj(o C O)
          (( {Den(R, x)}(o) , {Den(R, y)}(o) ))}      : Log
                                                      ::
  F2.2f.  {DerR}  =  {<x, y, v> C SxIxB : v =
          Conj(o C O)
          (( {Den(R, x)} , {Den(R, y)} ))$(o)}        : $
Fact 2.3

  If R c OxSxI,
  then the following are equivalent:

  F2.3a.  DerR  =  {<x, y> C SxI : Conj(o C O)
          {Den(R, x)}(o) = {Den(R, y)}(o)}            : R11a
                                                      ::
  F2.3b.  {DerR} : SxI -> B :
          {DerR}(x, y)  =
          [ Conj(o C O)
            {Den(R, x)}(o) = {Den(R, y)}(o) ]         : R11d
                                                      ::
  F2.3c.  {DerR}(x, y)  =  Conj(o C O)
          [ {Den(R, x)}(o) = {Den(R, y)}(o) ]         : Log
                                                      ::
  F2.3d.  {DerR}(x, y)  =  Conj(o C O)
          [ {DenR}(o, x) = {DenR}(o, y) ]             : Def
                                                      ::
  F2.3e.  {DerR}(x, y)  =  Conj(o C O)
          (( {DenR}(o, x) , {DenR}(o, y) ))           : Log
                                                      : D10b
                                                      ::
  F2.3f.  {DerR}(x, y)  =  Conj(o C O)
          (( {ROS}(o, x) , {ROS}(o, y) ))             : D10a
1.3.12.3. Digression on Derived Relations
A better understanding of derived equivalence relations (DER's) can be achieved by placing their constructions within a more general context, and thus comparing the associated type of derivation operation, namely, the one that takes a triadic relation R into a dyadic relation Der(R), with other types of operations on triadic relations. The proper setting would permit a comparative study of all their constructions from a basic set of projections and a full array of compositions on dyadic relations.
To that end, let the derivation Der(R) be expressed in the following way:
- {DerR}(x, y) = Conj(o C O) (( {RSO}(x, o) , {ROS}(o, y) )).
From this, abstract a form of composition, temporarily notated as "P#Q", where P c XxM and Q c MxY are otherwise arbitrary dyadic relations, and where P#Q c XxY is defined as follows:
- {P#Q}(x, y) = Conj(m C M) (( {P}(x, m) , {Q}(m, y) )).
Compare this with the usual form of composition, typically notated as "P.Q" and defined as follows:
- {P.Q}(x, y) = Disj(m C M) ( {P}(x, m) . {Q}(m, y) ).