Abstract. Leibniz's intuition that necessity corresponds to truth in all possible worlds enabled Kripke to define a rigorous model theory for several axiomatizations of modal logic. Unfortunately, Kripke's model structures lead to a combinatorial explosion when they are extended to all the varieties of modality and intentionality that people routinely use in ordinary language. As an alternative, any semantics based on possible worlds can be replaced by a simpler and more easily generalizable approach based on Dunn's semantics of laws and facts and a theory of contexts based on the ideas of Peirce and McCarthy. To demonstrate consistency, this article defines a family of nested graph models, which can be specialized to a wide variety of model structures, including Kripke's models, situation semantics, temporal models, and many variations of them. An important advantage of nested graph models is the option of partitioning the reasoning tasks into separate metalevel stages, each of which can be axiomatized in classical first-order logic. At each stage, all inferences can be carried out with well-understood theorem provers for FOL or some subset of FOL. To prove that nothing more than FOL is required, Section 6 shows how any nested graph model with a finite nesting depth can be flattened to a conventional Tarski-style model. For most purposes, however, the nested models are computationally more tractable and intuitively more understandable.
An earlier version of this article was presented at the Φlog Conference in Denmark in May 2002. This version has been published in Knowledge Contributors, edited by V. F. Hendricks, K. F. Jørgensen, and S. A. Pedersen, Kluwer Academic Publishers, Dordrecht, 2003, pp. 145-184.
Possible worlds have been the most popular semantic foundation for modal logic since Kripke (1963) adopted them for his version of model structures. Lewis (1986), for example, argued that "We ought to believe in other possible worlds and individuals because systematic philosophy goes more smoothly in many ways if we do." Yet computer implementations of modal reasoning replace possible worlds with "ersatz worlds" consisting of collections of propositions that more closely resemble Hintikka's (1963) model sets. By dividing the model sets into necessary laws and contingent facts, Dunn (1973) defined a conservative refinement of Kripke's semantics that eliminated the need for a "realist" view of possible worlds. Instead of assuming Kripke's accessibility relation as an unexplained primitive, Dunn derived it from the selection of laws and facts.
Since Dunn's semantics is logically equivalent to Kripke's for conventional modalities, most logicians ignored it in favor of Kripke's. For multimodal reasoning, however, Dunn's approach simplifies the reasoning process by separating the metalevel reasoning about laws and facts from the object-level reasoning in ordinary first-order logic. For each modality, Kripke semantics supports two operators, such as □ for necessity and ◊ for possibility. For temporal logic, the same two operators are interpreted as always and sometimes. For deontic logic, they are reinterpreted as obligation and permission. That approach cannot represent, much less reason about, a sentence that mixes all three modalities, such as You are never obligated to do anything impossible. The limitation to just one modality is what Scott (1970) considered "one of the biggest mistakes of all in modal logic":
The only way to have any philosophically significant results in deontic or epistemic logic is to combine these operators with: Tense operators (otherwise how can you formulate principles of change?); the logical operators (otherwise how can you compare the relative with the absolute?); the operators like historical or physical necessity (otherwise how can you relate the agent to his environment?); and so on and so on. (p. 143)
These philosophical considerations are even more pressing for linguistics, which must relate different modalities in the same sentence. Dunn's semantics facilitates multimodal interactions by allowing each modal operator or each verb that implies a modal operator to have its own associated laws. At the metalevel, laws can be distinguished from facts and from the laws associated with different verbs or operators. At the object level, however, the reasoning process can use first-order logic without distinguishing laws from facts or the laws of one modality from the laws of another.
To take advantage of Dunn's semantics, the metalevel reasoning should be performed in a separate context from the object-level reasoning. This separation requires a formal theory of contexts that can distinguish different metalevels. But as Rich Thomason (2001) observed, "The theory of context is important and problematic — problematic because the intuitions are confused, because disparate disciplines are involved, and because the chronic problem in cognitive science of how to arrive at a productive relation between formalizations and applications applies with particular force to this area." The version of contexts adopted for this article is based on a representation that Peirce introduced for existential graphs (EGs) and Sowa (1984) adopted as a foundation for conceptual graphs (CGs). That approach is further elaborated along the lines suggested by McCarthy (1993) and developed by Sowa (1995, 2000).
Sections 2, 3, and 4 of this article summarize Dunn's semantics of laws and facts, a theory of contexts based on the work of Peirce and McCarthy, and Tarski's hierarchy of metalevels. Then Section 5 introduces nested graph models (NGMs) as a general formalism for a family of models that can be specialized for various theories of modality and intentionality. Section 6 shows how any NGM with a finite depth of nesting can be flattened to a Tarski-style model consisting of nothing but a set D of individuals and a set R of relations over D. Although the process of flattening shows that modalities can be represented in first-order logic, the flattening comes at the expense of adding extra arguments to each relation to indicate every context in which it is nested. Finally, Section 7 shows how Peirce's semeiotic, Dunn's semantics, Tarski's metalevels, and nested graph models provide a powerful combination of tools for analyzing and formalizing semantic relationships.
Philosophers since Aristotle have recognized that modality is related to laws; Dunn's innovation made the relationships explicit. Instead of Kripke's primitive accessibility relation between worlds, Dunn (1973) replaced each possible world with two sets of propositions called laws and facts. For every Kripke world w, Dunn assumed an ordered pair (M,L), where M is a Hintikka-style model set called the facts of w and L is a subset of M called the laws of w. For this article, the following conventions are assumed:
To show how the accessibility relation from one world to another can be derived from the choice of laws, let (M_{1},L_{1}) be a pair of facts and laws that describe a possible world w_{1}, and let the pair (M_{2},L_{2}) describe a world w_{2}. Dunn defined accessibility from the world w_{1} to the world w_{2} to mean that the laws L_{1} are a subset of the facts in M_{2}:
R(w_{1},w_{2}) ≡ L_{1} ⊂ M_{2}.
According to this definition, the laws of the first world w_{1} remain true in the second world w_{2}, but they may be demoted from the status of laws to merely contingent facts. In Kripke's semantics, possibility ◊p means that p is true of some world w accessible from the real world w_{0}:
◊p ≡ (∃w:World)(R(w_{0},w) ∧ w|=p).
By substituting the laws and facts for the possible worlds, Dunn restated the definitions of possibility and necessity:
◊p ≡ (∃M:ModelSet)(L_{0} ⊂ M ∧ p∈M).
Now possibility ◊p means that there exists a model set M that contains the laws of the real world L_{0} and p is a fact in M. Since M is consistent and it contains the laws L_{0}, possibility implies that p must be consistent with the laws of the real world. By the same substitutions, the definition of necessity becomes
□p ≡ (∀M:ModelSet)(L_{0} ⊂ M ⊃ p∈M).
Necessity □p means that every model set M that contains the laws of the real world also contains p.
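Dunn's two definitions translate directly into executable form. The following Python sketch (illustrative names and toy data, not from the article) represents model sets as plain Python sets of atomic propositions and checks ◊p and □p against the laws L_{0} of the real world:

```python
# Sketch of Dunn's definitions: a proposition p is possible if some
# model set contains the real-world laws L0 together with p; it is
# necessary if every model set containing L0 also contains p.

def possible(p, laws0, model_sets):
    """◊p: some model set M with L0 ⊆ M has p ∈ M."""
    return any(laws0 <= m and p in m for m in model_sets)

def necessary(p, laws0, model_sets):
    """□p: every model set M with L0 ⊆ M has p ∈ M."""
    return all(p in m for m in model_sets if laws0 <= m)

# A toy universe of model sets: 'gravity' is a law of the real world.
L0 = {"gravity"}
worlds = [
    {"gravity", "rain"},   # accessible: contains L0
    {"gravity"},           # accessible: contains L0
    {"antigravity"},       # not accessible: lacks L0
]

print(possible("rain", L0, worlds))      # rain holds in some accessible world
print(necessary("gravity", L0, worlds))  # the laws hold in every accessible world
print(necessary("rain", L0, worlds))     # rain fails in the second world
```

Note that no accessibility relation is stored anywhere: the subset test `laws0 <= m` plays exactly the role of Dunn's derived relation.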
Dunn performed the same substitutions in Kripke's constraints on the accessibility relation. The result is a restatement of the constraints in terms of the laws and facts:
Dunn's theory is a conservative refinement of Kripke's theory, since any Kripke model structure (K,R,Φ) can be converted to one of Dunn's model structures in two steps:
M = {p | Φ(p,w) = true}.
L = {p | (∀u:World)(R(w,u) ⊃ Φ(p,u) = true)}.
R(u,v) ≡ (∀p:Proposition)(p∈L_{u} ⊃ p∈M_{v}).
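The two-step conversion can be sketched in Python (a hypothetical encoding: worlds as strings, the valuation Φ as a predicate over proposition/world pairs). The functions `facts` and `laws` implement the two definitions above, and `derived_R` recovers accessibility as L_{u} ⊆ M_{v}:

```python
# Sketch of converting a Kripke structure (worlds, R, Phi) to Dunn
# pairs (M_w, L_w), then deriving accessibility from laws and facts.

def facts(w, props, phi):
    """M_w = {p | Phi(p, w) = true}."""
    return {p for p in props if phi(p, w)}

def laws(w, worlds, R, props, phi):
    """L_w = {p | for all u with R(w, u), Phi(p, u) = true}."""
    return {p for p in props
            if all(phi(p, u) for u in worlds if (w, u) in R)}

def derived_R(w, u, worlds, R, props, phi):
    """Accessibility recovered from the selection of laws: L_w ⊆ M_u."""
    return laws(w, worlds, R, props, phi) <= facts(u, props, phi)

# Toy Kripke structure with two worlds.
worlds = {"w0", "w1"}
R = {("w0", "w1"), ("w0", "w0")}
props = {"p", "q"}
truth = {("p", "w0"), ("p", "w1"), ("q", "w0")}
phi = lambda p, w: (p, w) in truth

# Every pair in the original R is recovered by the derived relation.
for (w, u) in sorted(R):
    print(w, u, derived_R(w, u, worlds, R, props, phi))
```

On this structure the laws of w0 are just {p}, since q fails in the accessible world w1; the derived relation may in general hold for additional pairs, which is why the conversion is a refinement rather than an identity.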
Every axiom and theorem of Kripke's theory remains true in Dunn's version, but Dunn's theory makes the reasons for modality available for further inferences. For theories of intentionality, Dunn's approach can relate the laws to the goals and purposes of some agent, who in effect legislates which propositions are to be considered laws. This approach formalizes an informal suggestion by Hughes and Cresswell (1968): "a world, w_{2}, is accessible to a world, w_{1}, if w_{2} is conceivable by someone living in w_{1}." In Dunn's terms, the laws that determine what is necessary in the world w_{1} are the propositions that are not conceivably false for someone living in w_{1}.
In first-order logic, laws and facts are propositions, and there is no special mark that distinguishes a law from a fact. To distinguish them, a context mechanism is necessary to separate first-order reasoning with the propositions from metalevel reasoning about the propositions and about the distinctions between laws and facts. Peirce (1880, 1885) invented the algebraic notation for predicate calculus, which with a change of symbols by Peano became today's most widely used notation for logic. A dozen years later, Peirce developed a graphical notation for logic that more clearly distinguishes contexts. Figure 1 shows his graph notation for delimiting the context of a proposition. In explaining that graph, Peirce (1898) said "When we wish to assert something about a proposition without asserting the proposition itself, we will enclose it in a lightly drawn oval." The line attached to the oval links it to a relation that makes a metalevel assertion about the nested proposition.
Figure 1: One of Peirce's graphs for talking about a proposition
The oval serves the syntactic function of grouping related information in a package. Besides notation, Peirce developed a theory of the semantics and pragmatics of contexts and the rules of inference for importing and exporting information into and out of contexts. To support first-order logic, the only metalevel relation required is negation. By combining negation with the existential-conjunctive subset of logic, Peirce developed his existential graphs (EGs), which are based on three logical operators and an open-ended number of relations:
Figure 2: EG and CG for "If a farmer owns a donkey, then he beats it."
To illustrate the use of negative contexts for representing FOL, Figure 2 shows an existential graph and a conceptual graph for the sentence If a farmer owns a donkey, then he beats it. This sentence is one of a series of examples used by medieval logicians to illustrate issues in mapping language to logic. The EG on the left has two ovals with no attached lines; by default, they represent negations. It also has two lines of identity, represented as linked bars: one line, which connects farmer to the left side of owns and beats, represents an existentially quantified variable (∃x); the other line, which connects donkey to the right side of owns and beats, represents another variable (∃y).
When the EG of Figure 2 is translated to predicate calculus, farmer and donkey map to monadic predicates; owns and beats map to dyadic predicates. If a relation is attached to more than one line of identity, the lines are ordered from left to right by their point of attachment to the name of the relation. With the implicit conjunctions represented by the ∧ symbol, the result is an untyped formula:
~(∃x)(∃y)(farmer(x) ∧ donkey(y) ∧ owns(x,y) ∧ ~beats(x,y)).
A nest of two ovals, as in Figure 2, is what Peirce called a scroll. It represents implication, since ~(p∧~q) is equivalent to p⊃q. Using the ⊃ symbol, the formula may be rewritten
(∀x)(∀y)((farmer(x) ∧ donkey(y) ∧ owns(x,y)) ⊃ beats(x,y)).
The CG on the right of Figure 2 may be considered a typed or sorted version of the EG. The boxes [Farmer] and [Donkey] represent a notation for sorted quantification (∃x:Farmer) and (∃y:Donkey). The ovals (Owns) and (Beats) represent relations, whose attached arcs link to the boxes that represent the arguments. The large boxes with the symbol ¬ in front correspond to Peirce's ovals that represent negation. As a result, the CG corresponds to the following formula, which uses sorted or restricted quantifiers:
(∀x:Farmer)(∀y:Donkey)(owns(x,y) ⊃ beats(x,y)).
The algebraic formulas with the ⊃ symbol illustrate a peculiar feature of predicate calculus: in order to keep the variables x and y within the scope of the quantifiers, the existential quantifiers for the phrases a farmer and a donkey must be moved to the front of the formula and be translated to universal quantifiers. This puzzling feature of logic has been a matter of debate among linguists and logicians since the middle ages.
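For a finite model, the sorted formula can be checked by brute-force enumeration. This Python sketch (illustrative data, not from the article) evaluates the donkey formula over a tiny domain with two farmers and one donkey:

```python
from itertools import product

# Sketch: model-checking the sorted donkey formula
#   (forall x:Farmer)(forall y:Donkey)(owns(x,y) -> beats(x,y))
# by enumerating all farmer/donkey pairs in a toy model.

farmers = {"f1", "f2"}
donkeys = {"d1"}
owns = {("f1", "d1")}
beats = {("f1", "d1")}

# An implication p -> q is evaluated as (not p) or q.
holds = all(((x, y) not in owns) or ((x, y) in beats)
            for x, y in product(farmers, donkeys))
print(holds)  # prints True: the one owning farmer does beat his donkey
```

Since f2 owns no donkey, the implication holds vacuously for that pair, mirroring the behavior of the universally quantified translation.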
The nested graph models defined in Section 5 are based on the CG formalism, but with one restriction: every graph must be wholly contained within a single context. The relation (Beats) in Figure 2 could not be linked to concept boxes outside its own context. To support that restriction, Figure 3 shows an equivalent CG in which concept boxes in different contexts are connected by dotted lines called coreference links, which indicate that the two concepts refer to exactly the same individual. A set of boxes connected by coreference links corresponds to what Peirce called a line of identity.
Figure 3: A conceptual graph with coreference links
The symbol ⊤, which is a synonym for the type Entity, represents the most general type, which is true of everything. Therefore, concepts of the form [⊤] correspond to an unrestricted quantifier, such as (∃z). The dotted lines correspond to equations of the form x=z. Figure 3 is therefore equivalent to the following formula:
(∀x:Farmer)(∀y:Donkey)(owns(x,y) ⊃ (∃z)(∃w)(beats(z,w) ∧ x=z ∧ y=w)).
By the rules of inference for predicate calculus, this formula is provably equivalent to the previous one.
Besides attaching a relation to an oval, Peirce also used colors or tinctures to distinguish contexts other than negation. Figure 4 shows one of his examples with red (or shading) to indicate possibility. The graph contains four ovals: the outer two form a scroll for if-then; the inner two represent possibility (shading) and impossibility (shading inside a negation). The outer oval may be read If there exist a person, a horse, and water; the next oval may be read then it is possible for the person to lead the horse to the water and not possible for the person to make the horse drink the water.
Figure 4: EG for "You can lead a horse to water, but you can't make him drink."
The notation "—leads—to—" represents a triad or triadic relation leadsTo(x,y,z), and "—makes—drink—" represents makesDrink(x,y,z). In the algebraic notation with the symbol ◊ for possibility, Figure 4 maps to the following formula:
~(∃x)(∃y)(∃z)(person(x) ∧ horse(y) ∧ water(z) ∧ ~(◊leadsTo(x,y,z) ∧ ~◊makesDrink(x,y,z))).
With the symbol ⊃ for implication, this formula becomes
(∀x)(∀y)(∀z)((person(x) ∧ horse(y) ∧ water(z)) ⊃ (◊leadsTo(x,y,z) ∧ ~◊makesDrink(x,y,z))).
This version may be read For all x, y, and z, if x is a person, y is a horse, and z is water, then it is possible for x to lead y to z, and not possible for x to make y drink z. These readings, although logically explicit, are not as succinct as the proverb You can lead a horse to water, but you can't make him drink.
Discourse representation theory. The logician Hans Kamp once spent a summer translating English sentences from a scientific article to predicate calculus. During the course of his work, he was troubled by the same kinds of irregularities that puzzled the medieval logicians. In order to simplify the mapping from language to logic, Kamp (1981) developed discourse representation structures (DRSs) with an explicit notation for contexts. In terms of those structures, Kamp defined the rules of discourse representation theory for mapping quantifiers, determiners, and pronouns from language to logic (Kamp & Reyle 1993).
Although Kamp had not been aware of Peirce's existential graphs, his DRSs are structurally equivalent to Peirce's EGs. The diagram on the right of Figure 5 is a DRS for the donkey sentence, If there exist a farmer x and a donkey y and x owns y, then x beats y. The two boxes connected by an arrow represent an implication where the antecedent includes the consequent within its scope.
Figure 5: EG and DRS for "If a farmer owns a donkey, then he beats it."
The DRS and EG notations look quite different, but they are exactly isomorphic: they have the same primitives, the same scoping rules for variables or lines of identity, and the same translation to predicate calculus. Therefore, the EG and DRS notations map to the same formula:
~(∃x)(∃y)(farmer(x) ∧ donkey(y) ∧ owns(x,y) ∧ ~beats(x,y)).
Peirce's motivation for the EG contexts was to simplify the logical structures and rules of inference. Kamp's motivation for the DRS contexts was to simplify the mapping from language to logic. Remarkably, they converged on isomorphic representations. Therefore, Peirce's rules of inference and Kamp's discourse rules apply equally well to contexts in the EG, CG, or DRS notations. For notations with a different structure, such as predicate calculus, those rules cannot be applied without major modifications.
McCarthy's contexts. In his "Notes on Formalizing Context," McCarthy (1993) introduced the predicate ist(C,p), which may be read "the proposition p is true in context C." For clarity, it will be spelled out in the form isTrueIn(C, p). As illustrations, McCarthy gave the following examples:
One of McCarthy's reasons for developing a theory of context was his uneasiness with the proliferation of new logics for every kind of modal, temporal, epistemic, and nonmonotonic reasoning. The ever-growing number of modes presented in AI journals and conferences is a throwback to the scholastic logicians who went beyond Aristotle's two modes necessary and possible to permissible, obligatory, doubtful, clear, generally known, heretical, said by the ancients, or written in Holy Scriptures. The medieval logicians spent so much time talking about modes that they were nicknamed the modistae. The modern logicians have axiomatized their modes and developed semantic models to support them, but each theory includes only one or two of the many modes. McCarthy (1977) observed,
For AI purposes, we would need all the above modal operators in the same system. This would make the semantic discussion of the resulting modal logic extremely complex.
Instead of an open-ended number of modes, McCarthy hoped to develop a simple but universal mechanism that would replace modal logic with first-order logic supplemented with metalanguage about contexts. That approach can be adapted to Dunn's semantics by adding another predicate isLawOf(C,p), which states that proposition p is a law of context C. Then Dunn's laws and facts can be defined in terms of McCarthy's contexts:
M = {p | isTrueIn(C,p)}.
L = {p | isLawOf(C,p)}.
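A context carrying both predicates can be sketched in a few lines of Python (the class and helper names are hypothetical, chosen to mirror isTrueIn and isLawOf):

```python
# Sketch (hypothetical encoding): a context records which propositions
# are true in it and which of those are its laws, so that Dunn's pair
# (M, L) can be read off with McCarthy's two predicates.

class Context:
    def __init__(self, facts, laws):
        assert laws <= facts, "every law must also be a fact"
        self.facts = set(facts)
        self.laws = set(laws)

def isTrueIn(C, p):
    return p in C.facts

def isLawOf(C, p):
    return p in C.laws

C = Context(facts={"gravity", "rain"}, laws={"gravity"})
M = {p for p in C.facts if isTrueIn(C, p)}  # Dunn's model set M
L = {p for p in C.facts if isLawOf(C, p)}   # Dunn's laws L
print(sorted(M), sorted(L))
```

The assertion in the constructor enforces the requirement that the laws L be a subset of the facts M.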
The semantics for multiple levels of nested contexts is based on the method of stratified metalevels by Tarski (1933). Each context in a nest is treated as a metalevel with respect to every context nested within it. The propositions in some context that has no nested levels beneath it may be considered as an object language L_{0}, which refers to entities in some universe of discourse D. The metalanguage L_{1} refers to the symbols of L_{0} and their relationships to D. Tarski showed that the metalanguage is still first order, but its universe of discourse is enlarged from D to L_{0}∪D. The metametalanguage L_{2} is also first order, but its universe of discourse is L_{1}∪L_{0}∪D. To avoid paradoxes, Tarski insisted that no metalanguage L_{n} could refer to its own symbols, but it could refer to the symbols or individuals in the domain of any language L_{i} where 0≤i<n.
In short, metalevel reasoning is first-order reasoning about the way statements may be sorted into contexts. After the sorting has been done, reasoning with the propositions in a context can be handled by the usual FOL rules. At every level of the Tarski hierarchy of metalanguages, the reasoning process is governed by first-order rules. But first-order reasoning in language L_{n} has the effect of higher-order or modal reasoning for every language below n. At every level n, the model theory that justifies the reasoning in L_{n} is a conventional first-order Tarskian semantics, since the nature of the objects in the domain D_{n} is irrelevant to the rules that apply to L_{n}.
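The growth of the domains up the hierarchy can be sketched concretely (illustrative data; the function name is hypothetical):

```python
# Sketch of Tarski's stratified domains: D_0 is the original universe
# of discourse; each metalanguage L_n has the enlarged domain
# D_n = D_{n-1} ∪ symbols(L_{n-1}), so every level can refer to all
# levels below it but never to its own symbols.

def domains(D, symbol_sets):
    """symbol_sets[n] holds the symbols of language L_n."""
    out = [set(D)]
    for syms in symbol_sets:
        out.append(out[-1] | set(syms))
    return out

levels = domains({"alice", "bob"}, [{"P", "Q"}, {"isTrueIn"}])
for n, d in enumerate(levels):
    print(n, sorted(d))
```

Each level is an ordinary first-order domain; only its membership grows, which is why no higher-order machinery is needed.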
Example. To illustrate the interplay of the metalevel and object-level inferences, consider the following statement, which includes direct quotation, indirect quotation, indexical pronouns, and metalanguage about belief:
Joe said "I don't believe in astrology, but everybody knows that it works even if you don't believe in it."
This statement could be translated word-for-word to a conceptual graph in which the indexicals are represented by the symbols #I, #they, #it, and #you. Then the resolution of the indexicals could be performed by metalevel transformations of the graph. Those transformations could also be written in stylized English:
Joe said [#I don't believe [in astrology] but everybody knows [[#it works] even if #you don't believe [in #it]]].
Joe said [Joe doesn't believe [astrology works] but every person x knows [[astrology works] even if x doesn't believe [astrology works] ]].
Joe believes [Joe doesn't believe [astrology works] and every person x knows [astrology works] ].
Joe believes [Joe doesn't believe [astrology works] and Joe knows [astrology works] ].
Joe believes [Joe doesn't believe [astrology works] and Joe believes [astrology works] ].
This statement shows that Joe believes a contradiction of the form (~p ∧ p).
In the process of reasoning about Joe's beliefs, the context [astrology works] is treated as an encapsulated object, whose internal structure is ignored. When the levels interact, however, further axioms are necessary to relate them. Like the iterated modalities ◊◊p and □◊p, iterated beliefs occur in statements like Joe believes that Joe doesn't believe that astrology works. One reasonable axiom is that if an agent a believes that a believes p, then a believes p:
(∀a:Agent)(∀p:Proposition)(believe(a,believe(a,p)) ⊃ believe(a,p)).
This axiom enables two levels of nested contexts to be collapsed into one. The converse, however, is less likely: many people act as if they believe propositions that they are not willing to admit. Joe, for example, might read the astrology column in the daily newspaper and follow its advice. His actions could be considered evidence that he believes in astrology. Yet when asked, Joe might continue to insist that he doesn't believe in astrology.
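The collapse axiom is a one-way rewrite rule on nested contexts, which can be sketched in Python (a hypothetical term encoding: beliefs as tuples of the form ("believe", agent, proposition)):

```python
# Sketch of the collapse axiom as a rewrite rule:
#   believe(a, believe(a, p)) -> believe(a, p)
# applied only when the inner and outer agents coincide.

def collapse(term):
    """Collapse iterated same-agent beliefs from the outside in."""
    if (isinstance(term, tuple) and term[0] == "believe"
            and isinstance(term[2], tuple) and term[2][0] == "believe"
            and term[1] == term[2][1]):
        return collapse(("believe", term[1], term[2][2]))
    return term

nested = ("believe", "Joe", ("believe", "Joe", "astrology works"))
print(collapse(nested))  # two levels collapse into one

mixed = ("believe", "Joe", ("believe", "Sue", "astrology works"))
print(collapse(mixed))   # different agents: no collapse
```

Because the rule is one-way, the sketch deliberately never expands believe(a,p) back into believe(a,believe(a,p)), matching the remark that the converse axiom is implausible.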
To prove that a syntactic notation for contexts is consistent, it is necessary to define a model-theoretic semantics for it. But to show that the model captures the intended interpretation, it is necessary to show how it represents the entities of interest in the application domain. For consistency, this section defines model structures called nested graph models (NGMs), which can serve as the denotation of logical expressions that contain nested contexts. Nested graph models are general enough to represent a variety of other model structures, including Tarski-style "flat" models, the possible worlds of Kripke and Montague, and other approaches discussed in this article. The mapping from those model structures to NGMs shows that NGMs are at least as suitable for capturing the intended interpretation. Dunn's semantics allows NGMs to do more: the option of representing metalevel information in any context enables statements in one context to talk about the laws and facts of nested contexts and about the intentions of agents who may have legislated the laws.
To illustrate the formal definitions, Figure 6 shows an informal example of an NGM. Every box or rectangle in Figure 6 represents an individual entity in the domain of discourse, and every circle represents a property (monadic predicate) or a relation (predicate or relation with two or more arguments) that is true of the individual(s) to which it is linked. The arrows on the arcs are synonyms for the integers used to label the arcs: for dyadic relations, an arrow pointing toward the circle represents the integer 1, and an arrow pointing away from the circle represents 2; relations with more than two arcs must supplement the arrows with integers. Some boxes contain nested graphs: they represent individuals that have parts or aspects, which are individual entities represented by the boxes in the nested graphs.
Figure 6: A nested graph model (NGM)
The four dotted lines in Figure 6 are coreference links, which represent three lines of identity. Two lines of identity contain only two boxes, which are the endpoints of a single coreference link. The third line of identity contains three boxes, which are connected by two coreference links. In general, a line of identity with n boxes may be shown by n−1 coreference links, each of which corresponds to an equation that asserts the equality of the referents of the boxes it connects. A coreference link may connect two boxes of the same NGM, or it may connect a box of an NGM G to a box of another NGM that is nested directly or indirectly in G. But a coreference link may never connect a box of an NGM G to a box of another NGM H, where neither G nor H is nested in the other. As Figure 6 illustrates, coreference links may go from an outer NGM to a more deeply nested NGM, but they may not connect boxes in two independently nested NGMs.
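The constraint on coreference links amounts to a prefix test on context chains. In this Python sketch (an illustrative encoding, not from the article), each box records the chain of contexts enclosing it, outermost first; a link is legal only when one chain is a prefix of the other:

```python
# Sketch: a coreference link may connect two boxes of the same NGM, or
# a box of G to a box nested (directly or indirectly) inside G, but
# never boxes of two independently nested NGMs. Encoding each box by
# its enclosing-context chain makes this a prefix check.

def legal_coreference(chain1, chain2):
    """True if one context chain is a prefix of the other."""
    shorter, longer = sorted((chain1, chain2), key=len)
    return longer[:len(shorter)] == shorter

outer = ("G",)            # a box in the outermost NGM G
nested = ("G", "w1")      # a box in an NGM nested in G
sibling = ("G", "w2")     # a box in a different NGM nested in G

print(legal_coreference(outer, nested))    # outer to nested: allowed
print(legal_coreference(nested, sibling))  # across siblings: forbidden
```

The sibling case is exactly the forbidden G-to-H link described above: neither w1 nor w2 is nested in the other.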
For convenience in relating the formalism to diagrams such as Figure 6, the components of a nested graph model (NGM) are called arcs, boxes, circles, labels, and lines of identity. Formally, however, an NGM is defined as a 5-tuple G=(A,B,C,L,I), consisting of five abstract sets whose properties are completely determined by the following definitions:
An NGM may contain any number of levels of nested NGMs, but no NGM may be nested within itself, either directly or indirectly. If an NGM has an infinite nesting depth, it could be isomorphic to another NGM nested in itself; but the nested copy is considered to be distinct from the outer NGM.
Mapping other models to NGMs. Nested graph models are set-theoretical structures that can serve as models for a wide variety of logical theories. They can be specialized in various ways to represent other model structures. Tarski-style models require no nesting, Kripke-style models require one level of nesting, and models for multiple modalities, which will be discussed in Sections 6 and 7, require deeper nesting.
For finite models, these steps can be translated to a computer program that constructs G from M and Ψ from Φ. For infinite models, they should be considered a specification rather than a construction. By this specification, D and R are subsets of L. Therefore, there would always be enough labels for the boxes and circles, even if D and R happen to be uncountably infinite.
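For the finite case, such a construction can be sketched in a few lines of Python (an illustrative encoding of boxes, circles, and arcs; not the article's formal definition):

```python
# Sketch: building a flat, nesting-free NGM from a Tarski-style model
# with individuals D and named relations R. Each individual becomes a
# labeled box, each relation instance becomes a circle, and numbered
# arcs link every circle to the boxes of its arguments.

def ngm_from_tarski(D, R):
    boxes = sorted(D)      # one labeled box per individual
    circles = []           # one circle per relation tuple, labeled by name
    arcs = []              # triples (circle index, arc number, box label)
    for name, tuples in sorted(R.items()):
        for t in tuples:
            c = len(circles)
            circles.append(name)
            for i, d in enumerate(t, start=1):
                arcs.append((c, i, d))
    return {"B": boxes, "C": circles, "A": arcs}

G = ngm_from_tarski({"a", "b"}, {"loves": [("a", "b")]})
print(G)
```

The arc numbers play the role of the arrows in Figure 6: arc 1 points toward the circle and arc 2 away from it for a dyadic relation.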
Figure 7: An NGM that represents a Kripke model structure
The box labeled w_{0} represents the real world, and the boxes labeled w_{1} to w_{4} represent worlds that are accessible from the real world. The circles labeled R represent instances of the accessibility relation, and the arrows show which worlds are accessible from any other. Formally, the following construction defines an isomorphism from any Kripke model structure M to an NGM G=(A,B,C,L,I):
Figure 8: An NGM with counterparts in multiple worlds
At the top of Figure 8, two individuals, represented by boxes labeled a and b, are connected by coreference links to some of the boxes of the nested graphs. The box labeled w_{0} represents a Tarski-style model for the real world, in which two individuals are marked as identical to a and b by coreference links. The box labeled w_{1} represents some possible world in which two individuals are marked as counterparts for a and b by coreference links, and w_{2} represents another possible world in which only one individual has a counterpart for b. The two coreference links attached to box a represent a line of identity that contains three boxes, and the three coreference links attached to box b represent a line of identity that contains four boxes.
An NGM for quantified modal logic, G=(A,B,C,L,I), can be constructed by starting with the first three steps for an NGM for a Kripke-style model and continuing with the following:
i = {b} ∪ {b_{w} | w is a world that has a counterpart of x}.
As an example, Figure 8 might represent an encounter between a mouse a and a cat b. At time t = 0, the snapshot w_{0} represents a model of an event in which the cat b catches the mouse a. In w_{1}, b eats a. In w_{2}, b is licking his lips, but a no longer exists. The cat b has a counterpart in all three snapshots, but the mouse a exists in just the first two. The boxes in the snapshots that have no links to boxes outside their snapshot represent entities such as actions or aspects of actions, which exist only for the duration of one snapshot.
Figure 8 illustrates a version of temporal logic in which the snapshots are linearly ordered. An NGM could also represent branching time, in which the snapshots for the future lie on multiple branches, each of which represents a different option that the cat or the mouse might choose. Branching models are especially useful for game-playing programs that analyze options many steps into the future.
□(∀x)P(x) ≡ (∀x)□P(x).
In terms of Kripke-style models or NGMs, the Barcan constraint implies that all worlds accessible from a given world must have exactly the same individuals. To enforce that constraint, a Barcan NGM can be defined as an NGM G=(A,B,C,L,I) for quantified modal logic whose boxes B are partitioned into equivalence classes E with the following properties:
i = {b} ∪ {d | d is a box of some NGM nested in a box of B_{2} for which f(d)=b}.
In short, each equivalence class contains a set of individual boxes B_{1} and a set of world boxes B_{2}. Each world box contains a nested NGM whose boxes are in a one-to-one correspondence with the individual boxes of B_{1}. Each individual box has a coreference link to the corresponding box of each NGM nested in a world box of B_{2}.
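The required one-to-one correspondence can be checked mechanically. In this Python sketch (a hypothetical encoding), each world is represented by a map from its nested boxes to the individual boxes they corefer with; the Barcan constraint holds when every such map is a bijection onto the full set of individual boxes:

```python
# Sketch of the Barcan constraint: every world must contain exactly the
# same individuals. Each world is a dict mapping its nested boxes to
# the individual boxes of B1 they corefer with.

def barcan_ok(individuals, worlds):
    return all(set(f.values()) == set(individuals)  # covers all individuals
               and len(f) == len(individuals)       # exactly one box each
               for f in worlds)

inds = {"a", "b"}
good = [{"x1": "a", "x2": "b"}, {"y1": "a", "y2": "b"}]
bad = [{"x1": "a", "x2": "b"}, {"y1": "b"}]  # second world has no box for a

print(barcan_ok(inds, good), barcan_ok(inds, bad))
```

The failing case corresponds to the cat-and-mouse example above: once the mouse a lacks a counterpart in some snapshot, the model violates the Barcan constraint.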
This discussion shows how various Kripke-style models can be converted to isomorphic NGMs. That conversion enables different kinds of model structures to be compared within a common framework. The next two sections of this paper show that NGMs combined with Dunn's semantics can represent a wider range of semantic structures and methods of reasoning.
As the examples in Section 5 show, nested graph models can represent the equivalent of Kripke models for a wide range of logics. But Kripke models, which use only a single level of nesting, do not take full advantage of the representational options of NGMs. The possibility of multiple levels of nesting makes NGMs significantly more expressive than Kripke's model structures, but questions arise about what they actually express. In criticizing Kripke's models, Quine (1972) noted that models can be used to prove that certain axioms are consistent, but they don't explain the intended meaning of those axioms:
The notion of possible world did indeed contribute to the semantics of modal logic, and it behooves us to recognize the nature of its contribution: it led to Kripke's precocious and significant theory of models of modal logic. Models afford consistency proofs; also they have heuristic value; but they do not constitute explication. Models, however clear they be in themselves, may leave us at a loss for the primary, intended interpretation.

Quine's criticisms apply with equal or greater force to NGMs. Although the metaphor of possible worlds raises serious ontological questions, it lends some aura of meaningfulness to the entities that make up the models. As purely set theoretical constructions, NGMs dispense with the dubious ontology of possible worlds, but their networks of boxes and circles have even less intuitive meaning.
To illustrate the issues, Figure 9 shows a conceptual graph with two levels of nesting to represent the sentence Tom believes that Mary wants to marry a sailor. The type labels of the contexts indicate how the nested CGs are interpreted: what Tom believes is a proposition stated by the CG nested in the context of type Proposition; what Mary wants is a situation described by the proposition stated by the CG nested in the context of type Situation. Relations of type (Expr) show that Tom and Mary are the experiencers of states of believing or wanting, and relations of type (Thme) show that the themes of those states are propositions or situations.
Figure 9: A conceptual graph with two nested contexts
When a CG is in the outermost context or when it is nested in a concept of type Proposition, it states a proposition. When a CG is nested inside a concept of type Situation, the stated proposition describes the situation. When a context is translated to predicate calculus, the result depends on the type label of the context. In the following translation, the first line represents the subgraph outside the nested contexts, the second line represents the subgraph for Tom's belief, and the third line represents the subgraph for Mary's desire:
(∃a:Person)(∃b:Believe)(name(a,'Tom') ∧ expr(a,b) ∧ thme(b,
  (∃c:Person)(∃d:Want)(∃e:Situation)(name(c,'Mary') ∧ expr(d,c) ∧ thme(d,e) ∧ dscr(e,
    (∃f:Marry)(∃g:Sailor)(agnt(f,c) ∧ thme(f,g))))))

If a CG is outside any context, the default translation treats it as a statement of a proposition. Therefore, the part of Figure 9 inside the context of type Proposition is translated in the same way as the part outside that context. For the part nested inside the context of type Situation, the description predicate dscr relates the situation e to the statement of the proposition.
As the translation to predicate calculus illustrates, the nested CG contexts map to formulas that are nested as arguments of predicates, such as thme or dscr. Such graphs or formulas can be treated as examples of Tarski's stratified metalevels, in which a proposition expressed in the outer context can make a statement about a proposition in the nested context, which may in turn make a statement about another proposition nested even more deeply. A nested graph model for such propositions would have the same kind of nested structure.
To show how the denotation of the CG in Figure 9 (or its translation to predicate calculus) is evaluated, consider the NGM in Figure 10, which represents some aspect of the world, including some of Tom's beliefs. The outermost context of Figure 10 represents some information known to an outside observer who uttered the original sentence Tom believes that Mary wants to marry a sailor. The context labeled #4 contains some of Tom's beliefs, including his mistaken belief that person #5 is named Jane, even though #5 is coreferent with person #3, who is known to the outside observer as Mary. The evaluation of Figure 9 in terms of Figure 10 is based on the method of outside-in evaluation, which Peirce (1909) called endoporeutic.
Figure 10: An NGM for which Figure 9 has denotation true
Syntactically, Figure 10 is a well formed CG, but it is limited to a more primitive subset of features than Figure 9. Before the denotation of Figure 9 can be evaluated in terms of Figure 10, each concept node of the CG must be replaced by a subgraph that uses the same features. The concept [Person: Tom], for example, may be considered an abbreviation for a CG that uses only the primitive features:
(Person)—[∃]→(Name)→["Tom"]—(Word).

This graph says that there exists something [∃] for which the monadic predicate (Person) is true, and it has as name the character string "Tom", for which the monadic predicate (Word) is true. This graph has denotation true in terms of Figure 10 because every part of it is either identical to or implied by a matching part of Figure 10; the only part that is not identical is the existential quantifier ∃, which is implied by the constant #1. In general, a conceptual graph g with no nested contexts is true in terms of a flat model m if and only if there exists a projection of g into m (Sowa 1984), where a projection is defined as a mapping from g into some subgraph of m for which every node of g is either identical to or a generalization of the corresponding node of m.
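The definition of projection can be illustrated with a brute-force sketch. This fragment is not the algorithm of Sowa (1984); the encoding of a graph as labeled nodes plus edge triples, the toy type hierarchy, and all names (`generalizes`, `has_projection`) are assumptions made for this example.

```python
from itertools import product

# Illustrative sketch of projection.  A graph is a pair (labels, edges):
# labels maps each node to a type label; edges is a set of
# (relation, source, target) triples.  generalizes(x, y) holds when label
# x is identical to or a generalization of label y; the existential
# quantifier 'E' generalizes every label.
hierarchy = {("Entity", "Person"), ("Entity", "Sailor")}

def generalizes(x, y):
    return x == y or x == "E" or (x, y) in hierarchy

def has_projection(g, m):
    g_labels, g_edges = g
    m_labels, m_edges = m
    g_nodes = list(g_labels)
    # Try every mapping from nodes of g to nodes of m (brute force).
    for image in product(m_labels, repeat=len(g_nodes)):
        f = dict(zip(g_nodes, image))
        if all(generalizes(g_labels[n], m_labels[f[n]]) for n in g_nodes) and \
           all((r, f[s], f[t]) in m_edges for (r, s, t) in g_edges):
            return True
    return False

# g: something existential with the name "Tom"; m: individual #1 named "Tom".
g = ({"x": "E", "n": "Tom"}, {("name", "x", "n")})
m = ({"#1": "Person", "w1": "Tom"}, {("name", "#1", "w1")})
print(has_projection(g, m))  # True
```

The quantifier node of g maps to the constant #1 of m because ∃ generalizes every individual, exactly as in the evaluation of the graph above.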
For nested CGs, projections are used to evaluate the denotations of subgraphs in each context, but more information must be considered: the nesting structure, the types of contexts, and the relations attached to the contexts. Figures 9 and 10 illustrate an important special case in which there are no negations, the nesting structure is the same, and the corresponding contexts have the same types and attached relations. For that case, the denotation is true if the subgraph of Figure 9 in each context has a projection into the corresponding subgraph of Figure 10. The evaluation starts from the outside and moves inward:
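For this negation-free special case, the outside-in evaluation can be sketched in a few lines of Python. The encoding of a context as a (type, facts, children) triple and the subset test standing in for full projection are simplifying assumptions for this illustration, not the article's definitions.

```python
# Illustrative sketch of outside-in evaluation for the negation-free case.
# A context is a triple (type, facts, children): its type label, a set of
# atomic assertions, and a list of nested contexts.  The graph denotes
# true when, at every level, its facts are among the model's facts and
# corresponding nested contexts match recursively.
def evaluate(graph, model):
    g_type, g_facts, g_children = graph
    m_type, m_facts, m_children = model
    if g_type != m_type or not g_facts <= m_facts:
        return False
    if len(g_children) != len(m_children):
        return False
    return all(evaluate(gc, mc) for gc, mc in zip(g_children, m_children))

belief = ("Proposition", {("wants", "Mary")}, [])
graph  = ("Outer", {("believes", "Tom")}, [belief])
model  = ("Outer", {("believes", "Tom"), ("age", "Tom")},
          [("Proposition", {("wants", "Mary"), ("name", "Mary")}, [])])
print(evaluate(graph, model))  # True
```

The recursion mirrors Peirce's endoporeutic method: the outermost context is checked first, and the evaluation moves inward one level of nesting at a time.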
By supporting multiple levels of nesting, NGMs can represent structures that are significantly richer than Kripke models. But the intended meaning of those structures and the methods for evaluating denotations raise seven key questions:
Since Peirce developed endoporeutic about thirty years before Tarski, he never related it to Tarski's approach. But he did relate it to the detailed model-theoretic analyses of medieval logicians such as Ockham (1323). Peirce (1885) used model-theoretic arguments to justify the rules of inference for his algebraic notation for predicate calculus. For existential graphs, Peirce (1909) defined endoporeutic as an evaluation method that is logically equivalent to Tarski's. That equivalence was not recognized until Hilpinen (1982) showed that Peirce's endoporeutic could be viewed as a version of game-theoretical semantics by Hintikka (1973). Sowa (1984) used a game-theoretical method to define the model theory for the first-order subset of conceptual graphs. For an introductory textbook on model theory, Barwise and Etchemendy (1993) adopted game-theoretical semantics because it is easier to explain than Tarski's original method. For evaluating NGMs, it is especially convenient because it can accommodate various extensions, such as import conditions and discourse constraints, while the evaluation progresses from one level of nesting to the next (Hintikka & Kulas 1985).
The flexibility of game-theoretical semantics allows it to accommodate the insights and mechanisms of dynamic semantics, which uses discourse information while determining the semantics of NL sentences (Karttunen 1976; Heim 1982; Groenendijk & Stokhof 1991). Veltman (1996) characterized dynamic semantics by the slogan "You know the meaning of a sentence if you know the change it brings about in the information state of anyone who accepts the news conveyed by it." Dynamic semantics is complementary to Hintikka's game-theoretical semantics and Peirce's endoporeutic.
Although NGMs can accommodate many kinds of relationships that Tarski and Kripke never considered, they remain within the framework of first-order semantics. In principle, any NGM can be translated to a flat NGM, which can be used to evaluate denotations by Tarski's original approach. As an example, Figure 11 shows a flattened version of Figure 10. In order to preserve information about the nesting structure, the method of flattening attaches an extra argument to show the context of each circle and links each box to its containing context by a relation of type IsIn. Coreference links in the NGM are replaced by a three-argument equality relation (EQ), in which the third argument shows the context in which two individuals are considered to be equal.
Figure 11: A flattened version of Figure 10
The conversion from Figure 10 to Figure 11 is similar to the translation from the CG notation with nested contexts to Shapiro's SNePS notation, in which nested contexts are replaced by propositional nodes to which the relations are attached. Both notations are capable of expressing logically equivalent information. Formally, any NGM G=(A,B,C,L,I) can be converted to a flat NGM F=(FA,FB,FC,FL,FI) by the following construction:
The method used to map a nested graph model to a flat model can be generalized to a method for translating a formalism with nested contexts, such as conceptual graphs, to a formalism with propositional nodes but no nesting, such as SNePS. In effect, the nesting is an explicit representation of Tarski's stratified metalevels, in which higher levels are able to state propositions about both the syntax and semantics of propositions stated at any lower level. When two or more levels are flattened to a single level, additional arguments must be added to the relations in order to indicate which level they came from. The process of flattening demonstrates how a purely first-order model theory is supported: propositions are represented by nodes that represent individual entities of type Proposition. The flattened models correspond to a Tarski-style model, and the flattened languages are first-order logics, whose denotations can be evaluated by a Tarski-style method.
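The core of the flattening method can be sketched in a few lines. This fragment is an illustration under assumed data structures, not the formal construction of the NGM F: each nested context becomes an identifier, every relation gets an extra argument naming its context, and a relation of type isIn records the nesting (the names `flatten`, `isIn`, and `top` are chosen for this example).

```python
# Illustrative sketch of flattening a nested structure into flat triples.
# relations: list of (name, args) pairs asserted directly in this context;
# children:  list of (child_id, child_relations, child_children) triples.
def flatten(context_id, relations, children, out=None):
    if out is None:
        out = []
    for name, args in relations:
        out.append((name, args + (context_id,)))       # extra context argument
    for child_id, child_rels, child_kids in children:
        out.append(("isIn", (child_id, context_id)))   # record the nesting
        flatten(child_id, child_rels, child_kids, out)
    return out

nested = flatten("top",
                 [("believes", ("Tom", "p1"))],
                 [("p1", [("wants", ("Mary", "s1"))], [])])
for fact in nested:
    print(fact)
```

Every relation in the flat result carries its context as a final argument, so a Tarski-style evaluation over the flat triples can recover exactly the information that the nesting expressed.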
Although nested contexts do not increase the theoretical complexity beyond first-order logic, they simplify the language by eliminating the extra arguments needed to distinguish contexts in a flat model. The contexts also separate the metalevel propositions about a context from the object-level propositions within a context. That separation facilitates the introduction of Dunn's semantics into the language:
The import rules for copying information compensate for the possibly incomplete information in a context. To use the terms of Reiter (1978), a context represents an open world, in contrast to Hintikka's maximally consistent model sets, which represent closed worlds. Computationally, the infinite model sets contain far too much information to be comprehended or manipulated in any useful way. A context is a finite excerpt from a model set in the same sense that a situation is a finite excerpt from a possible world. Figure 12 shows mappings from a Kripke possible world w to a description of w as a Hintikka model set M or a finite excerpt from w as a Barwise and Perry situation s. Then M and s may be mapped to a McCarthy context C.
Figure 12: Ways of mapping possible worlds to contexts
From a possible world w, the mapping to the right extracts an excerpt as a situation s, which may be described by the propositions in a context C. From the same world w, the downward mapping leads to a description of w as a model set M, from which an equivalent excerpt would produce the same context C. The symbol |= represents semantic entailment: w entails M, and s entails C. The ultimate justification for the import rules is the preservation of the truth conditions that make Figure 12 a commutative diagram: the alternate routes through the diagram must lead to logically equivalent results.
The combined mappings in Figure 12 replace the mysterious possible worlds with finite, computable contexts. Hintikka's model sets support operations on well-defined symbols instead of imaginary worlds, but they may still be infinite. Situations are finite, but like worlds they consist of physical or fictitious objects that are not computable. The contexts in the lower right of Figure 12 are the only things that can be represented and manipulated in a digital computer. Any theory of semantics that is stated in terms of possible worlds, model sets, or situations must ultimately be mapped to a theory of contexts in order to be computable.
The discussion so far has addressed the first four of the seven key questions raised above. The next section addresses the last three questions, which involve the kinds of verbs that express mental attitudes, the ontological status of the entities they represent, the roles of the agents who have those attitudes, and the methods of reasoning about those attitudes.
Models and worlds have been interpreted in many different ways by people who have formulated theories about them. Some have used models as surrogates for worlds, but Lewis, among others, criticized such "ersatz worlds" as inadequate. In a paper that acknowledged conversations with Lewis, Montague (1967) explained why he objected to "the identification of possible worlds with models":
...two possible worlds may differ even though they may be indistinguishable in all respects expressible in a given language (even by open formulas). For instance, if the language refers only to physical predicates, then we may consider two possible worlds, consisting of exactly the same persons and physical objects, all of which have exactly the same physical properties and stand in exactly the same physical relations; then the two corresponding models for our physical language will be identical. But the two possible worlds may still differ, for example, in that in one everyone believes the proposition that snow is white, while in the other someone does not believe it.... This point might seem unimportant, but it looms large in any attempt to treat belief as a relation between persons and propositions.

Montague's objection does not hold for the NGM illustrated in Figure 10, which includes entity #2 of type Believe and entity #6 of type Want. Such a model can explicitly represent a situation in which one person believes a proposition and another doesn't. But the last sentence by Montague indicates the crux of the problem: his models did not include entities of type Believe. Instead, he hoped to "treat belief as a [dyadic] relation between persons and propositions."
In that same paper, Montague outlined his method for reducing "four types of entities — experiences, events, tasks, obligations — to [dyadic] predicates." But he used those predicates in statements governed by modal operators such as obligatory:
Obligations can probably best be regarded as the same sort of things as tasks and experiences, that is, as relations-in-intension between persons and moments; for instance, the obligation to give Smith a horse can be identified with the predicate expressed by 'x gives Smith a horse at t'. We should scrutinize, in this context also, the notion of partaking of a predicate. Notice that if R is an obligation, to say that x bears the relation-in-intension R to t is not to say that x has the obligation R at t, but rather that x discharges or fulfills the obligation R at t. But how could we say that x has at t the obligation R? This would amount to the assertion that it is obligatory at t that x bear the relation-in-intension R to some moment equal to or subsequent to t.

All of Montague's paraphrases are attempts to avoid saying or implying that there exist entities of type Obligation. To avoid that implication, he required any sentence with the noun obligation to be paraphrased by a sentence with the modal operator obligatory:
Peirce had a much simpler and more realistic theory. For him, thoughts, beliefs, and obligations are signs. The types of signs are independent of any mind or brain, but the particular instances — or tokens as he called them — exist in the brains of individual people, not in an undefined accessibility relation between imaginary worlds. Those people can give evidence of their internal signs by using external signs, such as sentences, contracts, and handshakes. In his definition of sign, Peirce (1902) emphasized its independence of any implementation in proteins or silicon:
I define a sign as something, A, which brings something, B, its interpretant, into the same sort of correspondence with something, C, its object, as that in which itself stands to C. In this definition I make no more reference to anything like the human mind than I do when I define a line as the place within which a particle lies during a lapse of time. (p. 235)

In terms of Dunn's semantics, an obligation is a proposition used as a law that determines a certain kind of behavior. If Jones has an obligation to give Smith a horse, there exists some sign of that proposition — a contract on paper, sound waves in air, or some neural excitation in a brain. The semantics of the sign is independent of the medium, but critically dependent on the triadic relation, which adds an interpretant B to the dyad of sign A and object C. The interpretant is another sign, which is essential for determining the modality of how A relates to B.
In 1906, Peirce introduced colors into his existential graphs to distinguish various kinds of modality and intentionality. Figure 4, for example, used red to represent possibility in the EG for the sentence You can lead a horse to water, but you can't make him drink. To distinguish the actual, modal, and intentional contexts illustrated in Figure 8, three kinds of colors would be needed. Conveniently, the heraldic tinctures, which were used to paint coats of arms in the middle ages, were grouped in three classes: metal, color, and fur. Peirce adopted them for his three kinds of contexts, each of which corresponded to one of his three categories: Firstness (independent conception), Secondness (relative conception), and Thirdness (mediating conception).
Throughout his analyses, Peirce distinguished the logical operators, such as ∧, ~, and ∃, from the tinctures, which, he said, do not represent
...differences of the predicates, or significations of the graphs, but of the predetermined objects to which the graphs are intended to refer. Consequently, the Iconic idea of the System requires that they should be represented, not by differentiations of the Graphs themselves but by appropriate visible characters of the surfaces upon which the Graphs are marked.

In effect, Peirce did not consider the tinctures to be part of logic itself, but of the metalanguage for describing how logic applies to the universe of discourse:
The nature of the universe or universes of discourse (for several may be referred to in a single assertion) in the rather unusual cases in which such precision is required, is denoted either by using modifications of the heraldic tinctures, marked in something like the usual manner in pale ink upon the surface, or by scribing the graphs in colored inks.

Peirce's later writings are fragmentary, incomplete, and mostly unpublished, but they are no more fragmentary and incomplete than most modern publications about contexts. In fact, Peirce was more consistent in distinguishing the syntax (oval enclosures), the semantics ("the universe or universes of discourse"), and the pragmatics (the tinctures that "denote" the "nature" of those universes).
Classifying contexts. Reasoning about modality requires a classification of the types of contexts, their relationships to one another, and the identification of certain propositions in a context as laws or facts. Any of the tinctured contexts may be nested inside or outside the ovals representing negation. When combined with negation in all possible ways, each tincture can represent a family of related modalities:
Multimodal reasoning. As the multiple axioms for modal logic indicate, there is no single version that is adequate for all applications. The complexities increase when different interpretations of modality are mixed, as in Peirce's five versions of possibility, which could be represented by colors or by subscripts, such as ◊_{1}, ◊_{2}, ..., ◊_{5}. Each of those modalities is derived from a different set of laws, which interact in various ways with the other laws:
□_{3}◊_{1}p ⊃ ◊_{1}p.
By introducing contexts, McCarthy hoped to reduce the proliferation of modalities to a single mechanism of metalevel reasoning about the propositions that are true in a context. By supporting a more detailed representation than the operators ◊ and □, the dyadic entailment relation and the triadic legislation relation support metalevel reasoning about the laws, facts, and their implications. Following are some implications of Peirce's five kinds of possibility:
{} = {p:Proposition | (∀a:Agent)(∀x:Entity)legislate(a,p,x)}.

The empty set is the set of all propositions p that every agent a legislates as a law for every entity x.
SubjectiveLaws(a) = {p:Proposition | know(a,p)}.

That principle of subjective possibility can be stated in the following axiom:
(∀a:Agent)(∀p:Proposition)(∀x:Entity) (legislate(a,p,x) ≡ know(a, x|=p)).

For any agent a, proposition p, and entity x, the agent a legislates p as a law for x if and only if a knows that x entails p.
LawsOfNature = {p:Proposition | (∀x:Entity)legislate(God,p,x)}.

If God is assumed to be omniscient, this set is the same as everything God knows or SubjectiveLaws(God). What is subjective for God is objective for everyone else.
CommonKnowledge(a,b) = SubjectiveLaws(a) ∩ SubjectiveLaws(b).
Obligatory(x) = {p:Proposition | (∃a:Agent)(authority(a,x) ∧ legislate(a,p,x))}.

This interpretation, which defines deontic logic, makes it a weak version of modal logic since consistency is weaker than truth. The usual modal axioms □p⊃p and p⊃◊p do not hold for deontic logic, since people can and do violate laws.
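The way Dunn's semantics derives the accessibility relation from law sets like these can be sketched directly. The following Python fragment is an illustration under assumed encodings, not Dunn's formal definitions: each world is a pair (laws, facts) with the laws among the facts, and a world u accesses a world w when every law of u is a fact of w.

```python
# Illustrative sketch of Dunn's laws-and-facts semantics.  Accessibility
# is derived, not primitive: u accesses w when all laws of u hold as
# facts in w.  Necessity and possibility then fall out in the usual way,
# and each choice of law set (subjective, natural, obligatory, ...)
# yields a different modality.
worlds = {
    "u":  ({"p"}, {"p", "q"}),
    "w1": (set(), {"p", "r"}),
    "w2": (set(), {"q"}),
}

def accessible(u, w):
    laws_u, _ = worlds[u]
    _, facts_w = worlds[w]
    return laws_u <= facts_w

def necessary(p, u):
    return all(p in worlds[w][1] for w in worlds if accessible(u, w))

def possible(p, u):
    return any(p in worlds[w][1] for w in worlds if accessible(u, w))

print(accessible("u", "w1"), accessible("u", "w2"))  # True False
print(necessary("p", "u"), possible("r", "u"))       # True True
```

Replacing the law set of each world changes the derived accessibility relation, so the subscripted modalities ◊_{1}, ..., ◊_{5} discussed above correspond to running the same evaluation with different selections of laws.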
To relate events to the agents who form plans and execute them, Bratman (1987) distinguished three determining factors: beliefs, desires, and intentions (BDI). He insisted that all three are essential and that none of them can be reduced to the other two. Peirce would have agreed: the appetitive aspect of desire is a kind of Firstness; belief is a kind of Secondness that relates a proposition to a situation; and intention is a kind of Thirdness that relates an agent, a situation, and the agent's plan for action in the situation. To formalize Bratman's theory in Kripke-style model structures, Cohen and Levesque (1990) extended Kripke's triples to BDI octuples of the form (Θ,P,E,Agnt,T,B,G,Φ):
The list of features in the BDI octuples is a good summary of the kinds of information that any formalization of intentionality must accommodate. But it also demonstrates the limitations of Kripke-style models in comparison to the more general nested graph models:
Barwise, Jon, & John Etchemendy (1993) Tarski's World, CSLI Publications, Stanford, CA.
Bratman, Michael E. (1987) Intentions, Plans, and Practical Reason, Harvard University Press, Cambridge, MA.
Cohen, Philip R., & Hector J. Levesque (1990) "Intention is choice with commitment," Artificial Intelligence 42:3, 213-261.
Dunn, J. Michael (1973) "A truth value semantics for modal logic," in H. Leblanc, ed., Truth, Syntax and Modality, North-Holland, Amsterdam, pp. 87-100.
Groenendijk, Jeroen, & Martin Stokhof (1991), "Dynamic Predicate Logic", Linguistics and Philosophy 14:1, pp. 39-100.
Heim, Irene R. (1982) The Semantics of Definite and Indefinite Noun Phrases, PhD Dissertation, University of Massachusetts, Amherst. Published (1988) Garland, New York.
Hilpinen, Risto (1982) "On C. S. Peirce's theory of the proposition: Peirce as a precursor of game-theoretical semantics," The Monist 65, 182-88.
Hintikka, Jaakko (1963) "The modes of modality," Acta Philosophica Fennica, Modal and Many-valued Logics, pp. 65-81.
Hintikka, Jaakko (1973) Logic, Language Games, and Information, Clarendon Press, Oxford.
Hintikka, Jaakko, & Jack Kulas (1985) The Game of Language: Studies in Game-Theoretical Semantics and its Applications, D. Reidel, Dordrecht.
Hughes, G. E., & M. J. Cresswell (1968) An Introduction to Modal Logic, Methuen, London.
Kamp, Hans (1981) "Events, discourse representations, and temporal references," Langages 64, 39-64.
Kamp, Hans, & Uwe Reyle (1993) From Discourse to Logic, Kluwer, Dordrecht.
Karttunen, Lauri (1976) "Discourse referents," in J. McCawley, ed., Syntax and Semantics vol. 7, Academic Press, New York, pp. 363-385.
Kripke, Saul A. (1963) "Semantical analysis of modal logic I," Zeitschrift für mathematische Logik und Grundlagen der Mathematik 9, 67-96.
Kripke, Saul A. (1965) "Semantical analysis of modal logic II: Non-normal modal propositional calculi," in J. W. Addison, Leon Henkin, & Alfred Tarski (1965) The Theory of Models, North-Holland Publishing Co., Amsterdam, pp. 206-220.
Lewis, David K. (1986) On the Plurality of Worlds, Basil Blackwell, Oxford.
McCarthy, John (1977) "Epistemological problems of artificial intelligence," Proceedings of IJCAI-77, reprinted in J. McCarthy, Formalizing Common Sense, Ablex, Norwood, NJ.
McCarthy, John (1993) "Notes on formalizing context," Proc. IJCAI-93, Chambéry, France, pp. 555-560.
Montague, Richard (1967) "On the nature of certain philosophical entities," originally published in The Monist 53 (1960), revised version in Montague (1974) pp. 148-187.
Montague, Richard (1970) "The proper treatment of quantification in ordinary English," reprinted in Montague (1974), pp. 247-270.
Montague, Richard (1974) Formal Philosophy, Yale University Press, New Haven.
Ockham, William of (1323) Summa Logicae, Johannes Higman, Paris, 1488. The edition owned by C. S. Peirce.
Peirce, Charles Sanders (1880) "On the algebra of logic," American Journal of Mathematics 3, 15-57.
Peirce, Charles Sanders (1885) "On the algebra of logic: a contribution to the philosophy of notation," American Journal of Mathematics 7, 180-202.
Peirce, Charles Sanders (1902) Logic, Considered as Semeiotic, MS L75, edited by Joseph Ransdell, http://members.door.net/arisbe/menu/LIBRARY/bycsp/L75/ver1/l75v1-01.htm
Peirce, Charles Sanders (1906) "Prolegomena to an apology for pragmaticism," The Monist, vol. 16, pp. 492-497.
Peirce, Charles Sanders (1909) Manuscript 514, with commentary by J. F. Sowa, available at http://www.jfsowa.com/peirce/ms514.htm
Prior, Arthur N. (1968) Papers on Time and Tense, revised edition ed. by P. Hasle, P. Øhrstrøm, T. Braüner, & B. J. Copeland, Oxford University Press, 2003.
Quine, Willard Van Orman (1972) "Responding to Saul Kripke," reprinted in Quine, Theories and Things, Harvard University Press, Cambridge, MA, 1981.
Roberts, Don D. (1973) The Existential Graphs of Charles S. Peirce, Mouton, The Hague.
Shapiro, Stuart C. (1979) "The SNePS semantic network processing system," in N. V. Findler, ed., Associative Networks: Representation and Use of Knowledge by Computers, Academic Press, New York, pp. 263-315.
Shapiro, Stuart C., & William J. Rapaport (1992) "The SNePS family," in F. Lehmann, ed., Semantic Networks in Artificial Intelligence, Pergamon Press, Oxford.
Sowa, John F. (1984) Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, Reading, MA.
Sowa, John F. (1995) "Syntax, semantics, and pragmatics of contexts," in Ellis et al. (1995) Conceptual Structures: Applications, Implementation, and Theory, Lecture Notes in AI 954, Springer-Verlag, Berlin, pp. 1-15.
Sowa, John F. (2000) Knowledge Representation: Logical, Philosophical, and Computational Foundations, Brooks/Cole Publishing Co., Pacific Grove, CA.
Tarski, Alfred (1933) "Pojęcie prawdy w językach nauk dedukcyjnych," German trans. as "Der Wahrheitsbegriff in den formalisierten Sprachen," English trans. as "The concept of truth in formalized languages," in Tarski, Logic, Semantics, Metamathematics, second edition, Hackett Publishing Co., Indianapolis, pp. 152-278.
Thomason, Richmond H. (2001) "Review of Formal Aspects of Context edited by Bonzon et al.," Computational Linguistics 27:4, 598-600.
Veltman, Frank C. (1996), "Defaults in Update Semantics," Journal of Philosophical Logic 25, 221-261.