Laws, Facts, and Contexts:
Foundations for Multimodal Reasoning

John F. Sowa

Abstract.  Leibniz's intuition that necessity corresponds to truth in all possible worlds enabled Kripke to define a rigorous model theory for several axiomatizations of modal logic. Unfortunately, Kripke's model structures lead to a combinatorial explosion when they are extended to all the varieties of modality and intentionality that people routinely use in ordinary language. As an alternative, any semantics based on possible worlds can be replaced by a simpler and more easily generalizable approach based on Dunn's semantics of laws and facts and a theory of contexts based on the ideas of Peirce and McCarthy. To demonstrate consistency, this paper defines a family of nested graph models, which can be specialized to a wide variety of model structures, including Kripke's models, situation semantics, temporal models, and many variations of them. An important advantage of nested graph models is the option of partitioning the reasoning tasks into separate metalevel stages, each of which can be axiomatized in classical first-order logic. At each stage, all inferences can be carried out with well-understood theorem provers for FOL or some subset of FOL. To prove that nothing more than FOL is required, Section 6 of this paper shows how any nested graph model with a finite nesting depth can be flattened to a conventional Tarski-style model. For most purposes, however, the nested models are computationally more tractable and intuitively more understandable.

This is a talk that was presented at the Φlog Conference at the University of Roskilde, Denmark, on 4 May 2002.




Summary

Semantics based on laws, facts, and contexts...






Topics Leading to the Summary






Laws and Facts






Dunn's Innovation

Define accessibility R from a world w1 to a world w2 to mean that the laws L1 of w1 are a subset of the facts M2 of w2:

R(w1,w2) ≡ L1 ⊆ M2.

Implications:

Result:  All modal and multimodal reasoning can be treated as metalevel reasoning about the laws and facts.
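
A minimal sketch of this metalevel check in Python, assuming each world is represented simply by two sets of propositions, its laws and its facts; the names World, laws, facts, and accessible are illustrative assumptions, not part of Dunn's or Sowa's formulation:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class World:
        """A world given by its laws and its facts (the laws are among the facts)."""
        laws: frozenset
        facts: frozenset

    def accessible(w1: World, w2: World) -> bool:
        """Dunn-style accessibility: R(w1,w2) holds iff every law of w1
        is among the facts of w2."""
        return w1.laws <= w2.facts

    # Example: every law of w1 is a fact of w2, so w2 is accessible from w1;
    # the converse fails because "q" is a law of w2 but not a fact of w1.
    w1 = World(laws=frozenset({"p"}), facts=frozenset({"p", "s"}))
    w2 = World(laws=frozenset({"p", "q"}), facts=frozenset({"p", "q", "r"}))
    assert accessible(w1, w2)
    assert not accessible(w2, w1)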




Contexts

The next step is to define a theory of contexts that clearly separates the metalevel from the object level.






Peirce's Contexts

Peirce used an oval to group or quote a proposition to be discussed:






Negated Contexts

Since p ⊃ q is equivalent to ~(p ∧ ~q), Peirce used a nest of two negations to represent implication:
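
For example, in a linearized form with brackets standing in for Peirce's ovals (the bracket rendering is only an illustration of the nesting, not Peirce's own notation), "If it rains, then the street is wet" becomes a negated context containing "it rains" together with a negated context containing "the street is wet":

    ~[ it rains  ~[ the street is wet ] ]

Reading off the nesting gives exactly ~(p ∧ ~q).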






Kamp's Discourse Representation Structures

For his discourse representation theory, Hans Kamp independently developed an isomorphic notation for representing context-dependent indexicals:






Modal Contexts

In 1906, Peirce introduced colors or shading to represent modal contexts:

"You can lead a horse to water, but you can't make him drink."






Modal Contexts

Corresponding conceptual graph:

"You can lead a horse to water, but you can't make him drink."






Multiple Modalities

English sentence:  "Tom believes that Mary wants to marry a sailor."






Changing Scope

English sentence:  "There is a sailor that Tom believes Mary wants to marry."
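
The difference between this sentence and the previous one is only a matter of where the existential quantifier is placed relative to the nested contexts. In a linearized form (brackets marking the propositions that Tom believes and Mary wants; this rendering is an illustration, not the original conceptual graphs):

    Narrow scope:  Tom believes [ Mary wants [ (∃x)(sailor(x) ∧ marry(Mary,x)) ] ].
    Wide scope:    (∃x)(sailor(x) ∧ Tom believes [ Mary wants [ marry(Mary,x) ] ]).

In the narrow-scope reading, the sailor need exist only inside the contexts of Tom's belief and Mary's desire; in the wide-scope reading, the quantifier occurs in the outermost context, so some actual sailor is being talked about.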






McCarthy's Contexts

John McCarthy introduced the predicate isTrueIn(C,p) to say that p is true in context C:

To combine Dunn's semantics with McCarthy's contexts, introduce another predicate isLawOf:

Truth is now a context-dependent indexical.
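
One way to spell out the combination (this formulation is a reconstruction for illustration, not a quotation from the slide): let isLawOf(C,p) mean that p is a law of context C, and define accessibility between contexts by requiring every law of the first to be true in the second,

    R(c1,c2) ≡ (∀p)(isLawOf(c1,p) ⊃ isTrueIn(c2,p)).

Necessity and possibility then become metalevel statements: p is necessary with respect to c1 iff isTrueIn(c2,p) for every accessible c2, and possible iff isTrueIn(c2,p) for some accessible c2.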




Tarski's Metalevels






Example

Illustrate metalevel reasoning on a sample English sentence:

Joe said "I don't believe in astrology, but they say that it works even if you don't believe in it."

  1. Mark indexicals with the # symbol, and mark nested contexts with square brackets:
    Joe said
      [#I don't believe [in astrology]
        but #they say
          [[#it works]
            even if #you don't believe [in #it]]].
    

  2. Resolve indexicals:  #I = Joe; "they say" = "every person believes"; #it = astrology; #you = "every person".
    Joe said
      [Joe doesn't believe [astrology works]
        but every person x believes
          [[astrology works]
            even if x doesn't believe
              [astrology works] ]].
    

  3. If Joe was sincere, he believes what he said, and a statement of the form "p even if q" implies p.
    Joe believes
      [Joe doesn't believe [astrology works]
        and every person x believes
          [astrology works] ].
    

  4. Substitute "Joe" for x in "every person x":
    Joe believes
      [Joe doesn't believe [astrology works]
        and Joe believes [astrology works] ].
    

  5. Substitute p for "Joe believes [astrology works]":
    Joe believes [p ∧ ~p].
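
The last two steps are purely syntactic: instantiate the universally quantified belief with Joe, then recognize a proposition and its negation inside the same belief context. A small Python sketch of that final check, assuming nested propositions are encoded as tuples (the encoding and the names used here are illustrative assumptions only):

    # A proposition is an atom (a string) or a tuple such as
    # ("believes", agent, proposition), ("not", p), or ("and", p, q).
    ASTROLOGY_WORKS = "astrology works"

    # Result of step 4: what Joe believes after substituting Joe for x.
    joes_belief = ("and",
                   ("not", ("believes", "Joe", ASTROLOGY_WORKS)),
                   ("believes", "Joe", ASTROLOGY_WORKS))

    def contradictory(p) -> bool:
        """True if p has the form q ∧ ~q or ~q ∧ q."""
        if isinstance(p, tuple) and p and p[0] == "and":
            _, left, right = p
            return left == ("not", right) or right == ("not", left)
        return False

    # Step 5: inside Joe's belief context the conjunction is p ∧ ~p.
    assert contradictory(joes_belief)

Note that the contradiction is derived inside the context of Joe's beliefs; nothing inconsistent is asserted at the outermost level.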
    





Nested Graph Models






Specializing NGMs

Nested graph models can be specialized for a wide variety of purposes:






Situations



Mapping possible worlds to situations and contexts






Flattening the Nest

To demonstrate that nothing more than FOL is being used, any NGM can be flattened to a conventional Tarski-style model by the following steps (a code sketch follows the list):

  1. Observe that contexts are a syntactic device for partitioning the name space for individuals and relations.

  2. It is possible to assign a unique name to every context.

  3. Then append the name of any context x as an extra argument to every relation that occurs in x.

  4. Use restricted quantifiers to limit the range of any quantifier that occurs in context x to the entities that appear in x.

  5. Then erase all the context brackets.
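
A minimal sketch of steps 2, 3, and 5 in Python, assuming a nested graph is encoded as a list whose elements are either relation tuples or nested lists for nested contexts; the encoding, the generated context names, and the function name flatten are assumptions made for illustration, and the restricted quantifiers of step 4 are omitted:

    from itertools import count

    def flatten(graph, ctx_ids=None, ctx="c0"):
        """Flatten a nested graph into a flat list of relation tuples.
        Every relation gets the name of its enclosing context as an
        extra argument, and the context brackets disappear."""
        if ctx_ids is None:
            ctx_ids = count(1)
        flat = []
        for item in graph:
            if isinstance(item, list):                # a nested context
                inner = "c{}".format(next(ctx_ids))   # step 2: unique name
                flat.append(("context", inner, ctx))  # record the nesting
                flat.extend(flatten(item, ctx_ids, inner))
            else:                                     # a relation tuple
                flat.append(item + (ctx,))            # step 3: extra argument
        return flat                                   # step 5: no brackets left

    # "Tom believes that Mary wants to marry a sailor", with the nested
    # contexts shown as nested lists (illustrative encoding only):
    ngm = [("believes", "Tom"),
           [("wants", "Mary"),
            [("marry", "Mary", "x"), ("sailor", "x")]]]
    for fact in flatten(ngm):
        print(fact)

Each relation in the output carries the name of the context in which it occurred, and the nesting itself is recorded by the auxiliary context relation, so the brackets can be erased without losing information.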





Why Flattening is a Bad Idea

The possibility of flattening an NGM shows that the semantics is first-order.

But the act of flattening is usually unwise:






Actuality, Modality, Intentionality



Peirce's classification of contexts






Actuality vs. Modality

Peirce considered two ways of talking about what is actual:

But for modality, Peirce suggested a pad of paper:






Modality

Peirce's classification of 1906:






Intentionality






Summary

Semantics based on laws, facts, and contexts...







Copyright ©2002, John F. Sowa