Responses to Five Questions on Signs and Meaning

John F. Sowa

This article is a preprint of a chapter in the book Signs and Meaning:  5 Questions, edited by Peer Bundgaard and Frederik Stjernfelt, Automatic Press, New York, 2009.

1. Why were you initially drawn to the theory of signs and meaning?

I had been interested in science and language since I was a child. The science came from my father, who had studied chemical engineering and gave clear scientific answers to all my questions. The interest in language came from my maternal grandmother, who spoke only Polish at home. At MIT and Harvard, I majored in mathematics, but I also studied languages and philosophy. I spent 30 years working on research and development projects at IBM, and by focusing on artificial intelligence and computational linguistics, I was able to combine all my interests while still doing work that was useful to the company.

In my studies of philosophy, I knew of Peirce only as a friend of William James. At Harvard, I took some courses in logic, but I never heard anything about Peirce from the philosophers there, despite the fact that his manuscripts were buried in the Harvard library. In 1978, I finally came across an article about Peirce’s existential graphs by Martin Gardner in the Mathematical Games column of Scientific American. I immediately noticed a similarity between Peirce’s existential graphs and my earlier article on conceptual graphs (Sowa 1976). In my first book (Sowa 1984), I redefined the logical foundations for conceptual graphs in terms of existential graphs. From that initial attraction to Peirce’s logic, I have been continuing my studies of his semeiotic and its relationship to all branches of cognitive science.

2. What do you consider your contribution to the field?

In my work in artificial intelligence, I have been trying to relate the enormous power and flexibility of language to the mathematical precision required for science. But research in philosophy, linguistics, and AI has been polarized between the “scruffies” and the “neats”. Those terms were coined by Roger Schank, who proudly called himself a scruffy because of his often ad hoc computational methods for addressing the complexities of ordinary language. He denounced the logic-based methods of the neats, such as Richard Montague, as irrelevant for linguistics and AI. Although I admired the precision of logicians such as Carnap, Quine, and Montague, I realized that the cognitive mechanisms must be flexible and that absolute precision is a highly unusual special case. My solution was to develop conceptual graphs as a notation for logic with a continuous range of precision. At one extreme, CGs are as formal as Montague’s logic, but they can be used in approximations that are as scruffy as Schank’s. The key innovation is not in the CG notation itself, but in the methods for relating CGs to background knowledge.
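The existential reading behind conceptual graphs can be sketched in a few lines of code. The following toy Python fragment is only an illustration, not Sowa’s or any actual CG software; all class and function names are invented. It represents concept and relation nodes and renders the standard translation to predicate calculus, in which each concept node contributes an existentially quantified variable:

```python
# Toy sketch (invented names, not an actual CG implementation):
# a conceptual graph as concept nodes linked by relation nodes,
# translated to a first-order formula under the existential reading.
from dataclasses import dataclass

@dataclass
class Concept:
    type: str      # e.g. "Cat"
    var: str       # existentially quantified variable, e.g. "x"

@dataclass
class Relation:
    type: str      # e.g. "On"
    args: tuple    # the Concept nodes it links

def to_formula(concepts, relations):
    """Each concept node asserts its type of a quantified variable;
    each relation node asserts its type of the linked variables."""
    quants = "".join(f"∃{c.var}" for c in concepts)
    atoms = [f"{c.type}({c.var})" for c in concepts]
    atoms += [f"{r.type}({','.join(c.var for c in r.args)})"
              for r in relations]
    return f"{quants}({' ∧ '.join(atoms)})"

cat, mat = Concept("Cat", "x"), Concept("Mat", "y")
print(to_formula([cat, mat], [Relation("On", (cat, mat))]))
# → ∃x∃y(Cat(x) ∧ Mat(y) ∧ On(x,y))
```

The formal end of the spectrum corresponds to reading such a graph as a closed formula; the scruffy end corresponds to treating the same graph merely as a pattern to be matched loosely against background knowledge.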

Before I began to study Peirce’s writings, the two philosophers who had the strongest influence on me were Whitehead and Wittgenstein. Like Peirce, both of them had a strong background in logic, mathematics, and science, but they appreciated the full complexity of language. Another influence was Pike’s Unified Theory of Human Behavior, which addressed the distinction between the etic (continuous) and emic (discrete) aspects of all modes of language and behavior. Those influences led me to develop methods for relating the rigid notations of mathematical logic to the flexible but vague aspects of natural languages. In my first book (Sowa 1984), I devoted Chapter 2 to a survey of cognitive psychology that emphasized the issues of perceiving and interacting with a continuous world and talking about it in terms of discrete words. The concluding paragraph of Section 2.3 captures the essential point:

Advocates of AI, who concentrate on the discrete aspects, are optimistic about the prospects for simulating intelligence on a digital computer. Critics who concentrate on the continuous forms maintain that simulation of intelligence by digital means is impossible. Since the human brain uses both kinds of processes, a complete simulation may require some combination of digital and analog means.

The final chapter of that book, “Limits of Conceptualization,” surveyed “the continuous aspects of the world that cannot be adequately expressed in discrete concepts and conceptual relations.” More recently, I used the term knowledge soup (Sowa 2000, 2005) to describe the complexity of what people have in their heads. Whitehead (1937) aptly characterized the problem:

Human knowledge is a process of approximation. In the focus of experience, there is comparative clarity. But the discrimination of this clarity leads into the penumbral background. There are always questions left over. The problem is to discriminate exactly what we know vaguely.

The poet Robert Frost (1963) suggested a solution:

I’ve often said that every poem solves something for me in life. I go so far as to say that every poem is a momentary stay against the confusion of the world.... We rise out of disorder into order. And the poems I make are little bits of order.

Logic and poetry are complementary disciplines that use analogy to find relevant knowledge and assemble it in a tightly structured proof or poem. All methods of formal reasoning — deduction, induction, and abduction — are disciplined special cases of analogy (Sowa & Majumdar 2003). But as Peirce observed, discipline is “purely inhibitory. It originates nothing” (CP 5.194). Yet discipline is necessary to prune away irrelevant or misguided excess. To support high-speed reasoning, both formal and analogical, Majumdar invented algorithms for mapping discrete conceptual graphs to and from continuous geometric fields. Two aspects of those algorithms are critical for meeting the challenge of knowledge soup:  First, their speed enables them to simulate an associative memory that can store and retrieve arbitrary volumes of background knowledge. Second, with varying constraints on the mapping, the reasoning can be as vague or precise as appropriate for any given application. The tight constraints of generalization and specialization support the disciplined methods of deduction, induction, and abduction. Looser constraints can be used for analogies at any degree of vagueness. By tightening the constraints in incremental steps, a reasoning engine can systematically tailor a vague guess to a precise solution.
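The idea of tightening constraints in incremental steps can be illustrated with a deliberately simplified sketch. This is not the algorithms mentioned above; the type hierarchy, names, and scoring rule are all invented for illustration. A loose match accepts any common supertype, while a strict match demands identical labels:

```python
# Toy sketch of "tightening the constraints" (invented names and data):
# match two sets of relation triples, first loosely (a common supertype
# in the hierarchy counts as compatible), then strictly (labels must match).
supertype = {"Kitten": "Cat", "Cat": "Animal", "Dog": "Animal", "Animal": None}

def compatible(a, b, strict):
    if a == b:
        return True
    if strict:
        return False
    # loose mode: climb the type hierarchy looking for a common ancestor
    ancestors = set()
    t = a
    while t:
        ancestors.add(t)
        t = supertype.get(t)
    t = b
    while t:
        if t in ancestors:
            return True
        t = supertype.get(t)
    return False

def match_score(graph_a, graph_b, strict):
    """Fraction of triples in graph_a with a compatible triple in graph_b."""
    hits = sum(
        any(all(compatible(x, y, strict) for x, y in zip(ta, tb))
            for tb in graph_b)
        for ta in graph_a)
    return hits / len(graph_a)

a = [("Cat", "On", "Mat")]
b = [("Kitten", "On", "Mat")]
print(match_score(a, b, strict=False))  # 1.0 (the vague analogy succeeds)
print(match_score(a, b, strict=True))   # 0.0 (the precise match fails)
```

Stepping the constraint from loose to strict mirrors the progression from a vague guess toward a precise solution: candidates found under weak constraints are retested as the compatibility test is tightened.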

3. What is the proper role of a theory of signs and meaning in relation to other academic disciplines?

Peirce convinced me that a theory of signs is the proper foundation for cognitive science, which includes philosophy, psychology, linguistics, anthropology, neuroscience, and artificial intelligence. Some people have suggested that neuroscience might someday provide a suitable foundation for the other branches, and others have suggested that AI would. But both of those views are misguided. Neuroscience and AI have strongly influenced other branches, including each other. Yet both of them have been guided by the branches that study the external effects of cognition:  psychology, linguistics, and anthropology. Since the same topics can be studied from different points of view, cognitive science, by its nature, must be interdisciplinary.

Language affects and is affected by every aspect of cognition. Only one topic is more pervasive than language:  signs in general. Every cell of every organism is a semiotic system, which receives signs from the environment, including other cells, and interprets them by generating more signs, both to control its own inner workings and to communicate with other cells of the same organism or different organisms. The brain is a large colony of neural cells, which receives, generates, and transmits signs to the cells of the complete organism, which is an even larger colony. Every publication in neuroscience describes brains and neurons as systems that receive signs, process signs, and generate signs. Every attempt to understand those signs relates them to other signs from the environment, to signs generated by the organism, and to theories of those signs in other branches of cognitive science. The meaning of the neural signs can only be determined by situating neuroscience within a more complete theory that encompasses every aspect of cognitive science.

Philosophy is considered the foundation for all other subjects, but philosophy itself has many branches, some of which are more fundamental than others. Aristotle called metaphysics first philosophy because it studies the nature of being itself. Yet the first six books of the Aristotelian corpus, the organon or instrument for carrying out any philosophical or scientific study, present Aristotle’s theory of signs. Metaphysics is a prerequisite for science, but an understanding of signs is a prerequisite for studying anything, including metaphysics.

In short, neuroscience is one component of the larger field of cognitive science, whose ultimate foundation is the theory of signs. Like neuroscience, artificial intelligence relates cognitive signs to lower-level signs, which happen to be the data structures and operations of computer systems. In another galaxy, living things might have a totally different biology and neurophysiology, but all their life processes must be governed by signs. For all forms of life, evidence for the meaning of the internal signs comes from external signs of an organism interacting with its environment. For human life, psychology, linguistics, and anthropology study the external signs. Understanding the relationships between levels can clarify many issues, but it cannot “reduce” the external to the internal.

4. What do you consider the most important topics and/or contributions in the theory of meaning and signs?

The single most important contribution was Peirce’s integration of the theories of the Greeks and Scholastics with modern logic, science, and philosophy. Aristotle laid the foundation in his treatise On Interpretation. His opening paragraph relates language to internal affections (pathêmata), whose existence is not in doubt, but whose nature is unknown:

First we must determine what are noun (onoma) and verb (rhêma); and after that, what are negation (apophasis), assertion (kataphasis), proposition (apophansis), and sentence (logos). Those in speech (phonê) are symbols (symbola) of affections (pathêmata) in the psyche, and those written (graphomena) are symbols of those in speech. As letters (grammata), so are speech sounds not the same for everyone. But they are signs (sêmeia) primarily of the affections in the psyche, which are the same for everyone, and so are the objects (pragmata) of which they are likenesses (homoiômata). On these matters we speak in the treatise on the psyche, for it is a different subject. (16a1)

In this short passage, Aristotle introduced ideas that have been adopted, ignored, revised, rejected, and dissected over the centuries. By using two different words for sign, he recognized two distinct ways of signifying:  sêmeion for a natural sign and symbolon for a conventional sign. With the word sêmeion, which was used for omens and for symptoms of a disease, Aristotle implied that the verbal sign is primarily a natural sign of the mental affection or concept and secondarily a symbol of the object it refers to.

In the last sentence of that paragraph, Aristotle noted that the study of the psyche is a distinct, but related topic. That point is key to Aristotle’s success in avoiding the dangers of psychologism. Any system that interprets signs is affected by those signs and must therefore have some internal affections. Aristotle called such systems psyches, and he assumed that the affection must have some likeness (homoiôma) to the external object. That assumption would be just as true of the “psyche” of a robot that relates linguistic signs to images of the environment.

The triad of sêmeion, pathêma, and pragma forms a meaning triangle, which Ogden and Richards (1923) drew explicitly. Although they didn’t draw triangles, the Scholastics were far ahead of Ogden and Richards. Their Latin terms for the triad were signum, significatio, and suppositio. They originally followed Aristotle in saying that the signification was an affection (passio animae), but they also called it a mental concept (conceptus mentis). They extended Aristotle’s point that written signs are symbols of the spoken to a more general theory about signs of signs. They adopted the term prima intentio for a triad whose supposition is a real or imaginary physical object, and secunda intentio for a triad whose supposition is another sign. At the same time, they began to think of concepts less as likenesses (similitudines) than as language-independent signs of things (signa rerum). With this shift, all nodes of the meaning triangle became signs or even signs of signs. In logic, they combined Aristotle’s syllogisms with a propositional logic that included a version of De Morgan’s laws. An important achievement was Ockham’s Summa Logicae, which included a model-theoretic semantics for Latin. Ockham wasn’t as formal as Tarski, but he went beyond Tarski by stating truth conditions for temporal, modal, and causal propositions. He also went beyond Russell by accommodating suppositions of fictional things, such as a chimera, or intended things that did not yet exist.

Peirce had studied the Greek and Scholastic theories in depth and boasted of having the largest collection of medieval manuscripts on logic in the Boston area. He combined their innovations with the categories of Firstness, Secondness, and Thirdness, which he had discovered by analyzing the relationships implicit in Kant’s table of twelve categories. Unlike Aristotle, whose categories are the most general types of entities, Peirce used his triad in a metalevel procedure for generating new triads by subdividing signs of any kind. Instead of two interlocking triangles for first and second intentions, Peirce could apply his method to any node of any triangle to spawn another triangle. Peirce also introduced new ideas that went beyond the Scholastic theories. Among them is the principle of continuity, which led him to the conclusion that the precision of logic is the goal of analysis, not the starting point:

Get rid, thoughtful Reader, of the Ockhamistic prejudice of political partisanship that in thought, in being, and in development the indefinite is due to a degeneration from a primal state of perfect definiteness. The truth is rather on the side of the Scholastic realists that the unsettled is the primal state, and that definiteness and determinateness, the two poles of settledness, are, in the large, approximations, developmentally, epistemologically, and metaphysically. (CP 6.348)

According to Peirce, the meaning of a symbol grows during the stages of learning and use, both in science and in everyday life. He recognized that a formal logic, in which every symbol has a single precise meaning, is valuable for recording the results of analysis. But he also realized that such a language, by itself, cannot support novelty and creativity. It would be unusable for learning, planning, discovery, negotiation, and persuasion.

5. What are the most important open problems in this field and what are the prospects for progress?

The most important problem is to correct the “grave errors” (schwere Irrtümer) that Wittgenstein (1953) recognized in the framework he had adopted from his mentors, Frege and Russell. One of the worst was the view that logic is superior to natural languages and should replace them for scientific purposes. Frege (1879), for example, hoped “to break the domination of the word over the human spirit by laying bare the misconceptions that through the use of language often almost unavoidably arise concerning the relations between concepts.” Russell shared Frege’s negative view of natural language, and both of them inspired Carnap, the Vienna Circle, and most of analytic philosophy. Some philosophers who had read Wittgenstein’s later work and commented on it favorably continued to preach the same grave errors. Dummett (1981:316), for example, still claimed that vagueness was “an unmitigated defect of natural language.” Dummett (1993:170) also said that Austin’s work on speech acts “was harmful and pushed people in the wrong direction.” Wittgenstein’s later philosophy, by contrast, treats language as an open-ended family of language games. During a dialog, the games can change, and the symbols can grow in continuous and unpredictable ways. In a written text, the author plays language games with the reader and develops those games during the exposition. Even a textbook on mathematics shifts games from explanations and applications to conjectures, proofs, counterexamples, and exercises. In a narrative, the characters play language games with each other. Contrary to Chomsky, language competence is the ability to recognize, invent, and play those games.

Unlike Frege and Russell, Peirce had a high regard for language, and instead of trying to reform it, he did his best to understand it. A crucial experience came in the late 19th century, when he was employed as an associate editor of the Century Dictionary. During that period, he wrote, revised, or edited over 16,000 definitions — more than any other editor of that dictionary and much more than most philosophers of language accomplish in a lifetime. The combined influence of logic and lexicography is evident in a letter he wrote to the general editor, B. E. Smith:

The task of classifying all the words of language, or what’s the same thing, all the ideas that seek expression, is the most stupendous of logical tasks. Anybody but the most accomplished logician must break down in it utterly; and even for the strongest man, it is the severest possible tax on the logical equipment and faculty.

As logicians, Peirce, Whitehead, and Wittgenstein were as good as or better than Frege, Russell, and Carnap. The former embraced vagueness as the starting point for analysis; the latter tried to build a fortress that would exclude any possibility of vagueness. Unfortunately, their fortress is a fragile glass house that collapses at the first contradiction. Some logicians tried to develop formal logics of fuzziness and ambiguity, but what they built is a metalevel glass house to protect the object-level glass. Some pioneers in formal semantics, such as Kamp (2001) and Partee (2005), admitted that logic alone is not sufficient to solve the problems, but they had no alternative to offer. Peirce never rejected logic, but he had a more encompassing system.

Whitehead and Wittgenstein accepted most of Peirce’s principles in one form or another, but every one of them was ignored, deliberately rejected, or considered a defect by Frege and his followers. Wittgenstein’s language games, for example, are compatible with Peirce’s principle of pragmatism, context dependencies, the idea that symbols grow, and the minor role of deduction in language understanding and use. The willingness to accept vagueness is an implicit recognition of continuity, but Peirce emphasized it explicitly. A promising approach by Thom and Wildgen (1982, 1994) derives the discrete structures of language and logic from continuous fields. The elegant crystals of logic are like diamonds that form in a continuous flow of magma.

As an application of his categories, Peirce recognized that the language arts of grammar, logic, and rhetoric are a clear example of his triadic principles. He generalized all three fields to more general approaches that included natural languages as well as the formalisms of mathematics and symbolic logic. To avoid the connotations of the traditional fields, Morris renamed the three terms of that triad as syntax, semantics, and pragmatics. During the 20th century, natural language syntax was studied in depth, but the fields of semantics and pragmatics were fragmented into competing approaches to a confused mass of language-related phenomena. Although Peirce himself is no longer available, his method can still be used to find order in that chaos.

Semantics, loosely speaking, is the study of meaning, but the meaning triangle has three sides, and different studies typically emphasize one side or another:  the link from words to the concepts they express; the link from words and sentences to objects and truth values; or the link from concepts to percepts of objects and actions upon them. Instead of integrating all three sides in a single subject with different aspects, linguists usually narrow their focus to competing, one-sided approaches named lexical semantics, formal semantics, and cognitive semantics:

  1. Lexical semantics addresses the link between words and concepts. It follows Saussure’s definition of language (langue) as “the whole set of linguistic habits, which allow an individual to understand and be understood” (1916). Lexicographers analyze a corpus of contextual citations and catalog the linguistic habits in lexicons, thesauri, and terminologies.

  2. Formal semantics bypasses the concept node of the triangle and relates words and sentences directly to objects and configurations of objects. An alternate name, derived from the formalism, is model-theoretic semantics. Although some linguists developed versions of formal semantics, most of the proponents come from philosophy and computer science. Yet despite 40 years of sustained research, none of the computer implementations can translate one page from an ordinary textbook to any version of logic.

  3. Cognitive semantics relates language-independent concepts to perception and action in a social context. Linguists who specialize in cognitive semantics often collaborate in interdisciplinary studies with psychologists and anthropologists. Among them are Lakoff (1987), Langacker (1999), Talmy (2000), and Wierzbicka (1996).

Pragmatics or rhetoric analyzes the language games. Like semantics, pragmatics can be studied from different perspectives:  the structure of a text or discourse; the intentions of the author or speakers; or the social function of a game in the culture. Unlike the single semantic triad, the intentions of two or more participants in a social setting can entangle the pragmatic triad with multiple triads and subtriads. The plots of literary and historical narratives illustrate the complexity that can develop from a clash of perspectives and motivations. Much more research is needed to analyze all these relationships, but a Peircean approach provides the vocabulary and framework.


References

Aristotle, The Categories, On Interpretation, Prior Analytics, Harvard University Press, Cambridge, MA. (Quotation translated by J. F. Sowa)

Dummett, Michael (1981) The Interpretation of Frege’s Philosophy, Duckworth, London.

Dummett, Michael (1993) Origins of Analytical Philosophy, Harvard University Press, Cambridge, MA.

Frege, Gottlob (1879) Begriffsschrift, English translation in J. van Heijenoort, ed. (1967) From Frege to Gödel, Harvard University Press, Cambridge, MA, pp. 1-82.

Frost, Robert (1963) A Lover’s Quarrel with the World, film, WGBH Educational Foundation, Boston.

Kamp, Hans (2001) “Levels of linguistic meaning and the logic of natural language.”

Morris, Charles W. (1938) Foundations of the Theory of Signs, Chicago University Press, Chicago.

Ockham, William of (1323) Summa Logicae, Johannes Higman, Paris, 1488. (The edition owned by C. S. Peirce)

Ogden, C. K., & I. A. Richards (1923) The Meaning of Meaning, Harcourt, Brace, and World, New York, 8th edition 1946.

Partee, Barbara H. (2005) “Formal semantics,” Lectures at a workshop in Moscow.

Peirce, Charles Sanders (CP) Collected Papers of C. S. Peirce, ed. by C. Hartshorne, P. Weiss, & A. Burks, 8 vols., Harvard University Press, Cambridge, MA, 1931-1958.

Pike, Kenneth A. (1967) A Unified Theory of Human Behavior, 2nd edition, Mouton, The Hague.

Sowa, John F. (1976) “Conceptual graphs for a data base interface,” IBM Journal of Research and Development 20:4, 336-357.

Sowa, John F. (1984) Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, Reading, MA.

Sowa, John F. (2000) Knowledge Representation: Logical, Philosophical, and Computational Foundations, Brooks/Cole Publishing Co., Pacific Grove, CA.

Sowa, John F. (2005) “The Challenge of Knowledge Soup,” in J. Ramadas & S. Chunawala, Research Trends in Science, Technology, and Mathematics Education, Homi Bhabha Centre, Mumbai, pp. 55-90.

Sowa, John F., & Arun K. Majumdar (2003) “Analogical reasoning,” in A. de Moor, W. Lex, & B. Ganter, eds. (2003) Conceptual Structures for Knowledge Creation and Communication, LNAI 2746, Springer, Berlin, pp. 16-36.

Whitehead, Alfred North (1937) “Analysis of Meaning,” Philosophical Review, reprinted in A. N. Whitehead, Essays in Science and Philosophy, Philosophical Library, New York, pp. 122-131.

Wildgen, Wolfgang (1982) Catastrophe Theoretic Semantics:  An Elaboration and Application of René Thom’s Theory, John Benjamins Publishing Co., Amsterdam.

Wildgen, Wolfgang (1994) Process, Image, and Meaning:  A Realistic Model of the Meaning of Sentences and Narrative Texts, John Benjamins Publishing Co., Amsterdam.

Wittgenstein, Ludwig (1953) Philosophical Investigations, Basil Blackwell, Oxford.