John Corcoran is a logician best known for his philosophical work, illuminating such central concepts as the nature of inference, the role of logic in epistemology, and the place of model theory. Beginning around the end of the nineteenth century, if not before, logic itself became more mathematical, owing to the influence of mathematicians like Boole, De Morgan, Schröder, Frege, and others. It is not much of an oversimplification to state that it was the mathematicians who took logic beyond the Aristotelian systems that occupied the Middle Ages and prompted Kant’s infamous remark that logic began and finished with Aristotle. Nowadays, one cannot achieve competence on matters logical without participating, at some level, in the mathematical side of our subject.

The purpose of this chapter is to sketch Corcoran’s mathematical accomplishments. The following are among his vast array of publications:

[13] Corcoran, J., and G. Weaver, “Logical consequence in modal logic I: natural deduction in S5”, Notre Dame Journal of Formal Logic 10 (1969), 370–384.

[12] Corcoran, J., and J. Herring, “Notes on a semantic analysis of variable binding term operators”, Logique et Analyse 55 (1971), 644–657.

[11] Corcoran, J., W. Hatcher, and J. Herring, “Variable binding term operators”, Zeitschrift für mathematische Logik und Grundlagen der Mathematik 18 (1972), 177–182.

[4] Corcoran, J., “Completeness of an ancient logic”, Journal of Symbolic Logic 37 (1972), 696–702.

[5] Corcoran, J., “Strange arguments”, Notre Dame Journal of Formal Logic 13 (1972), 206–210.

[6] Corcoran, J., “Weak and strong completeness in sentential logics”, Logique et Analyse 59/60 (1972), 429–434.

[14] Corcoran, J., and G. Weaver, “Logical consequence in modal logic II: Some semantic systems for S4”, Notre Dame Journal of Formal Logic 15 (1974), 370–378.

[10] Corcoran, J., W. Frank, and M. Maloney, “String theory”, Journal of Symbolic Logic 39 (1974), 625–637.

[15] Corcoran, J., and S. Ziewacz, “Identity logics”, Notre Dame Journal of Formal Logic 20 (1979), 777–784.

[8] Corcoran, J., “Categoricity”, History and Philosophy of Logic 1 (1980), 187–207.

[9] Corcoran, J., “Information recovery problems”, Theoria 10 (1995), 55–78.

This work covers a lot of ground, over a lot of years. There is no single theme that runs through Corcoran’s mathematical publications. Instead, each piece bears directly on some crucial part or parts of one of his philosophical or pedagogical projects. In his philosophical research, he found himself venturing into new, relatively uncharted waters. Sometimes, certain mathematical results were needed, or at least helpful, and he supplied these himself.

You will notice that there was a steady flurry of activity from 1969 until 1974, and then there was a lull. I came under Corcoran’s tutelage in the Autumn of 1974, when I transferred to philosophy from the PhD program in mathematics at Buffalo. I received my PhD in philosophy in the summer of 1978, at which time the above list resumes, with a short note in 1979 followed by the lengthy piece on categoricity in 1980. So it seems as though Corcoran could not do serious mathematical research and teach me at the same time.

You will note that over half of the above papers are co-authored. In most cases, the co-authors were his students or former students. This reflects his teaching style. When he got interested in something, it infected everyone around him. Ideas were bounced around the halls of the department and at his home, where students were frequent guests. When it came time to write the material up, he made his young interlocutors co-authors, owing to their contributions to the incubation and development of the ideas. The exchanges were so intense that it was not worthwhile trying to figure out who deserved credit for what. I learned later that the more normal procedure in philosophy is to reward active, contributing students with an occasional footnote, at most.

When I was re-reading the papers, I found myself constantly stimulated. I kept plenty of paper and pencils handy, to work out details, explore suggested avenues of research, and try out some ideas of my own. It was a delightful, vicarious return to my student days.

A few of the papers bear directly on teaching, some explicitly. Corcoran was never satisfied to “merely” push the frontiers of philosophical and mathematical research. His passion for logic extended beyond the concerns of his colleagues and fellow researchers, to students of the subject, at all levels. He was always searching for better ways to formulate ideas and issues, to make them accessible and exciting to students. This, too, led him to some mathematical work.

The first article in the list, [13] “Logical consequence in modal logic I,” goes with a 1974 sequel [14] “Logical consequence in modal logic II.” Both of these papers are co-authored with George Weaver. They present natural deduction systems for the modal logics S5 and S4, respectively. The early textbooks and treatises of, say, Frege, Russell, Church, Tarski, and Kleene presented what are sometimes called Frege–Church deductive systems. They consist of axioms, all of which are logical truths, and one or perhaps two or three rules of inference: modus ponens, substitution, etc. Nowadays the more common approach, at least in beginning courses, is to present a natural deduction system: one that does not have many axioms, usually none at all. Instead, the system has rules of inference, typically two for each connective and quantifier—an introduction rule and an elimination rule. These rules allow for the introduction and discharge of assumptions.

Natural deduction pursues another theme that figures heavily in Corcoran’s philosophy of logic. It provides, or attempts to provide, a model of how people actually reason—when they reason correctly, of course. As Corcoran sees it, logic is a branch of epistemology and so should have something to say about how knowledge is obtained. So we need models of how deductive reasoning advances what is known.

My first logic course, which I took at a summer program for high school students in 1968, used a deductive system much like Frege’s. I was amused to learn that a proof of the tautology \(P\rightarrow P\) took five lines using about 100 characters. I found some beauty in the clever manipulation of the axioms, but that just shows my own warped sense of aesthetics. No one learns the truth of a sentence in the form \(P\rightarrow P\) via that derivation. In a natural deduction system, this tautology is a single application of the introduction rule for the arrow.
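For the record, here is the standard five-line derivation in one Frege–Church system, with axiom schemes A1: \(\Phi \rightarrow (\Psi \rightarrow \Phi )\) and A2: \((\Phi \rightarrow (\Psi \rightarrow \Theta ))\rightarrow ((\Phi \rightarrow \Psi )\rightarrow (\Phi \rightarrow \Theta ))\) and modus ponens; whether or not it matches the system of that 1968 course exactly, the flavor is the same:

$$\displaystyle \begin{aligned}
&1.\ P \rightarrow ((P \rightarrow P) \rightarrow P) &&\text{A1}\\
&2.\ (P \rightarrow ((P \rightarrow P) \rightarrow P)) \rightarrow ((P \rightarrow (P \rightarrow P)) \rightarrow (P \rightarrow P)) &&\text{A2}\\
&3.\ (P \rightarrow (P \rightarrow P)) \rightarrow (P \rightarrow P) &&\text{modus ponens, 1, 2}\\
&4.\ P \rightarrow (P \rightarrow P) &&\text{A1}\\
&5.\ P \rightarrow P &&\text{modus ponens, 3, 4}
\end{aligned}$$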

It can be argued that the introduction and elimination rules are epistemically primitive, in the sense that they cannot be broken down into smaller steps. This seems to be the key notion underlying Frege’s own development of logic, to provide gap-free chains of reasoning.

Another relevant distinction is one that Corcoran has written on extensively, early and often, and impressed on his students. Some deductive systems are designed to codify and systematize the set of logical truths. These he calls “logistic systems.” Others aim to codify or represent the notion of logical consequence, the relation that holds between a set of premises and a proposition that follows from them. Technically, a Frege–Church system and a natural deduction system can do either job. In a Frege–Church system, we just allow premises into deductions. A formula \(\Phi \) is a consequence of a set \(\Gamma \) if there is a sequence of formulas, each of which is either an axiom, a member of \(\Gamma \), or follows from previous members by one of the rules of inference. Although there is no play with assumptions to be discharged, the notion of consequence is captured, nonetheless. Conversely, in a decent natural deduction system, a formula is logically true if it is the last line of a deduction with no undischarged assumptions.

The equivalence between Frege–Church and natural deduction systems facilitates logical theory. Arguably, it is better to introduce students to logic via natural deduction. The exercises that they do reflect the ways that they actually reason (or should reason). However, the meta-theory sometimes goes smoother with Frege–Church systems. Since there are only a few rules of inference—typically just one—the inductions have fewer cases to deal with.

Still, it seems that Frege–Church systems are more closely allied with the simpler goal of codifying logical truth. Frege, the grandfather of modern logic, had no truck with the modern notion of logical consequence. For Frege, one can only “infer” from truths to truths. Moreover, some Frege–Church systems have rules of inference that are valid for the purpose of codifying logical truth but not valid for logical consequence. One of these is a rule of substitution, which allows one to uniformly replace a sentence letter with any well-formed formula. So one can go from \(P\rightarrow P\) to \((Q\wedge R)\rightarrow (Q\wedge R)\). This would be a disaster in natural deduction, which is designed to capture the broader notion of logical consequence. In modal logic, another example is the rule of necessitation: from \(\Phi \) infer \(\square \Phi \). This is valid if one is codifying (the modal equivalent of) logical truth. If \(\Phi \) is a logical truth, then, arguably, so is \(\square \Phi \). Clearly, the rule of necessitation is invalid for logical consequence: a contingent truth \(\Phi \) does not entail \(\square \Phi \).

It is my impression that by the early 1970s, the overall community of logicians was in transition to the consideration of natural deduction systems. It was, in part, due to Corcoran’s influence that the notion of logical consequence was coming to the fore and, with that, the attention on natural deduction. For a while, modal logic was an exception. Modal logicians continued to use rather convoluted Frege–Church style systems, with rules of substitution and necessitation. The derivations were clever, but the unnatural systems (pun intended) were obscuring genuine insight for many of us. Some could not see the forest for the trees. Moreover, modal logicians were primarily concerned with (modal) logical truth and not with logical consequence. Corcoran and Weaver rectified that situation, providing a natural and insightful natural deduction system, first for S5 and then for S4. What was needed, of course, were introduction and elimination rules for the necessity operator. The elimination rule is obvious: from \(\square \Phi \), infer \(\Phi \), with the same assumptions. This applies to both S4 and S5.

Say that a formula \(\Phi \) is modal if it is constructed from formulas in the form \(\square \Psi \) and the propositional connectives. The introduction rule for S5 is this: from \(\Phi \), infer \(\square \Phi \) on the same assumptions, provided that all assumptions are modal. This introduction rule provides the proper generalization of the rule of necessitation in Frege–Church systems codifying logical truth. The correct rule of necessitation is the special case of the introduction rule when there are no undischarged premises.
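To see the rule in action, here is a sketch (my rendering, not a quotation from [13]) of a deduction of the characteristic S4 axiom, \(\square P \rightarrow \square \square P\), in the S5 system:

$$\displaystyle \begin{aligned}
&1.\ \square P &&\text{assumption (a modal formula)}\\
&2.\ \square \square P &&\square\text{-introduction, 1; every undischarged assumption is modal}\\
&3.\ \square P \rightarrow \square \square P &&\rightarrow\text{-introduction, discharging assumption 1}
\end{aligned}$$

The same deduction goes through under the S4 introduction rule given below, since the lone assumption is of the form \(\square \Psi \).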

In light of the Kripkean possible-worlds framework, the soundness of the introduction rule for S5 is manifest. In a given structure, a modal formula has the same truth value in all worlds. Suppose that a formula \(\Phi \) follows from a set \(\Gamma \) of modal formulas. If each member of \(\Gamma \) is true at the actual world of a model, then it is true at every world of that model. So \(\Phi \) holds at every world, and so \(\square \Phi \) is true in the actual world (and so in every world) of the model.

The 1969 paper [13] provides soundness and completeness proofs for the system, extending Kripke’s pioneering work in the right direction. It provides elegant Henkin-style models for consistent sets of axioms. One fallout of the techniques is a theorem that iterated modality is dispensable: each formula is logically equivalent to one in which no box falls within the scope of another box.
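The familiar S5 reduction laws, which the theorem generalizes, give the flavor (writing \(\Diamond \) for \(\neg \square \neg \)):

$$\displaystyle \begin{aligned} \square \square \Phi \leftrightarrow \square \Phi, \qquad \Diamond \square \Phi \leftrightarrow \square \Phi, \qquad \square \Diamond \Phi \leftrightarrow \Diamond \Phi, \qquad \Diamond \Diamond \Phi \leftrightarrow \Diamond \Phi. \end{aligned}$$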

The introduction rule for S4 is this: from \(\Phi \) infer \(\square \Phi \), provided that the undischarged premises are all of the form \( \square \Psi \). The soundness of this rule follows from the fact that S4 is the logic of frames with a reflexive and transitive accessibility relation. The 1974 sequel [14] provides soundness and completeness results and shows a nice sensitivity to the relationship between weak and strong completeness.

The S4 system captures idealized knowability. In these terms, the introduction rule is manifest. If \(\Phi \) follows from some knowable premises, then \(\Phi \) is itself knowable. The hidden assumption is that the validity of the rules of inference of the modal logic is itself knowable.

In my dissertation, on computability, I found reason to work with the notion of knowability in the context of arithmetic. I had settled on S4 but was having trouble understanding the notion of consequence, due in part to the convoluted systems in the literature. Corcoran gave me a reprint of [14], which I still have. It made all the difference. The quantifier rules I needed became obvious, at least to me. It was not the first time, nor the last time, that I wandered into what I thought was a new area of philosophy or mathematics, only to find that Corcoran was there already.

Corcoran’s paper, entitled “Gaps between logical theory and mathematical practice” [7], published in 1973, concerns features of mathematical practice that were not treated adequately by logical theory at the time. One of these gaps was closed in the next two papers on our list. A “variable-binding term operator” is like a quantifier in that it binds a variable in a formula, but, unlike a quantifier, the result is a term and not another well-formed formula.

A standard example is the definite description operator: If there is only one object that satisfies a formula \(\Phi (x)\), then \(\iota x\Phi (x)\) denotes that object. Another is Hilbert’s epsilon operator, which acts like a choice function: \(\epsilon x\Phi (x)\) denotes an object that satisfies \(\Phi (x)\), if there is one. The third example is the minimalization operator in arithmetic: \(\mu x\Phi (x)\) denotes the smallest number that satisfies \(\Phi (x)\), if there is one. Operators like these appear in mathematics texts, and so it is natural that logicians should be interested in them.
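Of the three, the minimalization operator will be familiar from computability theory, where it is rendered as unbounded search. A minimal sketch (my illustration, with an artificial cutoff so that a failing search terminates):

```python
def mu(phi, cutoff=10**6):
    """Return the least natural number n such that phi(n), if there is one.

    The cutoff is an artifact of the sketch: genuine minimalization
    searches without bound and simply fails to denote when nothing
    satisfies phi.
    """
    for n in range(cutoff):
        if phi(n):
            return n
    return None  # denotation fails


print(mu(lambda n: n * n > 50))  # 8, the least n whose square exceeds 50
print(mu(lambda n: n != n))      # None: "the least n such that n != n" does not denote
```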

As noted in the “Gaps” article, there are three ways to think of variable-binding term operators. One is to follow Russell’s 1905 account of definite descriptions [23] and provide elaborate translation schemes in order to paraphrase the operator away. On Russell’s view, a definite description is not a singular term. Upon analysis, the definite description disappears. Russell’s example is

the father of Charles II was executed.

This asserts that there was an x who was the father of Charles II and was executed. Now the, when it is strictly used, involves uniqueness … Thus when we say “x was the father of Charles II” we not only assert that x had a certain relation to Charles II, but also that nothing else had this relation … Thus, “the father of Charles II. was executed” becomes:

It is not always false of x that x begat Charles II and that x was executed and that ‘if y begat Charles II, y is identical to x’ is always true of y.

It is straightforward to render Russell’s analysis into contemporary notation. Let Bxy stand for “x begat y,” let Ex stand for “x was executed,” and let c stand for Charles II. Then “the father of Charles II. was executed” becomes

$$\displaystyle \begin{aligned} \exists x(Bxc \wedge Ex \wedge \forall y(Byc \rightarrow y=x)). \end{aligned}$$

Although rigorous, Russell’s technique is rather awkward. Moreover, it results in various ambiguities concerning the scope of the operator. “The present king of France is not bald” can be read as either the falsehood that “there is exactly one x such that x is the present king of France, and x is not bald” or the truth that “it is not the case that there is exactly one x such that x is the present king of France and x is bald.”
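With \(Kx\) abbreviating “\(x\) is a present king of France” and \(Bx\) abbreviating “\(x\) is bald” (my abbreviations), the two readings are:

$$\displaystyle \begin{aligned} \exists x(Kx \wedge \forall y(Ky \rightarrow y=x) \wedge \neg Bx) \qquad \text{versus} \qquad \neg \exists x(Kx \wedge \forall y(Ky \rightarrow y=x) \wedge Bx). \end{aligned}$$

In the first, the description takes wide scope over the negation; in the second, narrow scope.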

A more straightforward approach to variable-binding term operators is to take expressions that begin with them as genuine singular terms, linguistic items that purport to denote objects. As Russell notes, however, sometimes the denotation fails. What if there was no father of Charles II? In the case of Hilbert’s epsilon operator or the minimalization operator, what are we to make of the cases in which the embedded formula \(\Phi (x)\) is not satisfied by anything? What is the least number n such that \(n\neq n\)?

According to what Corcoran and Herring call the “mathematical” approach, non-denoting singular terms can be formed. It is clear that the informal language of mathematics, not to mention ordinary discourse, has such items. Consider, say, the singular term “\(3\div 0\).” In effect, non-denoting singular terms constitute—or constituted—another potential “gap” between mathematical practice and logical theory.

On the “classical” approach to variable-binding term operators developed in these two papers, an expression headed by a variable-binding term operator is a genuine singular term that always denotes something (i.e., in every model). In the cases where, intuitively, denotation fails, an arbitrary denotation is assigned. So, for example, if there are no \(\Phi \)’s, then \(\mu x\Phi \) is zero, and \(\epsilon x\Phi \) denotes an arbitrary object. Hilbert himself adopted this “classical” approach. He noted that the quantifiers are definable in terms of the \(\epsilon \)-operator. For example, \(\exists x\Phi (x)\) is equivalent to \(\Phi (\epsilon x\Phi (x))\).

Another option developed in these papers is to treat variable-binding term operators as nonlogical, in the sense that their meaning varies from model to model. As far as I can tell, however, nothing serious turns on this.

The 1971 paper [12] improves on a previous analysis of variable-binding term operators. Corcoran and Herring carefully and clearly show what goes wrong with the prior theory and then show how to fix it. The 1972 sequel [11] settles a conjecture proposed in the first paper, showing that the resulting system is sound and complete. The authors show how to extend standard techniques for proving completeness to cover the new terms. Since the terms are syntactically complex, embedding whole formulas, the inductions get a bit tricky.

Newton da Costa [16] and da Costa and Christopher Mortensen [17] develop the theory further, extending it to modal logic, higher order logic, and the like. The extent to which this extensive work builds on Corcoran and Herring’s is made clear throughout. One extension of the notion is to cases where the resulting “terms” do not denote objects in the range of first-order variables, but rather higher type items. The paradigm is the now-standard \(\lambda \)-notation. If x is a first-order variable and \(\Phi \) a formula containing x free, then \(\lambda x\Phi \) is a property (or predicate). For each term t, \(\lambda x\Phi [t]\) is equivalent to the result of substituting t for free occurrences of x in \(\Phi \).
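The \(\lambda \)-notation is, of course, the ancestor of anonymous functions in modern programming languages, where the equivalence between \(\lambda x\Phi [t]\) and substitution is just function application. A trivial illustration (my analogy, not from the papers under discussion):

```python
# The term (λx. x*x) applied to 3 equals the result of substituting 3 for x.
square = lambda x: x * x
assert square(3) == 3 * 3
```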

A potential application of higher order variable-binding term operators occurs in the abstractionist neo-logicist program initiated by Bob Hale and Crispin Wright (see [20]). The basic plan is to develop branches of established mathematics using abstraction principles in the form:

$$\displaystyle \begin{aligned} \forall a \forall b(\Sigma(a)=\Sigma(b) \leftrightarrow E(a,b)), \qquad \text{(ABS)} \end{aligned}$$

where a and b are variables of a given type (typically individual objects or properties), \(\Sigma \) is a higher order operator, denoting a function from items of the given type to objects in the range of the first-order variables, and E is an equivalence relation over items of the given type.

Frege [18, 19] employed three principles in the form (ABS). One of them, used for illustration, comes from geometry:

The direction of \(l_1\) is identical to the direction of \(l_2\) if and only if \(l_1\) is parallel to \(l_2\).

The second was dubbed N\({ }^=\) by Wright [28] and is now called Hume’s principle:

$$\displaystyle \begin{aligned} (\#F = \#G) \leftrightarrow (F\approx G), \end{aligned}$$

where \(F\approx G\) is an abbreviation of the second-order statement that there is a one-to-one relation mapping the F’s onto the G’s. Hume’s principle states that the number of F is identical to the number of G if and only if F is equinumerous with G. Frege’s key works [18, 19] contain the essentials of a derivation of the Dedekind–Peano postulates from Hume’s principle. This deduction, now called Frege’s theorem, reveals that Hume’s principle entails that there are infinitely many natural numbers. It is generally agreed that this is a powerful mathematical theorem. Who would have thought that so much could be derived from such a simple, obvious truth about cardinality?
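For a taste of the bootstrapping, here is the standard rendering of Frege’s definitions (standard in the neo-logicist literature; I am not quoting [18, 19]):

$$\displaystyle \begin{aligned} 0 := \#[x : x \neq x], \qquad 1 := \#[x : x = 0], \qquad 2 := \#[x : x = 0 \vee x = 1], \quad \ldots \end{aligned}$$

Hume’s principle proves each of these distinct from all of its predecessors, since the concepts in question are not equinumerous, and so the sequence of natural numbers never runs out.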

The third example is the infamous Basic Law V:

$$\displaystyle \begin{aligned} (EF = EG) \leftrightarrow \forall x(Fx\leftrightarrow Gx). \end{aligned}$$

Unlike Hume’s principle, of course, Basic Law V is inconsistent.

An alternative to abstraction principles would be to invoke variable-binding term operators. Basic Law V, for example, would become the scheme:

$$\displaystyle \begin{aligned} (Ex\Phi = Ex\Psi) \leftrightarrow \forall x(\Phi \leftrightarrow \Psi), \end{aligned}$$

one instance for each pair of formulas each with x free. There are first-order and second-order versions of the resulting theory.

Hume’s principle would be

$$\displaystyle \begin{aligned} (\#x\Phi = \#x\Psi) \leftrightarrow \text{EQU}(\Phi,\Psi), \end{aligned}$$

where EQU(\(\Phi ,\Psi \)) is the (second-order) statement that the \(\Phi \)’s can be mapped one-to-one onto the \(\Psi \)’s.

The model theory of the variable-binding versions of the theories is quite different from the abstractionist counterparts. In the former, the range of the operators (number and extension) is limited to properties that are definable in the relevant language, while, at least with standard semantics, abstraction principles apply to every property on the domain. The deductive strength of the variable-binding versions is also weaker, since there are no quantifiers ranging over the relevant terms—extensions or numbers, for example. Apparently, there is no way to define what it is to be an extension or a number in the variable-binding versions. So the argument leading to Russell’s paradox cannot be formulated in the variable-binding version of the Fregean theory of extensions. Da Costa explores some consequences of this version of Frege’s Basic Law V. It would be interesting to explore the consequences of the variable-binding version of Hume’s principle. Evidently, one can show that there are infinitely many numbers, but as far as I know, it is open just how much arithmetic can be recaptured in the variable-binding theory.

The next item in our list is [4], Corcoran’s “Completeness of an ancient logic.” He had previously argued that the logical system developed in Aristotle’s Prior Analytics is best understood as a natural deduction system. Timothy Smiley [25] came to a similar view independently. Before this work, the prevailing wisdom was that Aristotle had presented a logistic system, codifying logical truths with complex forms, like “if P is predicated of all Q and Q is predicated of some R, then P is predicated of some R.” Corcoran and Smiley see the relevant text as aimed at inference, in this case from “P is predicated of all Q” and “Q is predicated of some R” to “P is predicated of some R.” Corcoran and Smiley thus see Aristotle as concerned more directly with correct reasoning.

In Chapter 2 of Book 1 of the Prior Analytics, Aristotle writes:

A syllogism is a discourse in which, certain things having been supposed, something different from the things supposed results of necessity because these things are so. By “because these things are so,” I mean “resulting through them” and by “resulting through them,” I mean “needing no further term from outside in order for the necessity to come about.”

Corcoran provides a straightforward reading of this as giving an account of logical consequence. As it happens, on the Corcoran–Smiley reading, there are no logical truths in Aristotle’s system. It has no rules for discharging assumptions, and propositions in forms like “P is predicated of all P” cannot be formulated in the system.

There is a longstanding view that logical consequence is a matter of form. As far as I know, Aristotle does not explicitly say this, but his work in logic seems to presuppose it. He sometimes presents “syllogisms” by just giving the forms of the propositions in them. Moreover, to show that a given conclusion does not follow from a given pair of premises, Aristotle typically gives an argument in the same form with true premises and false conclusion. It is straightforward to interpret these passages as presupposing that if an argument is valid, then every argument in the same form is valid.

Aristotle’s text thus at least suggests a formal language, along with a deductive system and a model-theoretic semantics. Corcoran provides details. He interprets the predicate symbols—the Greek letters—to stand for non-empty sets (in light of so-called existential import). Once things are put this clearly, the questions of soundness and completeness come immediately to mind, at least to the contemporary reader. Sometimes major conceptual breakthroughs come from simply formulating the system in the right way. In this case, once the proper interpretation, and thus formalization, is in place, the questions of soundness and completeness are readily answered. Soundness is immediate and completeness is straightforward. Corcoran’s research on this topic is a triumph of mathematical logic as an instrument to objectively understand and evaluate logical theorizing of former times.

Although Corcoran does not make it explicit, the agenda of his note “Strange arguments” [5] is to counter the arguments of relevance logicians. Along the lines of the above passage from Aristotle, it is widely held today that an argument is valid if and only if it is not possible for its premises to be true and its conclusion false—where this necessity is understood in terms of the meaning of the logical terminology and the form of the argument. This is sometimes glossed as follows: An argument is valid if and only if its conclusion is true under every interpretation in which the premises are true. The model theory is based on such notions.

Consider the following arguments:

The plane will land in either Los Angeles or San Francisco.

The plane will not land in Los Angeles.

The plane will not land in San Francisco.

Therefore, the parallel postulate is true.

A Republican will win the next Presidential Election.

Therefore, if there is life on Mars, then there is life on Mars.

Both arguments are valid, according to the above definitions. For the first, there is no interpretation that makes the premises true and the conclusion false just because there is no interpretation that makes the premises true. For the second, there is no interpretation that makes the premises true and the conclusion false just because there is no interpretation that makes the conclusion false.
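Facts like these are easy to check mechanically. Here is a brute-force sketch (my illustration, with formulas represented as Boolean functions on truth assignments):

```python
from itertools import product


def interpretations(letters):
    """Yield every truth assignment to the given sentence letters."""
    for values in product([False, True], repeat=len(letters)):
        yield dict(zip(letters, values))


def valid(premises, conclusion, letters):
    """Valid iff no interpretation makes all premises true and the conclusion false."""
    return all(conclusion(v) or not all(p(v) for p in premises)
               for v in interpretations(letters))


# The first argument: L = lands in Los Angeles, S = lands in San Francisco,
# R = the parallel postulate. Premises and conclusion share no letters.
letters = ["L", "S", "R"]
premises = [lambda v: v["L"] or v["S"],
            lambda v: not v["L"],
            lambda v: not v["S"]]
conclusion = lambda v: v["R"]

print(valid(premises, conclusion, letters))    # True: the argument is valid
print(any(all(p(v) for p in premises)
          for v in interpretations(letters)))  # False: the premises are unsatisfiable
```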

As logic teachers know, students introduced to logic typically balk when they are told, or even shown, that arguments like these are valid. How can an argument be valid—how can the reasoning be good—when the premises have nothing to do with the conclusion? How can we reason from propositions about airports to a proposition from pure geometry, for example? A dedicated and passionate school of logicians attempts to sustain this intuition (see, for example, [1, 2], or [22]). The above arguments, they claim, are fallacies of relevance, and they work to design deductive systems free of such “fallacies.” The conception of validity, they claim, requires the premises to be relevant to the conclusion.

Relevant deductive systems are paraconsistent, in that they reject the validity of the inference that anything follows from a contradiction, sometimes (inaccurately) called ex falso quodlibet:

from \(\Phi \) and \(\neg \Phi \), infer \(\Psi \).

There is, however, a short deduction of this conclusion from the premises, using apparently valid inferences, due to C. I. Lewis: from \(\Phi \), infer \(\Phi \vee \Psi \), by \(\vee \)-introduction. And from \(\Phi \vee \Psi \) and \(\neg \Phi \), infer \(\Psi \), by disjunctive syllogism. Each relevance logician finds fault with this little argument. Most of them reject disjunctive syllogism, and a small minority of them reject \(\vee \)-introduction. And at least one of them accepts both \(\vee \)-introduction and disjunctive syllogism and demurs from the transitivity of deduction [27]. These philosophical moves clash with firmly held intuitions.
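Set out as a derivation, the Lewis argument is just this:

$$\displaystyle \begin{aligned}
&1.\ \Phi &&\text{premise}\\
&2.\ \neg \Phi &&\text{premise}\\
&3.\ \Phi \vee \Psi &&\vee\text{-introduction, 1}\\
&4.\ \Psi &&\text{disjunctive syllogism, 2, 3}
\end{aligned}$$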

The inferences in the Lewis argument seem to be beyond reproach. Most logicians, including Corcoran, accept the Lewis argument and have learned to live with the validity of “irrelevant” inferences, accepting the foregoing account of validity. There are no fallacies of relevance.

The result proved in the present note diagnoses the situation, at least from the perspective of the non-relevance logician. It shows why students are baffled by arguments like the above. Corcoran defines an argument form, in a sentential language, to be “strange” if it is valid, but its premises have no sentence letters in common with its conclusion. A set of premises is “strange” if there is no interpretation in which it is true, and a conclusion is “strange” if there is no interpretation on which it is false. Corcoran then easily shows that every strange argument either has strange premises or a strange conclusion. This suggests that the “fault” with arguments like the above—one of the so-called fallacies of relevance—lies with their premises or their conclusions and not with the notion of validity used in assessing them.

In the same article, Corcoran sketches the generalization of the main result to a predicate language, and he relates the result to some deeper theorems like the Craig interpolation lemma. The article is short and modest, but careful and complete.

Our next item [6] is also a short note. It bears on Corcoran’s distinction, noted above, between codifying the collection of logical truths in a given formal language and codifying the valid inferences in the indicated formal language. Logicians have thus formulated two notions of soundness and two notions of completeness. A logic is weakly sound if every theorem (with no undischarged premises) is logically true, and a logic is weakly complete if every logical truth in the formal language is a theorem. A logic is strongly sound, or simply sound, if every deducible argument is valid, and strongly complete, or simply complete, if every valid argument form in the relevant language is deducible.

Since a sentence is logically true if and only if it is a logical consequence of the empty set, strong soundness implies weak soundness and strong completeness implies weak completeness. Suppose that a more or less standard deductive system sanctions the so-called deduction theorem, either as the primitive rule of \(\rightarrow \)-introduction or as a derived rule: If \(\Gamma ,\Phi \vdash \Psi \), then \(\Gamma \vdash (\Phi \rightarrow \Psi )\). Then weak soundness implies strong soundness. Suppose that a more or less standard deductive system is compact and has the rule of \(\rightarrow \)-elimination, sometimes called modus ponens. Then weak completeness implies strong completeness.
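The completeness direction, for example, runs along the following standard lines (a sketch, not quoted from [6]):

$$\displaystyle \begin{aligned} \Gamma \vDash \Psi \ \Rightarrow\ \{\Phi_1, \ldots, \Phi_n\} \vDash \Psi \ \Rightarrow\ \vDash \Phi_1 \rightarrow (\cdots \rightarrow (\Phi_n \rightarrow \Psi)) \ \Rightarrow\ \vdash \Phi_1 \rightarrow (\cdots \rightarrow (\Phi_n \rightarrow \Psi)) \ \Rightarrow\ \Gamma \vdash \Psi, \end{aligned}$$

by compactness, the semantics of the conditional, weak completeness, and \(n\) applications of \(\rightarrow \)-elimination, in that order.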

Typically, weak and strong soundness stand or fall together, as do weak and strong completeness. Aristotelian logic is both strongly sound and strongly complete, as above; sentential logic and first-order predicate logic are strongly (and hence weakly) complete, and second-order logic with standard semantics is neither strongly nor weakly complete. One notable exception is first-order logic augmented with a quantifier “there are uncountably many.” This logic is weakly complete but not strongly complete.

The paper provides a clever technique to pull weak completeness and strong completeness apart. Corcoran provides a method that weakens the deriving power of a deductive system, so to speak, while leaving intact its ability to prove theorems. If one begins with a strongly (and weakly) complete deductive system, the result is weakly complete but not strongly complete. Given the nature of the method, the foregoing properties are obvious—no fancy techniques are needed to establish them. The idea, in effect, is to restrict the rules of inference to syntactically complex formulas. Thus, for example, modus ponens is not sanctioned in its full generality, in the resulting system. The result is that the inference \(A \vdash A\) is not derivable.

In “String Theory” [10], Corcoran, Frank, and Maloney bring together a number of different themes in the philosophy of logic and mathematics. The structural affinity between natural numbers and strings of characters has been fruitfully exploited for some time. The advent of mathematical rigor in logic led to the ability to treat formal systems themselves as mathematical objects. The structural connection between natural numbers and strings suggested that techniques from arithmetic could be employed.

Consider, for example, the Hilbert program (e.g., [21]). Finitary mathematics was regarded as safe, free from philosophical and substantial logical presuppositions. But what is the subject matter of finitary arithmetic? Hilbert went so far as to identify natural numbers with certain strings, claiming that our acquaintance with such things is manifest and unproblematic. Following Kant, Hilbert held that knowledge of strings is a presupposition of all knowledge. So there is no more secure standpoint than that of finitary arithmetic. It is the basis from which we can assess any other endeavor.

The legitimacy of a branch of higher mathematics, which does not enjoy this firm foundational status, is gained by formalizing the branch and then showing, in finitary arithmetic, that it is consistent. The major insight was that the statement that a formal deductive system is consistent is itself equivalent to a statement about natural numbers, and such statements should be amenable to proof in finitary arithmetic. Although the incompleteness theorem showed that the foundational dream could not be accomplished for any theory as strong as arithmetic, the structural insights remained. Proof theory is a flourishing branch of logic and philosophy.

Logicians with an eye toward rigor thus set out to axiomatize the theory of strings (on a finite alphabet). Two such axiomatizations gained prominence. One has the characters of the alphabet as primitive singular terms and employs a concatenation function (or relation). The other begins with the null string and has a finite number of character-prefixing operations. All of the theories are second-order, including an appropriate induction axiom.

That is one theme or group of themes that is addressed in [10]. Another relates to Corcoran’s longstanding interest in definition. Let \(T_1\) and \(T_2\) be theories, possibly with different primitives. Say that \(T_1\) is interpretable in \(T_2\) if there is an effective function f from the sentences of \(T_1\) to the sentences of \(T_2\), such that if \(\Phi \) is an axiom of \(T_1\), then \(f\Phi \) is a theorem of \(T_2\). Usually it is also required that the function commute with the connectives and that it preserve the primitive rules of inference.

Theories \(T_1\) and \(T_2\) are mutually interpretable if each is interpretable in the other. In an earlier abstract, Corcoran announced a result that mutual interpretability is a rather weak relation and does not represent the intuitive notion of equivalence.

Two theories are definitionally equivalent (the relation was then called “synonymy”) if they are mutually interpretable and, in addition, each theory can recapture the “definitions” of its sentences in the other language. That is, a given sentence, in either language, should be provably equivalent to the result of translating the sentence into the other language and then translating the result back.

Formally, let f be the function that interprets \(T_1\) in \(T_2\) and let g be the function that interprets \(T_2\) in \(T_1\). Then if \(\Phi \) is any sentence in the language of \(T_1\), then \(\Phi \leftrightarrow gf\Phi \) should be a theorem of \(T_1\), and if \(\Psi \) is any sentence in the language of \(T_2\), then \(\Psi \leftrightarrow fg\Psi \) should be a theorem of \(T_2\).

If the theories \(T_1\) and \(T_2\) do not share any vocabulary, then they are definitionally equivalent if there is a third theory \(T_3\) which contains the primitive vocabulary of both and can be gotten from each by suitable definitions. That is, \(T_3\) is an extension by definitions of both \(T_1\) and \(T_2\).

One of the main results of [10] is that the aforementioned axiomatizations of string theory are definitionally equivalent to each other and that each is definitionally equivalent to second-order Peano arithmetic. For example, a string theory formulated with n primitive characters (invoking concatenation) is definitionally equivalent to a string theory with m character-prefixing operations, even if \(n\neq m\), and each such theory is definitionally equivalent to Peano arithmetic.
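The semantic counterpart of the equivalence is the familiar bijection between the strings over a \(k\)-letter alphabet and the natural numbers, via bijective base-\(k\) numeration. A quick sketch (my illustration; this is not the interpretation constructed in [10]):

```python
def string_to_number(s, alphabet):
    """Map a string over the alphabet to a natural number (empty string -> 0)."""
    k = len(alphabet)
    n = 0
    for ch in s:
        n = n * k + alphabet.index(ch) + 1
    return n


def number_to_string(n, alphabet):
    """The inverse map: natural number -> string (bijective base-k)."""
    k = len(alphabet)
    chars = []
    while n > 0:
        n, r = divmod(n - 1, k)
        chars.append(alphabet[r])
    return "".join(reversed(chars))


alphabet = ["a", "b"]
for n in range(8):  # 0 '', 1 'a', 2 'b', 3 'aa', 4 'ab', 5 'ba', 6 'bb', 7 'aaa'
    assert string_to_number(number_to_string(n, alphabet), alphabet) == n
```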

This provides a rigorous background to the above-noted structural connection between strings and natural numbers. Moreover, the categoricity of each of the string theories follows from the categoricity of Peano arithmetic.

Another sub-theme of [10] is the role of second-order logic in the foundations of mathematics and logic. As noted, the various string theories are second-order, and the various meta-theorems all make essential use of the comprehension scheme. As indicated by the limitative theorems, there is no question of categoricity for first-order theories with infinite models.

This brings us to the period when Corcoran was too busy teaching me to publish any serious mathematics. The hiatus ends with the appearance of a short note, Corcoran and Ziewacz [15], “Identity logics.” This paper is purely pedagogical. It presents a logic—formal language, deductive system, and model theory—for a language whose formulas are identities and non-identities: sentences of the form \(a=b\) and \(a\neq b\). There are no connectives or quantifiers. Soundness and completeness results are readily obtained.
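The model theory invites a simple decision procedure: compute the equivalence classes generated by the premised identities, and read consequences off the classes. A sketch (my illustration; the algorithm is not taken from [15]):

```python
class UnionFind:
    """Equivalence classes of names, generated by the premised identities."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)


def follows(premises, conclusion):
    """Decide consequence; formulas are triples ('=', a, b) or ('!=', a, b)."""
    uf = UnionFind()
    for op, a, b in premises:
        if op == "=":
            uf.union(a, b)
    # Inconsistent premises entail everything.
    if any(op == "!=" and uf.find(a) == uf.find(b) for op, a, b in premises):
        return True
    op, a, b = conclusion
    if op == "=":
        return uf.find(a) == uf.find(b)
    # a != b follows iff some premised inequality links the classes of a and b.
    return any(o == "!=" and {uf.find(c), uf.find(d)} == {uf.find(a), uf.find(b)}
               for o, c, d in premises)


premises = [("=", "a", "b"), ("!=", "b", "c")]
print(follows(premises, ("!=", "a", "c")))  # True
print(follows(premises, ("=", "a", "c")))   # False
```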

The note proposes that such systems can be helpful, alongside either the more traditional Aristotelian or sentential systems, in introducing students to the basics of logic and meta-logic. Identity logics directly highlight features relevant to logical theorizing, with less in the way of distractions.

The extensive paper “Categoricity” [8] is a delightful symphony of philosophical insight and argument, rigorous mathematics and meta-mathematics, and historical reflection. Its purpose is to outline the importance of categoricity in the history and philosophy of mathematics. A categorical characterization marks a success in the goal of describing a mathematical structure up to isomorphism—which is the best one can do with a formal language anyway.

As is well known, however, no infinite structure has a categorical characterization in a first-order language. So resources beyond first-order languages are needed for the purpose of describing an infinite structure up to isomorphism. This, of course, raises a host of philosophical issues concerning the role and nature of logic.

The paper delimits a minimal semantic system that allows for categorical formulations of mathematical structures like that of the natural numbers and the real numbers. Corcoran’s “slightly augmented first-order languages” have a single, monadic predicate variable which can only occur free in a formula or, equivalently, can only be bound by a universal quantifier whose scope is the entire formula. In my book on second-order logic [26], I explore the expressive resources of languages that are only slightly stronger, allowing arbitrary free second-order variables. Central mathematical notions, like finitude and minimal closure, can be defined, or at least axiomatized, in such languages. The indicated logic is weakly complete, but not strongly complete.

Arguably, languages of this strength are needed to formulate the semantics of ordinary, first-order languages. In effect, the standard definitions of logical truth and logical consequence treat the nonlogical terminology as free, higher order variables. Another key result in [8] is that a theory consisting of an induction principle plus axioms that settle the truth-value of every atomic formula is categorical. A number of central applications are noted.
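For illustration, in a slightly augmented language for arithmetic, with \(X\) the free monadic predicate variable, the induction principle takes the form:

$$\displaystyle \begin{aligned} (X0 \wedge \forall x(Xx \rightarrow Xsx)) \rightarrow \forall x\, Xx. \end{aligned}$$

Adding axioms that settle every atomic formula (in effect, the atomic diagram of the intended structure) then yields, by the cited result, a categorical theory of the natural numbers.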

Of course, there is more to logic than the description of structures. A central, and perhaps more basic, function is the codification of the truths of a given theory. For this, the central component is a deductive system. The incompleteness theorem entails that any theory that succeeds in describing a nontrivial infinite structure up to isomorphism fails to codify all of the truths of the theory, provided only that the deductive system be effective. So there is always a gap between the goal of description and the goal of the codification of truths. Corcoran shows that the gap can be quite large, going beyond the rather obscure sentences invoked in the incompleteness theorems. He provides a categorical axiomatization of arithmetic in which it cannot be shown that zero is not a successor.

The final item in our list [9] was written for students. Suppose we have a valid argument from a single premise \(\Phi \) to a conclusion \(\Psi \). Suppose also that \(\Psi \) does not entail \(\Phi \). Then some information has been “lost” in the inference from \(\Phi \) to \(\Psi \). The paper explores the problem of recovering—or articulating—the information that was lost. It provides a number of ways of looking at the issue and a number of resolutions to it.

As with many of the papers in our list, this one furthers Corcoran’s philosophical agenda. In this case, the relevant item is his formulation of the notion of logical consequence in terms of information. The idea is that an argument is valid if the information in its conclusion is contained in the combined information in the premises. Of course, a key element of this program is to articulate the notion of the information contained in a proposition or a set of propositions. A tautology, for example, contains no information, and a contradiction contains all information.

In Chapter 18 of [24], Russell wrote that

mathematics and logic, historically speaking, have been entirely distinct studies … But both have developed in modern times: logic has become more mathematical and mathematics has become more logical. The consequence is that it has now become wholly impossible to draw a line between the two; in fact the two are one … The proof of their identity is, of course, a matter of detail.

Of course, there are not many logicists left anymore, but there is indeed a grain of truth in Russell’s statement. Alonzo Church [3], who was not a logicist, wrote that “logic and mathematics should be characterized, not as different subjects, but as elementary and advanced parts of the same subject” (p. 332). Corcoran’s mathematical work provides all the witness one would need for the grain of truth underlying logicism.