(c) 1996 Henry A. Flynt, Jr.
Originally presented at Backworks, NY, April 30, 1982. The essay was not good enough in its 1982 form; nor can it serve as the basis of the ideal concept-art essay. The purpose of this upgrade is to provide a presentable reflection of the level of the agenda in 1982. Key concurrent manuscripts were
Concact of Colored Sheets and Acoustical Scans (1961; not typed until 1981)
Critical Notes on Personhood (1981-2)
Stagnationism, and Options of Revolution, in Physics (Jan. 1982)
As of 1982, I want to position 1961 concept art relative to the actuality of traditional mathematics. For that reason, I will devote much of this manuscript to the nature of traditional mathematics.
In the twentieth century, a formalist-positivist view concerning the nature of mathematics gained currency. This view derived from Hilbert, and has been pursued in symbolic logic and set theory. Pure mathematics is a formal game of proof trees, played with symbolic tokens, like chess.
... every well-determined mathematical discipline is a calculus in this sense. But the system of rules of chess is also a calculus. The chessmen are the symbols (here, as opposed to those of the word-languages, they have no meaning), the rules of formation determine the position of the chessmen (especially the initial positions in the game), and the rules of transformation determine the moves which are permitted--that is to say, the permissible transformations of one position into another.
Mathematics does not need to have any meaning to have its full import and validity. Mathematical truth, in the sense of whether a proof of a proposition has been produced, is purely a mechanical issue.
The preceding paragraph, written in 1982, was inadequate as an account of the metamathematical perspective and its outcome. The rudiments or nodes of the game are indeed mechanically decidable. Hilbertian mathematics is systematized in terms of impersonal, qualitatively homogeneous, discrete, permanently self-identical, rankable elements; and decidable operations thereupon. However, the most celebrated discoveries of logistic show that the system as a whole is not mechanical. A mechanically decidable game-definition produces an unknowable game. Impredicativity and undecidability.
Also, there is a doctrine of interpretation of calculi: syntax has a twin called semantics. What is not stressed enough in the literature of the field is that contents can only be assigned from the vantage-point of a metalanguage. That means that in principle only natural language can supply contents, since natural language is the only actual metalanguage.
I have recently (1982) come to the view that positivism-formalism utterly misrepresents how (non-metamathematical) mathematicians work or think. It is legitimation by reductionism.
This is not to speak of the truth or falsity of mathematics; that will not be an issue until later in this essay. Also, I am not saying that logistic is less of a discipline than the branches of mathematics, or that it has no applications. I'm saying, to repeat, that logistic misrepresents how "creative" mathematicians work or think. One who trusts logistic or metamathematics as a guide to mathematical thought is led astray.
In the class I took from Loomis at Harvard in 1958, he answered the question whether mathematics was of the nature of chess by saying that he was an excellent chess player; but that mathematics is other than chess, and far more profound. In 1967, at NYU, Peter Ungar commented on my "1966 Mathematical Studies": "You are focusing on the fact that mathematical proofs have a tree structure. But that is an extremely superficial feature of mathematics." Beyond these pronouncements, my acquaintance with the mathematician's mentality owes much to having known Dennis Johnson and John Baez.
What is the mind-set with which the "creative" mathematician works? One believes that there is a Heaven of pre-existing mathematical entities. One's mission is to discover what is going on in this Heaven. The Heaven of pre-existing content: that is the important thing.
As for proof, formalism-positivism misunderstands the role of proof and exaggerates its importance. Yes, proofs are necessary to prevent mathematicians from claiming fantastic results without any warrant. But a mathematical proof of A is no more than a conceptual indication that A is more plausible than ~A. The notion of a formal proof (in logistic) is an irrelevancy to the "creative" mathematician. One can, for example, consult Stephen Kleene, Mathematical Logic (1967), p. 210, for a formal proof of a = a. This long stack of repeated and permuted symbols, some common, some arcane, would be preposterous as a way of making a person know a = a who did not already "know" it. (It doesn't even touch the issue that self-identity has to be referred to reality-type, far more than to chess-like rules: since the rules are useless if there are not objects on earth or in Heaven which are believed to stay identical to themselves.)
K.G. Binmore, in the textbook The Foundations of Analysis, Book 1, makes an observation on proof which should be given close attention.
... although a computer may find formal proofs entirely satisfactory, the human mind needs to have some explanation of the "idea" behind the proof before it can readily assimilate the details of a formal argument. What mathematicians do in practice therefore [some mathematicians--HF] is to write out "informal proofs" which can "in principle" be reduced to lists of sentences suitable for computer ingestion. This may not be entirely satisfactory, but neither is the dreadfully boring alternative.
Here we see the clash between the actuality of mathematics and the official positivist ideology. Note especially the defensiveness of putting 'idea' in quotes. Having ideas is indecent in this positivist-behaviorist culture. (Cf. the famous statement by Ramsey to the effect that mathematics has nothing to do with language, thought, or symbolism. Clearly Ramsey regarded the latter as human foibles.) But the ideas are what the mathematician's actual mission concerns.
If the mathematician is honest with himself, his operative mind-set is probably nearer to mysticism or metaphysics than to a positivist computer. The mathematician wants to see through: to the pre-existing object in mathematical Heaven. If the proof happens to point the right way, that is a bonus. But the proof is not intended as the substance of mathematics. The infinite number of prime numbers is a good example. The important thing is to see through to the meaning of the thesis. There are various proofs, which make a show of less or more rigor, or of allegiance to one or another partisan method. To the "creative" mathematician, which proof one uses, or whether one uses any of them, is insignificant. It was said about Ramanujan, "he arrived at his results by an almost miraculous intuition, often having no idea of how they could be proved or even what an orthodox proof might be like." Moreover, there is the practice of using unproved theorems which are believed to be true as lemmas in other work. Riemann's Hypothesis.
A twentieth-century result furnishes an important lesson. Skolem proved, by a counter-example which was highly non-constructive, that the Peano postulates are not categorical. You have to accept function space to get Skolem's result. Natural numbers are not unique, relative to real analysis. Now you can conclude that the integers are not a categorical object. Or--you can conclude that the axiomatic method is a totally inappropriate and invalid way of approaching mathematical content. The Peano postulates fail to tell us anything profound about the integers.
The mathematics profession didn't accept this at the ideological level because the profession was moving toward an ideology of formalism--an ideology that the game has to be set up mechanically and without semantics. Explicit rejection of the Peano approach would return us to mysticism, it would be felt.
Nevertheless, the "creative" mathematician does bypass the formal-axiomatic approach as shallow. One accepts that one is the natural historian of an actual Heaven of mathematical objects. Like terrestrial reality, the mathematical reality may be too rich to be exhausted by definitions.
Metamathematics' commitment to the syntactical road means that its achievement is only a reductionist simulacrum for mathematics. Metamathematics is estranged from mathematics. It would be like positing John Cage's notion of experimental composition, or organization of sound, as an explication of music--given a musical canon with Beethoven at its center.
Even though the mathematician's practice may presuppose that the mathematical entities pre-exist in Heaven, I don't mean to acquiesce to that tenet. Meanwhile, the formalist triumph over mysticism brings forth a new mysticism. Faced with impredicativity and undecidability, the mathematical realm has acquired a new mystique of unknowability.
The belief in a Heaven of mathematical entities would summarily provide the traditional mathematician with the needed ontology. However, I do not concede that this belief is the operative consideration in the assembly of mathematical content.
We could seek an "anthropology" of mathematics--which would seek to capture what the mathematician does, without needing to coincide with his self-understanding. How the content of traditional mathematics is actually assembled is a subtle and tricky topic. In my view the topic has never been forthrightly acknowledged and pondered. The difficulty is exactly as I proclaimed in "Anti-Mathematics" (1980). Everybody who is intelligent enough to learn some mathematics becomes an ideologue for mathematics, claiming to possess the right mathematics. To ask what is going on independently of the partisanship is not possible.
The key, in my view, is that the actual process is not honestly mystical.
In spite of past resistance to the question, let us ponder the "anthropological" or "process" sources for the reality-characteristics of mathematical objects.
The empirical-pragmatic use of mathematics, as in money transactions, surveying and building, and astrology.
At the same time, mathematics is an abstraction not reducible to any pragmatic rationale. Already, counting is abstracted from particular species of things, and from any specific largest integer. Ancient examples: irrational numbers, very large integers, the infinity of primes announced by Euclid.
A second source of the reality-characteristics of the "pre-existing" mathematical objects is principles which are invoked in proofs. The content of the mathematical Heaven is being squeezed partly out of obscure proof principles. Most of this section will be an exploration of this content-supplying role of proofs.
The vast epistemological and ontological role in mathematics of proof by contradiction--which somebody well called truth by elimination. Claims have been made that proofs by contradiction can be replaced by constructivist proofs. But in fact proof by contradiction is the method one sees in mathematical texts and courses. I want to get at the prevailing culture here, and to eschew the practice of invoking a methodological sectarianism as a mask of legitimation. A longish exploration is required.
Proved by contradiction
infinity of primes
Fundamental Theorem of Arithmetic (?)
proof that addition of Dedekind cuts gives a proper cut
what about proof that .2000... = .1999... ??
proof that reals are uncountable -- Cantor's diagonal argument
proof that power set of S is bigger than S
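The first of these can be sketched in its familiar reductio form (the standard textbook reconstruction, not Euclid's own wording):

```latex
\textbf{Claim.} There are infinitely many primes.

\textbf{Proof by contradiction.} Suppose the primes are exactly
$p_1, p_2, \ldots, p_n$, and set $N = p_1 p_2 \cdots p_n + 1$. Each $p_i$
leaves remainder $1$ when divided into $N$, so none divides $N$; but $N > 1$,
so some prime divides $N$. That prime lies outside the list
$p_1, \ldots, p_n$: contradiction. $\square$
```

Every item on the list has this shape: ~A is assumed, a contradiction is extracted, and A is thereby declared established.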
What is suspect is that when people prove A by contradiction, they only ask whether [not]A yields a contradiction. One proves A by showing that [not]A produces a contradiction. If you succeed in thus establishing an intuitively popular thesis, there is great peer coercion from the mathematics fraternity not to investigate the question whether A yields a contradiction. (Both [not]A and A might yield contradictions.)
My point here is not anything as straightforward as claiming achievability of an inconsistency proof for the theory--although classically that would follow. I am charging that mathematical culture is more devious than that. Formal consistency of the system as formal system is a spurious issue, a mere fig leaf. I am suggesting that the entire meta-theory is inconsistent. A flaw could be found in any direction in which the adepts were prepared to be adversarial. Whether to call a particular path consistent or inconsistent is a matter of attitude, or autosuggestive deformation of logical norms.
Let me invent a case. Suppose we accept the traditional culture of proof by contradiction. Suppose further that we are back at the beginning of mathematical history with Pythagoras or Euclid; we cannot deploy twentieth-century conceptions to escape a difficulty. Let me assert:
There are only finitely many positive integers.
Proof by contradiction. Assume an infinite number of positive integers. Write them in this order:
1, 3, 5, ..., 2, 4, 6, ... .
Try to count them:
*1 3 5 ... 2 4 6 ...
1 2 3 ...
There aren't enough whole numbers to count themselves: Contradiction. I claim that
Once a contradiction has been extracted from the assumption that there are infinitely many integers, I forbid you to look for a contradiction from the proposition that there are finitely many integers. Now the finiteness of the number of primes is assured without further investigation.
Twentieth century mathematics gets around the difficulty
All the important theses of mathematics have the potential to yield contradiction. The profession uses peer coercion to invent a sophistical exoneration for one side of the contradiction, and to enforce it.
Does "There are only finitely many positive integers" give a contradiction? Supposedly yes, because if n is the largest integer, then there is a different, larger one n + 1. n + 1 > n.
But why not invent a casuistry (or a legitimate conceptualization) which escapes this embarrassment? Note primitive enumeration: one, ..., ten, indefinitely many (notated [mu]). Now it is correct that [mu] + 1 = [mu]; but [mu] [not equal] [infinity]. ([mu] is a finite additive "absorber"; as 0 is a finite multiplicative absorber. That means in turn that if we cancel [mu] in [mu] + 1 = [mu] we get 1 = 0. So we have to add a law forbidding cancellation of "indefinitely many." Etc.)
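The casuistry just proposed can be displayed as a small formal sketch (my own illustrative notation, following the text's [mu]):

```latex
\begin{aligned}
  &\mu + 1 = \mu, \qquad \mu \neq \infty
    && (\mu \text{ absorbs finite addition, as } 0 \text{ absorbs multiplication})\\
  &(\mu + 1) - \mu = \mu - \mu \;\Longrightarrow\; 1 = 0
    && (\text{the absurdity, if cancellation were allowed})\\
  &a + \mu = b + \mu \;\not\Longrightarrow\; a = b
    && (\text{the added law forbidding cancellation of } \mu)
\end{aligned}
```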
Ask what contradiction you can extract from these propositions:
1. [root]2 is a number but it is not a quotient of natural numbers.
2. There is an infinity of prime numbers.
One thing we notice here is that the people who made the proofs of irrationality of [root]2 and infinity of the prime numbers did not have a definition for number. Indeed, they explained irrational number with geometrical concepts which fall to pieces when you scrutinize them. The claim that [root]2 is a number has to be based on ratios of line segments (and not any single segment). But how is a ratio of segments to be conceived unless you already have a concept of numbers?--a vicious circle. And how can a segment be measured when irrationals are non-periodic and periodicity is the basis of measurement?
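For comparison, the irrationality proof at issue, in its familiar modern reductio form (which of course presupposes the very concept of number whose absence is being remarked):

```latex
\textbf{Claim.} $\sqrt{2}$ is not a quotient of natural numbers.

\textbf{Proof by contradiction.} Suppose $\sqrt{2} = p/q$ with $p, q$ natural
numbers having no common factor. Then $p^2 = 2q^2$, so $p^2$ is even, so $p$
is even; write $p = 2r$. Then $4r^2 = 2q^2$, i.e. $q^2 = 2r^2$, so $q$ is
even as well, contradicting the assumption of no common factor. $\square$
```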
Aside on Brouwer
Brouwer's objection to proof by contradiction was agnostic. [not] [not]A is not strong enough to guarantee A. [not] [not]A is not strong enough to eliminate the undecidable situations. Still, Brouwer proves negatives by contradiction. One proves an element non-existent by proving that the assumption of its existence yields a contradiction.
Invoking the principle There is an infinity of natural numbers. That is, the successor function generates an infinite totality. Using the infinity of natural numbers as the basis of definitions of finite, infinite.
Binmore's definition of infinity: If a set S has n elements for some n in N, S is finite. Otherwise S is infinite.
[The so-called proof of the infinity of natural numbers is a proof that they are constructively open-ended?]
Is infinity an unending process; or is it a finished aggregate which is infinitely big? How would you recognize the latter case?
a) If there are an infinite number of items on a list, and one item that can't be on the list, why does that prove that the set which contains both is a bigger infinity?
b) The infinitely small. The notion that an infinity of numbers or rationals are already squeezed on a one-centimeter segment.
Includes mathematical induction. [Binmore p. 80?]
a. P(1) & [P(r) --> P(r+1)]
b. P(1) & [(P(1) & ... & P(r)) --> P(r+1)]
Therefore P(n). (Schema b is course-of-values induction.)
Course-of-values induction is used to prove that every integer larger than 1 can be factored into primes in some way.
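In outline, that factorization proof runs as follows (a standard reconstruction):

```latex
\textbf{Claim.} Every integer $n > 1$ can be factored into primes.

\textbf{Proof (course-of-values induction).} Suppose every $m$ with
$1 < m < n$ can be so factored. Either $n$ is prime (and is its own
factorization), or $n = ab$ with $1 < a, b < n$; by hypothesis $a$ and $b$
factor into primes, and the two factorizations together factor $n$. $\square$
```

Ordinary induction from n to n + 1 gives no purchase here, since the needed hypothesis concerns the divisors of n, not n - 1; hence schema b.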
The paradoxes of induction (Wang):
prove that all natural numbers are "small"
prove that all stroke numerals are feasible
More on infinity: the Galileo-Bolzano paradox. A 1-1 correspondence can be established between the points of line segments of different length.
Galileo said that the longer segment, [line segment](CD), should have more elements.
The distinction between two types of size: magnitude, plurality.
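The correspondence behind the paradox is elementary. For segments of lengths a < c (standard construction):

```latex
f : [0, a] \to [0, c], \qquad f(x) = \frac{c}{a}\, x
```

f is a bijection, so the two segments are equal in plurality (how many points) even while they differ in magnitude (how long), which is exactly the distinction just drawn.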
More on infinity:
Let us disregard my treatment of the example
*1 3 5 ... 2 4 6 ...
1 2 3 ...
The consensus explanation is as follows. 2 has no immediate predecessor. If you try to count these natural numbers with the natural numbers, you run out of counting numbers before you get to 2. [That's a Failure Theorem, as I already stressed.] Mathematics sidesteps this by a new trick, the so-called ordinal numbers for which [infinity] + 1 [not equal] [infinity] + 2. [omega] + 1, [omega] + 2, ... The second number-class. Ordinals are order types.
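In the consensus notation, the failed counting is completed by assigning ordinals of the second number-class (standard presentation, following the text's numbering [omega] + 1, [omega] + 2, ...):

```latex
\begin{array}{cccccccc}
  1 & 3 & 5 & \cdots & 2 & 4 & 6 & \cdots \\
  1 & 2 & 3 & \cdots & \omega + 1 & \omega + 2 & \omega + 3 & \cdots
\end{array}
\qquad \text{order type: } \omega + \omega = \omega \cdot 2
```

2 occupies the first position beyond all the finite positions, which is why it has no immediate predecessor.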
[1982 exculpation by Hennix. The upper-line numbers and lower-line numbers of
Proof by taking infinitely or even uncountably many steps--often counted as a single step. The standard (topological) proof of the Bolzano-Weierstrass Theorem.
Not just the Axiom of Choice, which was not introduced until the twentieth century. See L.E.J. Brouwer, "On the Domains of Definition of Functions," fn. 8. Hausdorff-Banach-Tarski paradox. Nonstandard integers. Spectral theorem. Consistency of the Continuum Hypothesis.
[Countable and/vs. uncountable relative to Axiom of Choice?]
Transfinite induction -- used by Gentzen (and who else?).
[use of diagonal argument in recursion theory:
sequence with rule for nth member, but not rule for membership in the sequence]
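The bracketed note can be unpacked as the standard diagonal construction: given any enumeration f_1, f_2, f_3, ... of number sequences, define

```latex
g(n) = f_n(n) + 1 .
```

Then g comes with an explicit rule for its nth member, yet g differs from every f_k at the kth place and so cannot itself appear in the enumeration.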
Another "process" source of the reality-characteristics of mathematical objects. Failure Theorems which start a "political" tug-of-war over whether they expose a new species of mathematical objects (in Heaven).
Reinterpret an inconsistency, or other Failure Theorem, as the surfacing of a new species of mathematical objects: [root]2, - 1, [root]-1.
This is another view of material we've already seen. Is [root]2 a number at all? The objection to coordinate geometry was that it would allow a point with coordinates ([root]2, [root]2).
The use of negative numbers was rejected by conservative mathematicians as late as the end of the 18th century. Sometimes the new content revealed by a Failure Theorem gets a practical application first; and only then is a logical rationalization given for it. Infinitesimals (cf. Grattan-Guinness). Hao Wang on Dedekind cuts: thieving versus honest work.
Decimal representation of real numbers encounters the difficulty
.2000... = .1999...
Answers have to be established socially. 1/0; 0/0; 0^0.
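The equality that causes the difficulty is forced by the geometric series (standard computation):

```latex
\[
  .1999\ldots \;=\; \frac{1}{10} + \sum_{k=2}^{\infty} \frac{9}{10^{k}}
  \;=\; \frac{1}{10} + \frac{9/100}{1 - 1/10}
  \;=\; \frac{1}{10} + \frac{1}{10}
  \;=\; .2000\ldots
\]
```

So distinct numerals denote one number, and the representation of reals by decimals fails to be unique.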
Paradoxes of infinity become definitions of infinity:
The Galileo infinity paradox: bijection between the even integers and the integers.
How to count 1,3,5, ..., 2, 4, 6, ...?
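This is the move by which the paradox becomes a definition: a set is called infinite precisely when it admits such a correspondence (Dedekind's criterion; a standard formulation, the name being my gloss):

```latex
S \text{ is infinite} \;\iff\; \text{there is a bijection between } S
\text{ and a proper subset of } S,
\qquad \text{e.g. } n \mapsto 2n \text{ from the integers onto the evens.}
```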
1961 concept art was launched from the contemporary self-representation of mathematics as a formalist-positivist activity. In effect, I looked for mathematics and saw metamathematics. That was not a mistake.
I chanced to have studied mathematics when formalism still had a lot of authority. Even if formalism has subsequently lost that authority, it still is the only explanation of mathematics which gets past the explicit appeal to theology, to the Heaven of ideal entities. Concept art had to proceed from this demystification, or flattening, of mathematics.
In order best to understand the original concept art, we have to orient ourselves retrospectively, and approach metamathematics from the vantage-point provided by concept art. We will discern features of metamathematics (and traditional mathematics to the extent that metamathematics reflects it at all) which traditionally were taken for granted and not noticed.
Let us tentatively accept the Carnap-formalist definition of a mathematical system or calculus. We have individual symbols or primitives, expressions (so-called well-formed formulas), propositions, axioms, rules of deduction, proofs which can be checked mechanically.
We come to a layer of syntax or metamathematics which Carnap was aware of, but which he dismissed as unimportant, trivial. Mathematics, even pure mathematics, is notated, embodied, communicated in physical tokens. If we want to get the full flavor of the ideology, and if we want to comprehend the novelty of concept art, then we must peruse Carnap at length.
... the design (visual form, Gestalt) of the individual symbols is a matter of indifference. ... Further, it is equally unimportant from the syntactical point of view that, for instance, the symbol 'and' should be specifically a thing consisting of printers' ink. If we agreed always to place a match upon the paper instead of that particular symbol, the formal structure of the language would remain unchanged.
It should now be clear that any series of any things will equally well serve as terms or expressions in a calculus, or, more particularly, in a language. ... In the ordinary languages, a series of symbols (an expression) is either a temporal series of sounds, or a spatial series of material bodies produced on paper. An example of a language which uses movable things for its symbols is a card-index system; the cards serve as the object-names for the books of a library, and the riders as predicates designating properties (for instance, 'lent', 'at the book-binders', and such like); a card with a rider makes a sentence.
We have already said that syntax is concerned solely with the formal properties of expressions. ... Assume that two languages S1 and S2 use different symbols, but in such a way that a one-one correspondence may be established between the symbols of S1 and those of S2 so that any syntactical rule about S1 becomes a syntactical rule about S2 if, instead of relating it to the symbols of S1, we relate it to the correlative symbols of S2; and conversely. Then, although the two languages are not alike, they have the same formal structure (we call them isomorphic languages), and syntax is concerned solely with the structure of languages in this sense.
Also, Carnap, thinking himself to be immensely permissive, allowed the rules of formation and transformation to be chosen arbitrarily. It did not occur to him that this could allow systems which challenged the positivist ideology of mathematics itself. Again, we cannot appreciate the vulnerability in this seemingly routine juncture without an extensive look. It would underrate concept art to imagine that these positions are as straightforward as Carnap says they are.
[Logical Syntax of Language, xiv - xv]
[Logical Syntax of Language, 6-7]
[Logical Syntax of Language, 51-52]
Carnap got the Principle of Tolerance from Karl Menger--and what the Principle embodies, really, is Menger's contempt for Intuitionism. Menger deserves some note; he seems to have been a classic, "hard" positivist. As he said,
what matters in mathematics and logic is not which axioms and rules of inference are chosen, but rather what is derived from them. ... All that matters is into which statements certain others can be transformed by the use of given transformation rules.

Returning to Carnap: he authored the brave words quoted above without dreaming that you could combine an unexpected real-world notation-medium with unexpected formation/transformation rules to get calculi which explode positivism.
Every mathematical system has a real-world notational medium--physical notation-tokens. I need a name for the facet of a mathematical system consisting in the physical notation and its classification and combinatorics and other consequences. I call this tokenetics.
Traditional mathematics builds its expressions or wffs by strings of discrete tokens, i.e. by permutation and repetition of discrete and, when written, stable notation-tokens.
The question of the physical notating of mathematics was taken up in several classic sources. However, these discussions were perfunctorily conformist, just as Carnap's was. (They are made into a backstairs agenda and are treated perfunctorily and reductionistically.) A few examples are Emil Post in The Undecidable, ed. Martin Davis (1964), p. 420; Alfred Tarski, Logic, Semantics, Metamathematics (1956), footnotes on pp. 156, 174, 282; Hans Reichenbach, Elements of Symbolic Logic, Ch. VII; C.F. von Weizsäcker, The Unity of Nature, pp. 78-9.
Let me now pass to the 1961 standpoint of concept art itself. Concept art turned on my wholesale discrediting of the knowledge-claims of mathematics, correlative to Philosophy Proper. Actually, a negative universal result enters here. A rigorous presentation would have to be carefully positioned to avoid being self-defeating. The present explanation is only heuristic.
I rejected the content claims made for traditional mathematics.--And I rejected the claims for proof-discovery, in a syntactical sense, made both by traditional mathematics and by the new logistic or positivist formalism. Let me expand. For knowledge-claims to be warranted, the relation of a word to its intension would have to be an objective (metaphysically real) relationship. (The tenet of the real, invisible link of token or type to meaning.) Then, for the knowledge-claim that "a proof is discovered" to be warranted, the relation of rule to instance would have to be an objective (metaphysically real) relationship. (It's a strict analogy to the case of the word.) Philosophy Proper nullified such beliefs. (Along with their negations. That is why it is no better than heuristic to announce that "a word does not have an objective relation to its intension.")
Translating into professional jargon, one might say that I attacked the semantics of metamathematics. I saw the pretensions of metamathematics as a displacement of the indefensible pretensions of mathematical Platonism.
Let me dwell for a bit longer on nullifying the knowledge-claims of mathematics. With his famous Second Problem, Hilbert had popularized the notion that the entire question of the validity of mathematics came down to showing that a game whose rudiments were mechanically specifiable would not yield an inconsistent outcome. As Menger says on Hilbert's behalf,
the only criterion of acceptability of a mathematical concept or system is its freedom from contradiction ... .
Replying, my appraisal is as follows. To present the consistency question as the ultimate cognitive issue concerning mathematics is an outrageous piece of misdirection. When you don't take the semantics of the metatheory seriously, then the game is a lie--its consistency is no more than a pseudo-problem. In none of my 1961 or 1962 work did I call explicitly for an inconsistency-derivation in a conventionally accepted mathematical theory.
It is not my topic here, but Hilbert's stance concealed another deception. Even if consistency is used to sieve out the garbage from mathematics, the major ideas are not discarded when they are found inconsistent. The proclamation of consistency as the ultimate test of the worthiness of mathematics was insincere. Hilbert's metaphor of dragging theories before a judicial tribunal and banishing inconsistent theories forever was utter propaganda.
In 1977, I began to consider the possibility of negatively solving Hilbert's Second Problem by lifting the problem to a different level. The acknowledgement of tokenetics figured prominently in this exercise. In the 1982 presentation, I updated this line of inquiry: noting that it bore on the knowledge-claims of mathematics (even though it was less abrupt than my original repudiation); and on the perspective of metamathematics to which concept art had given rise. The consistency of pure mathematics is shown to depend on the relation of mathematics to its means of notation, as follows.
According to the formalist ideology of pure mathematics, the numbers need have no real-world meaning whatsoever.
(Aside: In primitive cultures, it is not taken for granted that there exist general numbers which can count all species of things.)
But to the contrary. Even pure mathematics requires numbers with real-world semantics--"applied integers": to count the notation-tokens in which pure arithmetic is embodied.
As soon as we have come so far as to realize that pure arithmetic requires an arithmetic with real-world semantics, I plug this arithmetic into the "Is there language?" trap. I get an inconsistency, or nullification result, for real-world arithmetic. This also means that pure arithmetic is inconsistent or is nullified: at the meta-level, or syntactically.
To digress a little further, the consistency question could contribute to my astute hypocrisy program. One might try to convert the result just mentioned into a syntactical inconsistency proof. The target of the exercise would be the tenet that consistency is an unassailable objectivity.
In 1982, I wanted also to show that my 1961 attack on mathematics' cognitive pretensions could be transferred to personhood theory. Above I quoted Binmore to the effect that the essence of mathematics is not computer-style formal proofs, but the ideas. These ideas at the least have to be quasi-semantic consciousness events.
As such, they are nullified by the transfer of the "Is there language?" trap to the person-world. ("Critical Notes on Personhood," Part IV.)
With this preparation, I can now explain how 1961 concept art was positioned. Concept art does not begin until logic and mathematics have been recognized to be "false." Concept art renounces truth-claims as a purpose for mathematical systems. A non-cognitive use of language is permissible in which the reader treats the text like a Rorschach blot. The notion of a syntactical system is borrowed, but the system becomes "tokenetics-rich."
Given that we are occupied with "tokenetics-rich" systems, the goal becomes the uncanniness of the syntax. So the value is a beauty which is non-sentimental. Later I would say that the purpose of the new intellectual modalities is to manifest new mental abilities; to elaborate concepts such that the very possibility of thinking them is a significant feat.
In the 1961 perspective, concept art, as just defined, was intended to supersede all mathematics.
The preliminaries explain why there was an almost exclusive emphasis on syntax in the original concept art. Concept art needed the flattening of mathematics to syntax, so that it could disrupt the enterprise metasyntactically. I utilized the projection of mathematics onto its logical tree-structure because I wished to scramble this tree-structure. If it helps you understand to think of this approach as cruel, you are free to do so.
To repeat, concept art renounces truth-claims as a purpose for mathematical systems. The goal of concept art becomes the uncanniness of the syntax. I could consider that that moved the artifacts decisively into the aesthetic realm. All of mathematics was going to be superseded by an endeavor which was arguably noncognitive.
The aesthetic justification was traditionally beloved by mathematicians. Menger's statement in this connection is notable.
Even if [the deductive mathematics of infinity] should never be capable of application and never provide us with knowledge in some restricted sense of the word, it would still find justification on its overwhelming aesthetic merits, just as does music, which certainly does not provide us with knowledge.
Thus, I was adapting a well-established attitude. Many years later, I was told that "the public doesn't know that mathematicians claim beauty as a value of mathematics." I was told that I had done something illicit in appealing to such an arcane idea. That indeed was one cause of my inaccessibility--that I ranged freely across the walls which separated science people from poetry people, etc.
For completeness, I may mention that there was a brief period when I classified concept art under acognitive culture, my successor to art. In turn, that led me to reject concept art when I rejected art in general. In hindsight, that was too facile. Such pigeon-holing overlooked the potential of concept art as a new intellectual modality, a handhold for change. Admittedly, it was not obligatory to present the new results as "art." Whether I did so came to be a tactical decision--given that the public never even "got its mind around" my goal, and would have preferred that I disappear.
What of content?
If somebody indoctrinated in metamathematics wants to ask where I stood in 1961 on far-out semantics, the answer would refer, perhaps, to "Representation of the Memory of an Energy Cube Organism." "Energy Cube Organism" asks for an overtly inconsistent system of propositions to be given a "concrete" realization: that was the far-out semantics.
In §A, I backtracked historically and pondered
(i) the gavotte-in-Heaven which is the tacit practice in mathematics. For example, Ramanujan presumably obtained his results via visions of notional content; one could say the same for many mathematicians.
(ii) On the other hand, concept art devolved from the positivist rationalization.
What is the cross-alignment of (i) and (ii)? The upshot of this question was that I sought to push the research--as it bore on the obsolete pseudo-science called mathematics--in multiple directions. To answer the question with a verdict is unnecessary and premature.
Let me give a summary definition of 1961 concept art which expresses my 1982 understanding--in advance of discussing specific pieces. The 1961 works can be called calculi, subject to the following observations. Truth-claims are not their purpose. There is a very strong interaction between tokenetics and syntax. There is no semantics. In one respect, the syntax is transparent or is displayed all at once. But in another respect, the syntax becomes indeterminate because of the unconventional tokenetics. (In November 1967, I issued a prospectus for a Journal of Indeterminate Mathematical Investigations. Only Dennis Johnson received it equably; other mathematicians found it preposterous.)
This 1982 review of the early concept-art pieces is tedious, and will become more so as I proceed. It may remain "extrinsic," not reaching the crux of the concept-art innovations. Nevertheless, the review has to be preserved, first to ensure that something important doesn't get dismissed, secondly to reflect the 1982 agenda. I have avoided emendations whose only purpose would be to cite advances made after 1982.
Hereafter we view proof theory (and traditional mathematics to the extent that proof theory reflects it at all) in the light of concept art. Let me address the task of defining the constituents of a calculus. Traditional mathematics and proof theory are together referred to as traditional calculi.
Principles labelled only with a number apply to all calculi under consideration. Principles labelled "T" characterize traditional calculi. Principles labelled "CA" characterize concept art systems.
1. A calculus has a real-world notational medium.
2. Formation rules serve to define sentences and axioms.
T1. Traditional calculi use discrete stable graphemes as the individual, sub-sentential elements or signs.
T2. Traditional calculi build sentences or well-formed formulas by permutation and repetition of discrete stable graphemes.
T3. For traditional calculi, a syntactical constituent is recognized (recognized by a subject or self--an aspect never acknowledged) both by recognizing the array of material tokens, and its structure-properties (e.g. the number of tokens of a given kind, the order of tokens of different kinds). The subject must have a doctrine of cardinal numbers--in addition to a notion of "kinds" as properties or classes--to recognize expressions.
T4. For traditional calculi, tokenetics is not mentioned in the formation rules because tokenetics is supposed to be a constant or an aspect that can be divided out. [equivalence-class aspect?]
T5. Because traditional syntactical constituents are permutations from an inventory of repeatable tokens, all syntactical constituents can be repeated at will.
CA1. In concept art, the notational medium may be such that the palpable tokens for all possible propositions are displayed at once--or such that the material tokens for all possible propositions arise through changes over time of a given notational medium. For concept art, it is appropriate to call the notational medium a visual display.
CA2. In concept art, formation rules mention tokenetics as well as syntax because the two are inseparable.
CA3. A concept art notation-token may be constituted of an irreducible interdependency between subjectivity and some external notation-medium or display.
CA4. Concept art propositions typically have no internal compound structure (in the sense in which traditional propositions do have such structure).
CA5. For concept art, cardinality may not be required to recognize syntactical constituents.
3. Transformation rules serve to define "direct" implication and proof.
T6. For traditional calculi, "direct" implication is defined as a mechanically checkable combinatorial property depending on internal structure of sentences. Tokenetics is not mentioned in the definition. Modus ponens.
4. A sentence implies itself.
5. Axioms are implicationally independent.
T7. Traditionally a proof is a discrete--and contingent on that, finite--series of sentences such that each is an axiom or is a deduction from preceding sentences. [directly implied by preceding sentences--modus ponens] A theorem is the last sentence in a proof. The graph-structure of proofs is that of a "tree."
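T6 and T7 can be made concrete with a minimal sketch in Python (my illustration, not part of the 1961 or 1982 material): sentences are toy terms, an implication is modeled as a triple, and checking a proof is the purely mechanical procedure the formalist account requires.

```python
# A minimal sketch (my illustration; not from the original text) of
# T6-T7: a proof is a finite series of sentences, each an axiom or
# obtained from two earlier sentences by modus ponens. Sentences are
# toy terms; an implication is modeled as the triple ("->", A, B).

def modus_ponens(a, b, candidate):
    """Mechanically check whether `candidate` follows from a and b:
    one of the pair must be ("->", other, candidate)."""
    for premise, conditional in ((a, b), (b, a)):
        if (isinstance(conditional, tuple) and len(conditional) == 3
                and conditional[0] == "->"
                and conditional[1] == premise
                and conditional[2] == candidate):
            return True
    return False

def is_proof(sentences, axioms):
    """T7: each sentence is an axiom or is directly implied by
    preceding sentences. Checking is purely mechanical."""
    for i, s in enumerate(sentences):
        if s in axioms:
            continue
        prior = sentences[:i]
        if not any(modus_ponens(a, b, s) for a in prior for b in prior):
            return False
    return bool(sentences)

# A theorem is the last sentence of a proof.
axioms = {"p", ("->", "p", "q")}
proof = ["p", ("->", "p", "q"), "q"]
```

Note that nothing here mentions tokenetics: the check inspects only structure-properties of expressions, which is exactly the feature concept art disrupts.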
CA6. In concept art, transformation rules mention tokenetics as well as syntax because the two are inseparable.
CA7. In concept art, direct implication may be left undefined, while proof is defined. Such is the above-mentioned indeterminacy.
CA8. In concept art, proofs may not have a tree structure.
CA9. In concept art, series of sentences forming proofs may not be discrete--and no cardinality of such series may be specified (for this or another reason).
Next I shall re-state features of concept art to try to get at what produces uncanny syntax. And also, to get at what could inspire future ramifications.
CA I. The calculi are tokenetically non-trivial, and tokenetics cannot be separated from syntax. This means that "pure" or "formal" properties are explicitly determined by real-world considerations--such as "material" ("physical"), perceptual, and subjective considerations. Whereas positivist ideology wanted to separate formalism from the real world, I have joined formalism to the real world by making syntax and tokenetics (or notation) actively interdependent. We get a family of calculi which have the real world built in at the notational level.
CA Ia. Further, there are inescapable interactions with the subjectivity of the "reader" (the subject or self)--at the notational level. Language cannot any longer be misrepresented as an "it-it relationship"--the subject becomes an active constituent.
CA Ib. The semantics of an applied mathematical calculus could mean--to take elementary cases--enumeration, spatial measurement, physical coordinates. In concept art, a correlative of this sort of content turns up in the zone of tokenetics/syntax. What Carnap considered to be the highest and lowest levels of a "language" have been made to overlap.
CA II. My syntax leaves the question of cardinality of proof-series open, by identifying proofs with real-world change. This gives calculi in which smeary situations become elemental (atomic, primitive). In 3.b below, I will give an admittedly fanciful example which nevertheless can suggest a context which might demand such a calculus.
I will next analyze four concept art pieces from 1961. For convenience, I will refer to the pieces by short titles, as follows:
Teseqs: "Teseqs" from Optical Audiorecorder (1961)
Panep: "Udpanep Transform Continua" from Optical Audiorecorder (1961)
Illusions: "Concept Art Version of Mathematics System March 26 1961"[*]
Innperseqs: "Innperseqs" (May-July 1961)
I wish to show how these pieces exemplify the principles I have spelled out in the above pages. I will review the Formation Rules and Transformation Rules so as to allow comparisons. I previously covered some of this territory in "1966 Mathematical Studies."
With this section, the 1982 manuscript (for which there is a typescript) becomes entirely inadequate. I wrote the lecture just after revisiting a 1961 piece, the Optical Audiorecorder. One of the early concept-art pieces, "Transformations," had been an optical audiorecorder piece (March 14, 1961); and then had been reframed as a concept-art piece (October 11, 1961). Upon re-examining the rest of the optical audiorecorder pieces in 1982, I decided that I had to extract the concept-art value of two of them.
What I did not address in 1982 was that as Optical Audiorecorder pieces, the two pieces envisioned interactivity (and constitutive dissociation?) as between notation and its registration--when the registering device was a machine. The details are tedious; but as I said, they need to be on record.
Already Optical Audiorecorder was a misnomer. What I was interested in was an Optical Audioplayer. The recordings were colored plates: plane rectangles, 8.5 x 11 inches, encoded in color. The player was to scan a plate continuously left-to-right as a pitch/time graph. Thus, the device was conceptually an analog one (although to make it workable would require a combination of digital systems, presumably). A vertical cross-section defined an instantaneous simultaneity in the audio spectrum--and was to be correspondingly synthesized as sound. There was an intended parallel with magnetic recording tape, which is also a band whose length is strictly proportional to time, and whose cross-sections encode instantaneous simultaneities in the audio spectrum. As to the plate, its length might allow a program up to a half-hour in length. Height of each marking corresponded to pitch. Darkness corresponded to amplitude. Colors were to be assigned to timbres in some fashion. The opaque colored plates would be illumined from within the scanner in a uniform way.
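The scanning logic just described can be sketched as follows. The representation and names are my assumptions, and color/timbre is ignored: the plate is a grid whose row index maps onto pitch and whose cell values encode darkness (amplitude); scanning left to right, each column yields an instantaneous simultaneity in the audio spectrum.

```python
# A sketch of the audioplayer's scanning logic (assumed representation;
# the names and the pitch mapping are my inventions; color/timbre is
# ignored). Row 0 is taken as the lowest pitch; rows are log-spaced
# over an assumed audible range.

def scan_plate(plate, duration_s, f_low=27.5, f_high=14080.0):
    """Yield (time, partials) per column, where partials is a list of
    (frequency_hz, amplitude) pairs active at that instant."""
    n_rows = len(plate)
    n_cols = len(plate[0])
    for col in range(n_cols):
        t = duration_s * col / n_cols
        partials = []
        for row in range(n_rows):
            amp = plate[row][col]
            if amp > 0.0:
                # log-spaced pitch: row fraction maps onto the range
                frac = row / max(n_rows - 1, 1)
                freq = f_low * (f_high / f_low) ** frac
                partials.append((freq, amp))
        yield t, partials
```

A magnetic tape has the same abstract shape: length strictly proportional to time, cross-sections encoding instantaneous simultaneities.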
My objective here was not practicality. (To encode a Beethoven symphony in this way would produce a dusting of colored horizontal dashes illegible to the eye: across the lower middle of the plate--assuming that the system left vertical space for the highest audible fundamentals, not used in music until the advent of electronic music. The device would be awkward for transcribing conventional music; on the other hand, it might be well-suited to the modern serious music of the day.) My objective was to explore the logic of interactivity between notation and its mechanical registration.
As I have recounted elsewhere, what gave me the idea was a conversation with Richard Maxfield c. March 1, 1961, in which he told me that the purpose of his electronic music was the sequence of transformations of tape. In that case, each transformation yielded a tape--a stable, scannable stage. Pointedly, Maxfield told me that he didn't care how the tapes sounded.
I imagined my device to scan not like a photocopier, but like a fixed-focus camera. Thus, the device could be operated with a transparent plate, or no plate, and it would synthesize sound corresponding to whatever monocular vision would see beyond the plane of the plate-holder (no doubt in a fuzzy or noisy way). Light-levels sufficient for vision would register with the machine. (A uniformly dark field would be scanned as loud blank noise.) The device's light-sensitivity might be designed to vary when external light entered at the plate-holder.
Again, the plates were like puns between pitch/time graphs and lengths of audio recording tape. Since the device could register images beyond the plane of the plate if not blocked by the plate, ambient gestalts could invade the storage medium during playback.
First there is the display: a blank or white plate (8.5 x 11 inches) with a one-inch square hole in the center. When the plate is viewed broadside with optimal focus at its plane, then whatever is optically or visually detected becomes "the image."
A sentence is the image--so long as the portion at the hole does not change.
Sentence A directly implies sentence B if and only if B is next after A (when the image at the hole changes).
The axiom is the first sentence observed. (In one's lifetime, or in some stipulated time-frame.)
A proof is a terminating series of sentences of which the first is the axiom and each of the others is directly implied by its predecessor in the series. That simply means that the sentences have to be in unbroken temporal sequence in this case.
A theorem is the terminal sentence of a proof.
Comment: Deduction is monolinear.
Comment: The convention that a sentence implies itself is abandoned. The reader may consider use of the term 'proof' to be improper; an alternate is 'derivation'.
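The Teseqs definitions can be condensed into a sketch (hypothetical names; my construction): the axiom is the first sentence observed, and a proof is any unbroken initial run of the temporally ordered observations.

```python
# A sketch (hypothetical names; my construction) of the Teseqs
# calculus: a "sentence" is an observed image-state, the axiom is the
# first sentence observed, and a proof is any unbroken initial run of
# the temporally ordered observations. Deduction is monolinear.

def is_teseqs_proof(observed, candidate):
    """`observed` is the full temporally ordered list of sentences seen
    (its first element is the axiom). `candidate` is a proof iff it is
    a nonempty, unbroken initial segment of `observed`."""
    n = len(candidate)
    return 0 < n <= len(observed) and observed[:n] == candidate

# The theorem of a proof is its last sentence.
observed = ["image0", "image1", "image2", "image3"]
```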
It is clear now that when I composed these pieces, my sense for the logic of the situation was not sharp. The thrust of the pieces was ambiguous, and they could be criticized as ill-defined.
In the case of Teseqs, I provided an alternate version which involved human vision and manual manipulation of the "plate," which could be stiff card or plastic. As opposed to mechanical scanning of the plate when set in the plate-holder.
There is nothing to prevent sentences from succeeding each other as a continuum, as with "positions" of a sweeping hand. What is more, if it takes the audioplayer a half-hour to scan a plate, then it is entirely possible that a sentence will be replaced by another, or many others, as it is being scanned. By incorporating "chance"--in that the image can change at any time because of motion beyond the plate--I lost the theme that manipulations of plates had to yield stable, scannable stages. I might get "objectively existing" sentences and derivations with no relation to what the player is able to scan. The image-to-scan correlation is lost. To be "fair" to the machine, one would have to stipulate that the total image does not change during a scan. Then the only interesting feature would be the reasons why the image may change from one scan to the next.
The version of the piece which assumes visual inspection and manual manipulation of the plate is far more credible. Vision can register the image in a snap. If what is at the hole changes continuously, the variation can be registered. The issue which remains is how the moments of the variation shall be intended. Evidently the situation so far defined is isomorphic with a sweep of one's hand. Formally, the series of sentences could be either discrete or smeared. A series can be finished even though it is smeared: you claim to have seen a succession of sentences independently of the issue of perceptually isolating them.
Indeed, is direct implication well-defined in the smeary case? Does the notion of adjacent events apply to a perceptual continuum?
What is notable is the possibility of arguing that objectively, you have perceived an ordered series of moments even though you cannot say how long a moment is or what the cardinality of the series is, even though you cannot intend adjacent moments. The promised indeterminacy.
Another possibility is to arrange for a Necker cube to be visible through the hole. Then the objective situation and the audioplayer are neatly split from the human subject. The subject sees direct implications where objectively there aren't any. In fact, for the human subject, subjective and smeary objective implications can now be mixed.
In 1961 I had stroboscopic complications of this piece; I have removed them to an appendix.
Here the display is a rectangular sheet of photographic print paper having a disc of unmarked newsprint pasted in the center of the emulsion side. What I wanted was differential darkening on exposure to light, and any coatings which produced appropriate results would be acceptable.
A sentence is the display so long as it nowhere becomes darker (relative to the threshold of the detector).
The axiom is the newly exposed display. (One axiom per display.)
The continuum is the series of sentences which results as the display is exposed to light over the span of its physical existence.
Comment: Within the intent of the piece, when a sentence disappears, it is absolutely irrecoverable.
In 1982, I defined indirect implication. Given two sentences A and B which are identified (not claimed to be next in succession), A implies B if B is later than A in the continuum.
If we treat this as a piece for the audioplayer, for which it was intended, then the joke is that the light by which the device scans the plate changes what is recorded. Again, what is unsatisfactory is that there is a continuous turnover of sentences all out of proportion to the half-hour duration of a scan.
Again, the piece is more credible if it is apprehended visually. The light by which it is seen changes the image. The criterion of whether the image has become darker is strictly perceptual. We have the issue of whether "next different" is well-defined. However, the issue has a different focus from that for a sweeping hand. The darkening plate may be a slowly varying observable. (Such a process does not preserve the transitivity of identity.) Seemingly one gets the same effect with the cut surface of an apple.
Arguably, a "proof" could be as well-defined as in Teseqs. Simply stare at the plate from the time it is first exposed until it darkens to some chosen degree; then you have observed a proof of that last state of the plate.
A sentence is the [figure 2] (perceptual illusion) so long as the apparent length-ratio between the vertical and the horizontal segment does not change. That apparent ratio is called the "associated ratio."
Here the token arises in an active coupling between a material token and the subject's subjectivity.
Sentence A directly implies sentence B if the associated ratio of B is next smaller than that of A, of all the sentences you ever see. (Implication is now unrelated to temporal continuity of sentences.)
Comment: The change in associated ratio could be discrete or continuous. No guarantee of either.
Comment: When we see that there are different sentences, then it is evident that the constitution of the token is in part subjective, and inescapably so. This feature is isolated and strengthened in SPV.
The axiom is defined as the first sentence one sees. But now this is not trivial. The first sentence you see may not have the greatest associated ratio, so that there could be temporally subsequent sentences which are excluded from all deductions.
A proof is a terminating series of sentences of which the first is the axiom and each of the others is directly implied by its predecessor in the series. But now the order of sentences in the series may not at all be temporal.
A theorem is the terminal sentence of a proof.
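The Illusions calculus can likewise be sketched (hypothetical names; my construction): each sentence is identified with its associated ratio, the axiom is the first ratio seen, and direct implication steps to the next-smaller ratio among everything ever seen, so that proof order is detached from temporal order.

```python
# A sketch (hypothetical names; my construction) of the Illusions
# calculus: A directly implies B iff B's associated ratio is the next
# smaller than A's, of all the ratios ever seen.

def next_smaller(ratio, all_ratios):
    """The next-smaller ratio among everything observed, or None."""
    smaller = [r for r in all_ratios if r < ratio]
    return max(smaller) if smaller else None

def is_illusions_proof(ratios_seen, candidate):
    """`ratios_seen` is in temporal order of observation; `candidate`
    must start with the axiom (the first ratio seen) and descend by
    direct implication."""
    if not candidate or candidate[0] != ratios_seen[0]:
        return False
    return all(next_smaller(a, ratios_seen) == b
               for a, b in zip(candidate, candidate[1:]))

# The axiom is 0.8; the later, larger observation 0.9 is excluded
# from every deduction, as noted above.
ratios_seen = [0.8, 0.9, 0.5, 0.7]
```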
Comment: The ability to discern either a direct implication or a proof now becomes heavily dependent on subjective memory and intertemporal comparisons. You have to be a virtuoso to discern a configuration which forms a proof. The 1961 calculi probably required feats of perception and memory which were utterly impracticable. It was the originality of the intent that was important.
When you look at a small bright light through glass fogged by breath, you see a rainbow halo or parhelion, which shrinks. The only aspect of the halo that seems to matter for this piece (structurally) is your sight of a shrinking disc with a distinct outer annulus. To intend a sentence, the subject has to fix a point in the space of the visual field, and register the parhelion's process at that point until the parhelion has shrunk inside the point.
The subject is asked to select a radius visually, and to focus on the portion of the radius which lies in the outer annulus at a given instant. That portion of the radius is called an outer segment.
A (self-same) sentence consists of the changing colors at a chosen point in the parhelion for as long as the point is in the parhelion.
The rainbow is the "arty aspect." The color-changes are an embellishment in that they do not structure derivations.
"Innperseqs" is the first concept-art calculus where a direct implication has more than one sentence as "antecedent"; and where proof is not monolinear. Sentences may persist long enough for several implications to be discerned, but eventually sentences disappear one by one.
There is a display of possible syntactical objects or sentences, and again, you have to be a virtuoso to discern a configuration which forms a proof.
Sentences A1, ..., An directly imply B if and only if at some instant all the sentences are on an "outer segment," and B is the inner endpoint of that outer segment.
An axiom is any sentence which is in the initial outer ring (before the parhelion contracts) and which is not an inner endpoint.
A proof is a finite sequence of finite sequences of sentences on one radius, satisfying the following:
1. The members of the first sequence are all axioms.
2. With respect to the non-first sequences:
a. The first member (of each sequence) is implied by the non-first members of the preceding sequence. (It thus has the role of a lemma.)
b. The remaining members (of each sequence) are axioms or first members of preceding sequences.
3. All first members of sequences other than the last two subsequently appear as non-first members. (Explanation: Lemmas are used in deductions after they are established: non-redundancy of lemmas. But there is an imperfection in that the last lemma is necessarily wasted.)
4. No sentence appears as a non-first member more than once. (An axiom or lemma can only be used once in deduction.)
5. The last sequence has one member.
A theorem is the member of the last sequence of a proof.
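Conditions 1-5 are combinatorial enough to be checked mechanically once the perceptual implication relation is given. Here is one reading, sketched in Python (my construction; the names are hypothetical, and the original leaves some details open, e.g. the status of the first member of the first sequence):

```python
# A structural sketch (my construction) of proof conditions 1-5.
# `implies(antecedents, b)` stands in for the perceptual relation: all
# antecedents lie on one outer segment at some instant and b is its
# inner endpoint.

def is_innperseq(seqs, axioms, implies):
    if not seqs or len(seqs[-1]) != 1:
        return False                                  # condition 5
    if any(s not in axioms for s in seqs[0]):
        return False                                  # condition 1
    firsts = [seq[0] for seq in seqs]
    non_firsts = [s for seq in seqs for s in seq[1:]]
    for i in range(1, len(seqs)):
        if not implies(tuple(seqs[i - 1][1:]), seqs[i][0]):
            return False                              # condition 2a
        for s in seqs[i][1:]:                         # condition 2b
            if s not in axioms and s not in firsts[:i]:
                return False
    if len(non_firsts) != len(set(non_firsts)):
        return False                                  # condition 4
    return all(f in non_firsts for f in firsts[:-2])  # condition 3

# Toy data: L1 is a lemma, T the theorem; IMPL is a stipulated
# implication relation (in the piece it is read off the parhelion).
IMPL = {(("a2", "a3"), "L1"), (("a4", "a1"), "T")}
implies = lambda ants, b: (tuple(ants), b) in IMPL
axioms = {"a1", "a2", "a3", "a4"}
seqs = [["a1", "a2", "a3"], ["L1", "a4", "a1"], ["T"]]
```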
Comment: Various levels of indeterminacy are possible, because the subject may not have the virtuosity to discern a complete innperseq.
Wider possibilities of these calculi
a. Give the calculi a semantics. The best example so far is the aforementioned SPV. In Blueprint for a Higher Civilization, the Necker cube has the double meaning of an affirmation/negation operator. Then write self-reference propositions referring to the factors affecting the alternation of meanings itself.
b. Take advantage of the fact that these calculi join the formal to the "material" (physical, perceptual, subjective). Possibly this could be an appropriate applied mathematics that took smeary phenomena as elemental.
In 1982, I underlined these features for a definite reason. I had just written a paper called "Stagnationism, and Options of Revolution, in Physics--Part I." In the Appendix to that paper, I tried to go field theory one better by proposing a unified field theory which relied on and perpetuated received field theory; while deliberately exploiting the blurring of the dividing line between the substantial event and the fiction or manner of speaking. I hypothesized that the state selected as the vacuum was arbitrary; I hypothesized a principle of "invariance under content transformations with compensating realism transformations." I wanted to extract reputable physics from a satire. In the present essay, I raised the ante even more by suggesting that concept art could provide an original kind of formalism for such a theory. I called for a nondiscrete, smeary formal language (tokenetics/syntax) as the formalism. ("Stagnationism," Part I, Appendix, typescript pp. 23-24.) This is the furthest I ever went in suggesting that concept art should shoulder the scientific role of mathematics. I return to this below.
We are back to the topic of foundations of applied mathematics: how can forms be linked to perceptions, experiences?
Perhaps this means a circular semantics in which the material notations become the referents of the calculus (the tokenetics becomes the universe of interpretation). Then there would only be applied mathematics?
Defining "motion" (physical motion) as continuous monolinear deduction?
I don't claim that the 1961 pieces are the best possible uncanny calculi. They are only preparatory illustrations. They should be studied to train one to see possibilities which otherwise would go unnoticed. Today's logicians don't want cases like these to be desirable or important. Definitions of logical systems in contemporary Western culture gerrymander the possibilities precisely in support of the ideology witlessly proclaimed by Carnap: in which proof theory is independent of notation and is a tautological science, etc. etc. "The reader," "the self," is a disregarded component with a punctiform structure--like the concept art sentences. The 1961 concept art made the point that one doesn't have to do it Carnap's way.
Stronger uncanny calculi will arise by being custom-defined for definite meta-technological problems.
Why did I suggest defining motion as continuous monolinear deduction? As a pointer to some of the following possibilities. Study motion not with a logic/mathematics of discrete elements and operations, but with one in which smeary phenomena are elemental in the syntax. Study the real world not only by making the real world the universe of interpretation of a tautological logic/mathematics, but by putting the real world in the syntax. The real world that is going to be injected in the syntax is not the mechanomorphic real world which analogizes to classical computability, but a real world of motion, sensuous discernibility, and subjectivity.
Obviously the subject-matter to which these methods will be suited will not lie inside the disciplinary boundaries of either physics or psychology.
An authoritative corroboration of the agenda here is provided by Gödel's paper on Carnap's view of mathematics. Gödel worked on the paper from 1953 to 1959, and then refused to permit its publication. As he said, the issue was a basic problem in philosophy, that of the objective reality of concepts. Gödel feared to declare himself publicly on the issue, as well he might have--one year before I wrote the first draft of "Philosophy Proper," two years before I wrote "Concept Art." We know about Gödel's paper because of the posthumous publication of drafts in Volume III of his Collected Works in 1995.
Rudolf Carnap, The Logical Syntax of Language (tr. 1959)
Stephen Kleene, Mathematical Logic (1967)
K.G. Binmore, The Foundations of Analysis, Vol. 1 (1980)
Paul Lorenzen, "Methodical Thinking," Ratio, June 1965
Paul Lorenzen, Constructive Philosophy (1987)
Kurt Gödel, "Russell's Mathematical Logic," The Philosophy of Bertrand Russell, ed. Paul A. Schilpp (1944)
F.P. Ramsey, The Foundations of Mathematics (1931)
Michael Dummett, Elements of Intuitionism (1977)
L.E.J. Brouwer, "On the Domains of Definition of Functions," From Frege to Gödel, ed. Jean Van Heijenoort (1967)
E.T. Whittaker & G.N. Watson, A Course of Modern Analysis (1950), p. 3
Karl Menger, Selected Papers (1979)
Emil Post in The Undecidable, ed. Martin Davis (1964)
Alfred Tarski, Logic, Semantics, Metamathematics (1956)
Hans Reichenbach, Elements of Symbolic Logic, Ch. VII
C.F. von Weizsäcker, The Unity of Nature
Henry Flynt, "Concept Art," An Anthology, ed. La Monte Young (1963; 1970)
Henry Flynt, Blueprint for a Higher Civilization (1975)
Henry Flynt, In Io # 41: Being = Space X Action (1989)
Recent Essays on Truth and the Liar Paradox, ed. Robert L. Martin, p. 302, fn. 22
Jan Mycielski, "New Set-Theoretic Axioms Derived from a Lean Metamathematics," Journal of Symbolic Logic, March 1995
Appendix: Elaborations of Teseqs
In 1961, I made complications:
- blackouts too short to interrupt the apparent continuity of the visible image;
- blackouts long enough to separate sentences;
- movement of the plate which produces blurs.
I pondered killing the light, jerking the plate to one side, and restoring the light--all in a fraction of a second. The background in front of which the plate was displayed was to be uniform. (E.g. a white wall.) The sentence would endure while it jerked from one location to another. I was throwing the burden on the trait of visual perception which makes the cinema possible. Persistence of the same sentence would be maintained while the sentence twitched from one location to another.