Noesis

The Journal of the Noetic Society
(also listed as the One-in-a-Million Society)
Number 46
February, 1990

 

The Society has a new member, who qualified by his score on the Mega Test. He is:

George W. Dicks, Jr.

[Address and telephone number omitted.]

He requests that his regards be conveyed to the other members. Anthony J. Bruni has changed his address:

[Old and new address omitted.]

The addresses and telephone numbers of members, incidentally, are for the use of other members in contacting them, and are not intended for dissemination outside the group. This reminder follows a letter I received from one of you regarding the nuisance potential of such "leaks". Some are obviously more susceptible than others to such problems; contact with nonmembers might even be welcomed in some instances. But please use as much discretion with regard to personal information on other members as you would expect of them.

I have received several more letters from Ron Hoeflin. Because they mainly concern Newcomb's paradox, I assume that he would not mind my using them here. His comments may exemplify the doubts of others regarding the resolution, so responding to them directly may be a good way to promote understanding. It is not my intention to humiliate anyone, and Mr. Hoeflin deserves to be recognized for his forthrightness in voicing his objections.

As form precedes content, I will deal first with the occasional need to use mathematical formalism in lieu of plain English. The Resolution of Newcomb's Paradox in fact contains minimal abstract symbolism, given the need to satisfy the formalistic requirements of such papers. The mathematical "chicken-scratchings" I subjected you to were necessary to show that I know how to map the abstract theory of computation onto Newcomb's paradox in such a way that theorems of computation can be applied to its resolution. If anyone else was badly confused by this maneuver, please understand that my intention was quite the opposite. If it is any consolation, you were spared many other important symbolical formalities.

There would seem to remain some doubt as to whether deciphering such ideas can possibly be worth the trouble. Such an attitude is utterly inconsistent with their obvious importance. At the risk of seeming immodest, I can supportably assert that the Resolution is among the most widely ramified pieces of reasoning that you will ever read. This owes in part to what it says about the reasoning process itself, a subject of frequent error and abuse even in the spheres to which it is most critical. Unfortunately, the paper's extreme concision tends to place such a burden of construction on the neophyte that misconstruction remains rather too likely. That, more than anything, has forced this to become a protracted affair, even as it silences complaints based on the "low importance" of "impossible" situations like that of Newcomb's paradox.

"Impossibility" is a widely misused term. It is the nature of possibility that it has meaning only relative to given sets of conditions or axioms. Thus, it is impossible to find a line which passes through a given point p, but which never intersects another line not containing p, in an elliptic geometry. On the other hand, it is possible to find one such line in a Euclidian geometry, but impossible to find more than one. And it is possible to find an infinity of them in a hyperbolic geometry. These distinctions would have been derided as lunacy prior to the advent of Gauss, Bolyai, Lobachevski, and Riemann, who dared to consider systems which did not incorporate Euclid's fifth postulate. Physics is no less axiomatic than geometry, and in fact incorporates geometry by necessity in its spatial determinations. So, inasmuch as physics is complete relative only to finite collections of data, one can never proclaim the impossibility of anything without implicitly relativizing his hypothesis to just those circumstances precluding possibility. The situation is really very cut and dried, and has made a fool of many a serious intellect.

What one often means when one says "impossible" is "counter-intuitive". The intuition tends to rebel against anything alien to the conditions under which it evolved. Since these conditions are necessarily limited, the intuition is bound to abreact to much of what the mind, and reality, can conceive. The path of science is littered with shellshocked intuitions and the outworn notions to which they clung. Casualties of the forced march of progress, they lie mummified in cerements of library dust, where their unquiet rustlings beckon the unwary to join them.

"Unreal", which is as tricky a term as "impossible", appears in the following remark by Mr. Hoeflin. "I did agree with the solution you gave, as far as I could understand it. What interests me is the psychological question of why such an unreal situation as Newcomb's Paradox presents would attract so much interest on the part of serious thinkers. I suppose it does because it's a sort of limiting case, and ultimate limits, wherever they may be found, e.g., the simple example of computing pi, tell us about the ultimate geography of our conceptual universe. As a... philosopher, I was personally more persuaded by your metaphors and analogies than by your mathematical symbolism."

This passage contains several key concepts. There is, of course, the distinction between analogy and symbolism mentioned above. It is important to recognize that analogy is itself an inherently mathematical concept. It is virtually synonymous with the term "morphism", which signifies the mapping of one (structured) set to another. The distinction expresses a superficial difference among sets: some consist of numbers or symbols, while others consist of "natural" words or even concrete objects. Symbols are used in part to compactify linguistically-extensive definitions so as not to sacrifice apprehension of "pure structure" to obscuring detail, and save much paper in the mathematical literature. Trees are getting scarce, and the fewer sacrificed in the name of science, the better.

However, it is all too common for conceptual orientators to be compactified right out of existence in the process, and these are often needed by those unfamiliar with the full meanings and derivations of the structures. For instance, were an intelligent but mathematically illiterate person to be handed an abstract "epsilon-delta" development of the infinitesimal calculus, or the purely symbolic description of an algebraic group, he or she would be unlikely to think immediately of instantaneous rates of change, or symmetries of physical rotations. Rather, such a person would probably become embroiled in a frustrating attempt to track the symbols through their mutual, empirically unmotivated definitions, eventually falling victim to a truth regarding the nature of human intelligence: it is inseparable from motivation. While this does not imply that motivation and intelligence are identical, it bears mention for its extreme importance in psychometrics.

The "psychology" behind the notoriety of Newcomb’s paradox is elucidated by considering how the human mind models the procedural counterpart of such dilemmas. This counterpart is reflected in the ability of some minds to correctly prioritize a problem even in the absence of a conscious ability to solve it. The problem is "recognized" as important by a neural "acceptor", the brain, in spite of the nonrecognition of certain requisites of solution. It follows that either the criteria of importance differ in kind from those of solution, or that the brain can detect inconsistencies and representational inadequacies among the internal models defining its operational dynamic even when it lacks the ability to resolve them. This ability is not at all surprising; without it, human science could never advance. After all, the need to advance must be computable to that which does the advancing. The matter of real interest is how so many minds can suppress this ability in favor of the inconsistent and inadequate models themselves. Newcomb's paradox was invented, and granted importance, precisely because it is a compact, meaningful formulation of the scientific imperative. Its resolution is thus a formulation of the conditions for scientific advancement.

The supposed "unreality" of Newcomb's paradox is central to the issue. As already noted, this predicate is conditional in nature. To see why, imagine for a moment that you belong to a primitive tribe isolated from civilization in the heart of the Amazon rain forest. Suppose that an explorer appears one day, only to be surrounded by your fellow natives with their poison-dart blowguns. Fearful lest he become a tsantsa (shrunken head) in your collection of trophies and fetishes - and rightly so - he sets about to impress you with his "magical" powers.

Glowering, he produces unfamiliar objects: a (black, box-shaped) shortwave radio, a pair of magnets, and a gun. Naturally, you are quite unfamiliar with their names and functions. With his fingers on the radio's controls, he appears to hold a conversation in some unintelligible language with a god or demon inside it. You draw closer; the demon seems to raise its voice in warning. Quickly, you draw back. Next, the stranger points the gun skyward. There is a terrible noise and a flash of light, but nothing else seems to have happened...until a large, moss-backed sloth crashes to earth from somewhere in the canopy above you. You are relieved when the visitor kneels, places one magnet on the ground, and holds the other mysteriously above it. Muttering incantations, he brings it within an inch or two of the ground, and suddenly...yet another miracle happens! The lower magnet leaps up and affixes itself to the one in his hand. Three times, your "laws of physics" have been violated; thrice, you have witnessed unfamiliar effects that seem to lack physical mechanisms. You are elated when the stranger, now your honored "visitor", points at himself and says "Andy!" (or is it "ND"?), presents you with the magic jumping-stones, and takes his leave through the mist.

The "magical", or seemingly nonlocal, effects you have witnessed have explanations far in your technological future. They involve radio-frequency electromagnetic radiation, explosive chemistry and faster-than-vision ballistics, and magnetic fields. "Andy" seems to be above the laws of physics as you understand them; if you possessed the computative paradigm, you would consider him a kind of "metaphysical programmer" of "impossible" physical effects. In the language of the CTMU, you would consider him a projection from G to G0, the physical stratum of G.

Suppose, in a sadly unrealistic manner, that the equatorial rain forest in which you live will never be cut, burned, and bulldozed by greedy industrialists or ignorant and procreatively incontinent peasants. It follows that your primitive society may continue to evolve in isolation, perhaps to eventually develop a formal, axiomatic version of physics. Your descendants might then play the same game with the "Andy" legend that a modern physicist would try to play with the "ND" data. This game is the game of science, and it involves the restriction of G to G0. This is the process by which the paranormal becomes normal, and the miraculous becomes physical. It is the one and only valid logical syntax of science. It is not to be confused with the "scientific method", which can work only with the assumption of "locality"; this assumption, like Euclid's fifth postulate, is suspended pending actual confirmation in the process of G-restriction.

The locality assumption is suspended because predicating truth on it or on any other assumption is unacceptably biased, pseudo-tautological, and logically insupportable. This has been suspected (if not explicitly stated) by quantum physicists from EPR onward. The CTMU was largely motivated by the possibility of quantum nonlocality, and is in fact the only valid framework in which quantum nonlocality can be considered. Any "other" such framework must be CTMU-isomorphic, incomplete, or invalid. But more on this later on.

Thus, the "limiting case" represented by Newcomb's paradox - the apparent violation of the known laws of physics - leads to the CTMU, which is indeed the "ultimate geometry of our conceptual universe". Mr. Hoeflin has guessed correctly up to this point. However, his thoughts - perhaps like those of other members - take a reactionary turn, conforming to what probably seem the safer and more conservative viewpoints of established philosophers like Max Black. This change is reflected in his next letter, which will be treated point by point.

Before beginning, however, I will point out that my responses to Mr. Hoeflin's criticisms are redundant in the sense that they are implicit in The Resolution of Newcomb's Paradox. The paper was intentionally constructed to contain as much information in as tight a space as possible. While this was also true of my previous letters to the editor, their ineffectuality seemed to indicate an incomprehension so bottomless that I came to doubt that they had been read at all. In fact, the transductive model of human nature - in which people "recognize" only that input which coincides with their internal priorities - predicts just this kind of "selective perception". I therefore offer the following neutral observations. Closed minds are no longer in the running for truth; they run only in self-perpetuating, outwardly contradictory circles which serve static, falsely-completed sets of mental invariants. Transducers which can neither recognize their errors, nor accept correction, are doomed to err forever...or until the outward inconsistency of their output results in their destruction. This is apodeictic with respect to human beings as well. For many of us, the utility of recognition is split between venality and ego gratification. You, however deservedly, are being given somewhat more credit.

Mr. Hoeflin begins as follows. "It occurs to me to make two new comments on Newcomb's paradox. (1) The paradox seems analogous to the issue of whether a number sequence in an I.Q. test has a 'right' next number or not." This is quite true. The completion of number sequences is a matter of mathematical induction; a rule is derived which relates each number to its predecessor(s) in the series, and then re-applied to derive the next number. If the rule is incorrectly derived, then it must by definition produce within the given sequence a number which differs from the one which is actually there. Say that only in the last place sM of the series S does the rule R deliver a number x different from that given, y. Then the rule has been correctly derived with respect to {S - y}, and y is "anomalous" with respect to the derivation. That is, what was "supposed to happen" at "time sM", given the preceding data and the "physics" R derived from it, was x. What actually occurred was y!

This is a "paradox", and to resolve it, we must modify R. Call this modification, or "theoretical advance", R'. R' supersedes R because it agrees with, or "predicts", what actually occurred (y) instead of what you had erroneously expected to occur (x). Note that the most general possible parametric extension that enables R', being both necessary and maximally unbiased with respect to future data, is justified by all scientific standards. Note also that where the parametric extension itself defines a rule and thus a series, that series characterizes a valid metatheory, or "meta­physics", for future extensions of R'.

This is precisely what was done in the Resolution. By the above reasoning, G is the universal metaphysics within which all future advances in physics and other scientific theories must be sought. Every problem on the "Mega Test", from verbal analogies on upward, involves the induction of a simple or complex rule or algorithm. The induction of G can therefore be regarded as the solution of the hardest and most important I.Q. test problem imaginable. That its original solver is a member of this group reflects well not only on the other members, but on the impressive name and purpose of the organization.

Next, Mr. Hoeflin remarks: "Some would argue that there is no right answer because one can concoct an algorithm that would yield any next number." This is a good point. It follows that we must employ some rule which selects among these various algorithms...a "constraint" which reduces their “variety” to just one. But this constraint itself characterizes an algorithm with which infinite others also compete. The regression is interminable. The answer? If we wish to be finally, absolutely, and forever right, we must concentrate on developing a universal syntax for the construction of meaningful algorithms on arbitrary sets and categories of data. The CTMU is such a syntax; moreover, it is minimal, and to that extent elegant.
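Mr. Hoeflin's point can be made concrete with a small sketch, again my own and purely illustrative: Lagrange interpolation will happily produce a polynomial rule that reproduces any given prefix and then yields whatever "next number" one pleases, which is exactly why some further constraint on candidate algorithms is required.

```python
# A small sketch (illustrative only): for any prefix and ANY desired next value,
# one can concoct a polynomial rule fitting both.

from fractions import Fraction

def lagrange_rule(points):
    """Return, as a function, the unique minimal-degree polynomial through points."""
    def rule(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return rule

prefix = [1, 2, 4, 8]
for wanted in (16, 31, -7):
    pts = list(enumerate(prefix, start=1)) + [(len(prefix) + 1, wanted)]
    rule = lagrange_rule(pts)
    assert [rule(n) for n in range(1, len(prefix) + 1)] == prefix  # fits the prefix...
    print("next term:", rule(len(prefix) + 1))                     # ...yet equals 16, 31, or -7
```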

Next: "On the other hand, one might say that the 'right' answer is the one generated by the simplest, most elegant algorithm that can generate all the previous numbers in the series." This is just a formulation of Occam's Razor. It is a tenet of the philosophy of science that there is an infinite number of potential theoretical extensions which contain any given set of facts as a subtheory. These extensions can be ranked by the informational content of their minimal reductions. Occam's Razor selects one of them for minimality with respect to a set of data exhibiting closure under apparent dependency relationships. The word "apparent" implies that there exists at the appropriate point in time a means of observing or testing for such relationships. But we have already seen that it is both futile and unscientific to place arbitrary restrictions on incoming data, including the restrictions of measuring devices and experimental arrangements. So we must always relativize the validity of our "simplest, most elegant" algorithms to observational limitations. Where these limitations characterize the very theory (methodology) on which the algorithm itself is based, we have another "artificial tautology"; the theory is a generator of self-fulfilling predictions.

Notice, however, that if a set of data displays only a subset of measurable attributes - e.g., only "whiteness" instead of the artist's palette of colors the human visual system is capable of detecting - we can safely assume that mensural limitations are not in effect, and can "Occamize" the data to prevent spurious attributes from skewing our theory and its predictions. This is crucial to the solution of the trivial but toothworn "marble problem", as originally formulated by Mr. Hoeflin. If you remain unconvinced by previous arguments concerning it, imagine that you are a zoologist doing a study of montane fauna. Following the tracks of a cougar into a cave, you see a pile of well-gnawed bones. You select ten of them at random and drop them into your specimen sack. Suddenly, you hear a bloodcurdling scream; kitty is coming. You shoulder the sack and bolt. Back at camp, you examine the bones and find that all ten are from mule deer. It now occurs to you to calculate the probability that the cougar's only large prey is mule deer. You estimate that the pile in the cave contained approximately two hundred bones; you have sampled ten at random, and all are muley bones. What, then, is the probability that all of the bones were from mule deer, given the assumption that some definite number of (inedible) bones probably came from each animal who was eaten?

The crucial question, for our purposes, is whether or not you should begin your calculation by assuming that the pile is just as likely to contain the bones of kangaroos, tapirs, pandas, chimps, and Airedales as it is to contain those of mule deer (after all, there might be a zoo or a kennel nearby). If this seems specious, notice that ochre and fuchsia marbles are not correlated with white marbles any better than these exotic species are correlated with mule deer. Obviously, some other species may in fact be known to inhabit the area, and nonzero probabilities must be allowed for them. But where no prior correlation exists, Bayesian inference is not to be initialized as though it does.
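To see how decisively the initialization controls the answer, consider a hedged sketch of the calculation the story invites; the pile size, sample size, and both priors below are my own illustrative choices.

```python
# A hedged sketch: N = 200 bones, a random sample of n = 10, all mule deer.
# The posterior that ALL bones are mule deer depends on the prior over the pile.

from math import comb

N, n = 200, 10

def posterior_all_deer(prior):
    """prior[k] = prior probability that exactly k of the N bones are deer bones."""
    def likelihood(k):               # hypergeometric chance of drawing 10/10 deer
        return comb(k, n) / comb(N, n)
    evidence = sum(prior[k] * likelihood(k) for k in range(N + 1))
    return prior[N] * likelihood(N) / evidence

# Uniform prior over every composition: unobserved species admitted freely.
uniform = [1.0 / (N + 1)] * (N + 1)
print(posterior_all_deer(uniform))     # about 0.055 (exactly 11/201)

# "Occamized" prior: no mass on species with no prior correlation to the data.
occamized = [0.0] * N + [1.0]
print(posterior_all_deer(occamized))   # 1.0
```

Under the uniform prior the posterior that the pile is pure mule deer is only 11/201; under the prior that withholds mass from uncorrelated exotica, it is unity. The initialization, not the sample, does most of the work.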

As I have remarked in a letter or two, probability is defined so as to be meaningless except relative to given data and data formulations. The potential for (probabilistic) information is not strictly equivalent to information. Rather, where such information is inderivable from past data given the best formulation of those data, it is undecidable with respect to the data axiomatization, and inutile for the derivation of theorems therefrom. Thus, there is potentially a nonzero probability that there are tapir bones in the cat's pile, but this probability is currently undecidable and cannot be used in Bayesian inference. Similarly, there is potentially a nonzero probability that solar physics is wrong or incomplete, and can neither formulate nor predict actuation of the mechanism by which the sun is about to go nova and incinerate the planet earth. But we cannot use this probability to calculate the likelihood that "tomorrow will never come" until we revise or extend the applicable physics.

Let us enlarge on the distinction between statistics and causality. Causality is just the most concise exact description of a set of statistical correlations, and can be regarded as the output of an algorithm on statistical input. This algorithm, "logical induction", includes Occam's Razor and its metasyntax. What enters as "probabilistic dependency" emerges as "logical dependency"; the many-valued logic of the former, wherein probabilities are defined as truthvalues, has become two-valued by the formation of distinguishing predicates (or conversion of truthvalues to quantifiers). So logical dependency, or causality, is the inductive transform of probabilistic dependency. This transformation, being largely based on past data, can be rendered inconsistent by new data.
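A rough procedural gloss on this transformation (mine alone; the attributes are purely illustrative) shows the compression of a statistical dependency into a two-valued rule, and its vulnerability to a single new datum:

```python
# Sketch: a probabilistic dependency in past data is compressed into a
# two-valued implication when its conditional frequency reaches 1, and one
# new counterexample renders the compressed rule inconsistent.

def induce(observations, a, b):
    """observations: dicts of boolean attributes. Return a causal rule or a probability."""
    with_a = [o for o in observations if o[a]]
    p = sum(o[b] for o in with_a) / len(with_a)
    if p == 1.0:
        return "causal: for all x, %s(x) -> %s(x)" % (a, b)
    return "statistical only: P(%s | %s) = %.3f" % (b, a, p)

past = [{"struck": True, "lit": True}] * 50 + [{"struck": False, "lit": False}] * 50
print(induce(past, "struck", "lit"))                 # compressed to a logical rule

new = past + [{"struck": True, "lit": False}]        # a fresh counterexample arrives
print(induce(new, "struck", "lit"))                  # back to a mere probability
```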

It has been remarked by statistically-minded scientists that the statistical correlations of which causal axioms are the compressed versions are themselves immune to this sort of vitiation. This is true but impractical. Because they must be ascertained by whoever would put them to use, such correlations are no safer in their "scattered" form than they are under inductive compression; they are ascertainable only by means of devices and techniques which are themselves based on past data. The means and conventions of observation, experimentation, and measurement which characterize any scientific methodology are often designed primarily to detect already-familiar kinds of correlations. To this extent, such means and conventions can themselves be considered the embodiments of past correlations and their apparent conditions, such as metrical locality. This tautology is no less artificial in its statistical guise than in its "theoretical" one; it reaches its formal limit in the mental structures of transducers using science to compute "objective" reality. These transducers project their structures outwardly, along with whatever limitations those structures ultimately impose on their internal models and conceptual algorithms.

Obviously, the structure of a transducer is a more absolute kind of limit than the momentary empirical or theoretical limitations of its science. The purpose of G is to push that limit out as far as possible, and then to define it as scientifically and as logically as possible. The structure of G is thus as deep and complex as the structure of the mind itself. And, as was pointed out in The Resolution of Newcomb's Paradox, it is that which can never be transgressed by the minds of which it is both a model and an outward projection. It is ultimate and inescapable, the only possible theory of its kind and generality. Anyone who proposes to extend human understanding without it, or something trivially isomorphic to it, unwittingly displays either ignorance or delusion.

Mr. Hoeflin goes on: "Almost all science and philosophy seems to depend on this search for the 'simplest, most elegant solution' consistent with the known data, with the proviso that new, unexpected data may occasionally arise, making one's 'simplest' solution wrong or inexpedient." This is one possible expression of Gödel's Theorem, which can thus be interpreted as predicting the inevitable possibility of unpredictable data. We already know, from our above observations on the criteria for sound induction, that theories cannot be allowed to become self-limiting. A metric is a set of spatial relationships qualifying as a theory. So, in order not to make of it an artificial tautology, we must reserve a potential, however unlikely, for metrical violations. Associating this metric with G0 implies the incompleteness of G0. Accordingly, G0 -> G, and the CTMU is born. This, incidentally, is the spin-free derivation of quantum nonlocality alluded to in Noesis #43. No chicken-scratchings are necessary.

Member Chris Cole's "spinless" version of Bell's theorem was expressed in #43 as follows: "It is not possible to reconcile the existence of an objective external reality with the exclusion of nonlocal instantaneous action at a distance". The purely inductive argument above can be reinterpreted in a computational setting after Turing, whose general results on the "halting problem" can be specified to the effect that it is impossible for any of n G-subautomata to compute the metrical distribution of the programming of any other(s) on a real, local-deterministic basis, reality being computative mechanism - and quanta, G subautomata - for all computative purposes. I.e., the "theories" formed by G-subautomata about each other are no more justified in their metrical prejudice than the prejudice itself. Where time is defined on the particular spacetime metric accepted by MN, this effectively lets ND violate MN's time-directionality constraint.
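For members unfamiliar with the Turing result being specialized here, the standard diagonal sketch below may help; it is the generic halting argument only, not the Resolution's particular mapping onto G-subautomata.

```python
# The classic diagonal sketch. `halts` is hypothetical: no correct, total
# implementation of it can exist, as the construction shows.

def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) would halt."""
    raise NotImplementedError("no such decider can exist")

def contrarian(program):
    # Do the opposite of whatever `halts` claims about program run on itself.
    if halts(program, program):
        while True:      # claimed to halt, so loop forever
            pass
    return None          # claimed to loop, so halt at once

# Feeding contrarian to itself yields a contradiction either way: if
# halts(contrarian, contrarian) were True, contrarian(contrarian) would loop;
# if False, it would halt. Hence no automaton can compute, in general, the
# behavior of arbitrary others on a complete, deterministic basis.
```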

Now we come to an inference which almost seems to imply that the person drawing it has not read the paper on which it centers. The antecedent of this inference is: "As Whitehead is supposed to have said, 'seek simplicity - but distrust it'." The consequent: "Whitehead's dictum, applied to Newcomb's Paradox, suggests that it is essentially an unresolvable paradox."

Regarding the antecedent, it is clearly paradoxical to advise that one seek something which cannot be trusted. The hyphenation not only divides the sentence, it demarcates a pair of syntaxes. In one, it is demonstrable that simplicity is an inductive criterion. This, of course, is the derivational syntax of Occam's Razor. But we already know that this theorem, which can only be applied to limited data, ultimately has no loyalty to data limitations; it meretriciously re-applies itself to any potential extension of the original data. Obviously, this is the sense - or syntax - in which it is to be "distrusted". That is, the first syntax is designed to exclude spurious attributes in the theoretical explanation of a finite set of data. The second syntax, a metasyntax of the first, re-admits formerly spurious attributes to the explanation, subject to their actual appearance in the data. In other words, the metasyntax mandates the extension of the syntax in accordance with (experimental or observational) extensions of the data. Everything depends on the data, which must be guarded against prejudice.

As we have seen above, this is just what the CTMU metasyntax G is designed to do for the natural sciences. It is stratified at the ultimate boundaries of the deterministic subsyntaxes defined within it, where those ultimate limitations are spatial, temporal, or phenomenal (attributive or existential). Whitehead's dictum, in its terse and pithy way, advocates the CTMU perspective. Since we have established that this perspective enables the resolution of Newcomb's paradox by relaxation of the time-directional constraint embodied in the "principle of restricted dominance", this aphorism in fact "suggests" the means of resolution. So much for the consequent of Mr. Hoeflin's inference.

But Mr. Hoeflin - who, I repeat, deserves congratulation for voicing his opinions - did not at the time of his letter have the luxury of my refutation, and redraws his inference from another argument. "(2) My other point yields the same conclusion but by a different route. There is a well known 'paradox' concerning the concept of omnipotence: 'If God were omnipotent, he could create a stone too heavy for him to lift - ergo, if God is omnipotent, then he is not omnipotent.' I think Newcomb's Demon is a similarly impossible construct." (This argument recalls a point made by K. Raniere, who opined in Noesis #30 that Newcomb's paradox should be restated in terms of whether or not it is possible to "make an absolute determination on choice without taking away choice".)

There can be nothing more oxymoronic than a transducer which considers its Designer, Effector, and User ("DEUS") to be bound by its own accepting syntax, particularly when this syntax has been thoughtfully endowed with the capacity to formulate extensions of itself by logical self-negation. Yet, the history of philosophy has been blighted by a universal inability to isolate and dissect the flaws in such pro-atheistic arguments as this. It will be both an honor and a pleasure to show that human logic contains within itself the means to disable such weak, but morally destructive, negative circularities.

"Omnipotence" implies an arbitrary ability to define and remove constraint. It includes the power of self-constraint; omnipotence allows God to create a constraint which applies to Himself. This is a stratification: God is at once That which constrains, and That which is constrained. This is no more strange than a Cretan who makes statements reflecting constraints on Cretans, as did the Cretan Epimenides when he said, "All Cretans invariably lie". Note that this latter statement implies neither that Epimenides is not a Cretan, nor that he cannot voice judgments on his race, himself, or any attribute applying to him. What it does imply, according to an application of common sense, is that whatever information such self-referential statements afford concerning their authors roust be carefully checked for self-negating inconsistency. We can believe the pronouncement of Epimenides only if we are willing, by a simple CTMU relativization of information, to except him from it at the time he makes it.

Similarly, those who wish to put self-negating words into the mouth of God, as it were, must make an exception for which they have themselves set the conditions. Thus, God can provisionally deprive Himself of omnipotence, but only by "simultaneously" exercising His omnipotence. The meaning of simultaneity is not the same as usual here; instead, it signifies the fact that God - in the hypothetical manifestation we are considering - occupies the same two levels of G as does ND. His (programmatic) omnipotence and His (programmed) non-omnipotence, set thus within the paradox-resolvent framework of G, cease to be paradoxical. Of course, the relationship of God to G goes well beyond this manifestation.

We can bring the situation even closer to home. Imagine that you are a chain smoker, and that you have just resolved to quit. You establish a constraint to that effect, willing it upon yourself with all your resolve. Having done this, you can now ask yourself: "Can I have a smoke, or can't I?" The two voices in your head, one saying "yes!", and the other "no!", are just a nicotine addict's version of the "omnipotence paradox". That is, if you can choose whether or not to smoke, then you can choose not to smoke. But if you have chosen not to smoke, then you cannot (choose to) smoke.

Notice that this does not cause you to vanish in a puff of self-contradiction. You have merely stratified your intentionality. To keep observing your renunciation of tobacco, it is sufficient that you iterate this regression whenever the constraint weakens enough to let you question it. This example differs from the "omnipotence paradox" primarily in that its constraint does not impose absolute physical limitations on the dependent phase of the protagonist; this permits confinement of its regressive re-affirmation "within" the G0 timetype. Still, it is nothing short of astonishing that so common a predicament has not elicited more empathy among those willing to sacrifice their faith in the name of "logic".

Mr. Hoeflin now speculates in a direction expressly forbidden by the traditional formulation of Newcomb's paradox. "Suppose that in making my decision about whether to take one or both boxes I use a purely random basis for decision such as the decay of an atomic nucleus." Conveniently, my own stronger formulation relaxed the original constraint against basing one's decision on outward, otherwise irrelevant criteria. This was accomplished by arbitrary extension of the transducer MN into its physical environment, a maneuver described in my discussion of knowledge in Noesis #45 (p. 10, top paragraph). Mr. Hoeflin's condition thus has the effect of rendering MN's strategic algorithm dn nondeterministic, and the dynamic augmentation of MN a nondeterministic automaton. These possibilities, of course, were accounted for in the Resolution.

Next: "If the demon could predict this choice, then he would be doing the impossible, since the decay of that nucleus in a given period of time is random rather than deterministic." To be fair, this could be put into an average college physics textbook as a subatomic twist on the venerable oracle of thermodynamics known as Maxwell's Demon, there to reflect the shallowness of such literature with respect to terms like "impossible", "random", and "deterministic". Physicists, who often hold philosophy in contempt for its "fuzzy" and "meaningless" references, frequently fall into the same traps of language as those whom they scorn. Although the above point was dealt with in issue no. 44, a brief refresher may be in order.

The "nondeterminacy" of the decay of an atomic nucleus means that the schedule by which this decay occurs is to some extent not a function of physically measurable variables. Dynamically, this means that the event has an "open neighborhood" effectively isolating it from known physical causes or dynamical mechanisms; the set of determinants of such events is not closed under known deterministic operations. This open neighborhood does just what it sounds like it does, exposing the event to ulterior determinants.

The apparent nondeterminacy of such events, as derived from the statistical formalism used to predict them, is an artifact of that formalism. To consider it any more than that is to make the theory of nuclear decay a source of self-fulfilling predictions. But we have shown above that this is a logically impermissible artificial tautology. Such tautologies attempt to restrict incoming data on the basis of limitations in theory, measurement, or past data, and are not good science. It follows that when you talk about the "randomness" or "determinacy" of phenomena, these predicates are tacitly relativized to your current knowledge and ability. To put it as succinctly as possible, uncertainty is just computationally-relativized nondeterminacy. As science is necessarily computed by experimentative and experimental automata, these automata cannot supplant uncertainty with "absolute" nondeterminacy. Because human scientists have G-subautomatonic models, this goes for them too.

G is designed as the universal mechanism of arbitrary "ulterior determinants". By definition, it is maximally unbiased with regard to these determinants and their effects; it even allows for what we would consider "nonphysical" determinants. Because it models the paradox-resolvent stratification of our internal logic, it is immune to vitiation by paradox; it "regresses" in such a way as to provide arbitrary contextual extensions for paradox-resolution. It features a niche for hypothetical creatures like the Maxwell and Newcomb Demons. This niche is safe from scientific encroachment, because it recedes as science advances. To put it another way, for every theoretical language expressing scientific constraints, there is a metalanguage in which we may express the potential for changes, suspensions, or augmentations of those constraints. G is the active, mechanistic counterpart of this (Tarskian) linguistic regression, ramified and explicated in the CTMU.

>From his premise, Mr. Hoeflin infers: "If the demon can make such a prediction successfully, then we would have to conclude that the universe is essentially deterministic at bottom rather than indelerministic." He then adds, "But we have no way of estab­lishing such an assumption." This is a self-contained antinomy. If the demon is observed to make such a prediction successfully (and repeatedly), then we have "established" - by observing him - that there is a determinative mechanism allowing him to either predict or control the decay. Of course, this does not mean that the universe is "deterministic", but merely that there is an unknown (and possibly nonlocal) mechanism relating to the event...and that the demon can make use of it under the conditions of his trials.

Mr. Hoeflin concludes: "Ergo, the paradox proposes an essentially unsolvable problem, given the present state of our physical science." There is a difference between giving a precise, detailed solution for a problem, and showing that a solution must exist. The distinction is between the constructive and existential viewpoints. The simple yes-or-no, one-box-or-two decision problem central to Newcomb's paradox requires only that MN infer from empirical data that there can exist a solution to the secondary problem of how ND can predict or control the outcome of the game. To solve the decision problem, MN must show that the conditions for existence of a solution to the secondary problem are not "impossible" or "inconsistent" among themselves. G is the required framework, and we needed only use logic to find it. The benighted current state of our physical science bears only on the secondary problem. But it was ND, and not we, who had to solve the secondary problem, and our physics can only try to catch up with him.

Why, some might yet ask, do we need G at all? Can we not simply assume that a rational, conventional, "physical" explanation for the Newcomb data - e.g., physically-measurable, behavior-critical "N-waves" emitted and received by the human brain - is there to be found? This is in fact the "enlightened" side of the traditional approach to science. Whether unfortunately or not, this approach is rapidly converging on bankruptcy. Besides the many indomitable arguments from logic and mathematics, the reasons include the very argument on which Mr. Hoeflin based his objections - the apparent nondeterminacy of quantum events like nuclear decay. Of particular importance is the apparent existence of nonlocal conservation laws which violate the locality criterion of physics.

Physics, as conceived by most of us, is totally dependent on the assumption that causes are physically linked to their effects. Nothing is thought to happen without adjacency and contact, where contact is matter-to-matter, field-to-field, or matter-to-field. Adjacency is totally determined by the metric, or set of spatial relationships, on which it is defined. So physics is inherently metrical in nature, and phenomena which violate the metrical adjacency criterion totally disable the associated physics. In this event, physics must be extended. G is the exclusive syntax of metametrical theoretical extension, where the "metametric" is an arbitrary relational extension of the limiting physical subtheory that we call the "physical metric". Because inductive criteria forbid us to rule out nonlocal correlations and mechanisms, G is not a theoretical "option", but a fully justified metaphysics. All those who care about science and understanding can breathe a sigh of relief that this metaphysics has finally been discovered, precisely defined, and explicated as the CTMU.

Physics has long fixated upon the geometrical properties of space in its deepest analyses of time and causality. The structure of matter, on the other hand, has been seen in terms of algebraic symmetries of components. But physics is process, and petroglyphic mathematical entities can yield only a limited amount of insight on their own. It is time to realize that dynamical processes are computed, and that the mysterious power granted the observer in relativity and quantum theory resides entirely in the computative essence of observation. The universe is automatonic in its parts and in its entirety. Though the CTMU has been ardently desired by the great of intellect since time immemorial, never before has the concept of machine been made powerful enough to derive it. Wait no longer. G has risen at last.

Copyright 1990 by C.M. Langan

 

Copyright © 1990 by the Mega Society. All rights reserved. Copyright for each individual contribution is retained by the author unless otherwise indicated.

 

The Mega Society