Thursday, February 24, 2011

Construction Grammar, Optimality Theory, Harmonic Interaction, Constraint Satisfaction, Situation Semantics, Neutrosophy, Telesophy, Koinosophy

"• Grammar is the system of a language... It’s important to think of grammar as
something that can help you, like a friend.

• Grammar tells the users of a language what choices are possible for the word order of
any sentence to allow that sentence to be clear and sensible - that is, to be
unambiguous.

• ..."prescriptive grammar," a set of "rules" governing the choice of who and whom, the
use of ain’t, and other such matters. Promoted by Jonathan Swift and other literary
figures of the 18th century, this approach to language prescribes the "correct" way to
use language.

• ..."descriptive grammar," which is the study of the ways humans use
systems–particularly syntax and morphology–to communicate
through language.

• Therefore grammar acts as his tool to create meaning."
http://inst.eecs.berkeley.edu/~cs182/sp07/notes/lecture23.gr1.pdf

"At the heart of what shapes Construction Grammar is the following question: what do speakers of a given language have to know and what can they ‘figure out’ on the basis of that knowledge, in order for them to use their language successfully? The appeal of Construction Grammar as a holistic and usage-based framework lies in its commitment to treat all types of expressions as equally central to capturing grammatical patterning (i.e. without assuming that certain forms are more ‘basic’ than others) and in viewing all dimensions of language (syntax, semantics, pragmatics, discourse, morphology, phonology, prosody) as equal contributors to shaping linguistic expressions.

Construction Grammar has now developed into a mature framework, with an established architecture and representation formalism as well as solid cognitive and functional grounding. It is a constraint-based, generative, non-derivational, mono-stratal grammatical model, committed to incorporating the cognitive and interactional foundations of language. It is also inherently tied to a particular model of the ‘semantics of understanding’, known as Frame Semantics, which offers a way of structuring and representing meaning while taking into account the relationship between lexical meaning and grammatical patterning.

The trademark characteristic of Construction Grammar as originally developed consists in the insight that language is a repertoire of more or less complex patterns – CONSTRUCTIONS – that integrate form and meaning in conventionalized and in some aspects non-compositional ways. Form in constructions may refer to any combination of syntactic, morphological, or prosodic patterns and meaning is understood in a broad sense that includes lexical semantics, pragmatics, and discourse structure. A grammar in this view consists of intricate networks of overlapping and complementary patterns that serve as ‘blueprints’ for encoding and decoding linguistic expressions of all types."
http://www.icsi.berkeley.edu/~kay/

"Historically, the notion of construction grammar developed out of the ideas of "global rules" and "transderivational rules" in generative semantics, together with the generative semantic idea of a grammar as a constraint satisfaction system. With the publication of George Lakoff's "Syntactic Amalgams" paper in 1974 (Chicago Linguistics Society, 1974), the idea of transformational derivation became untenable.

CxG was spurred on by the development of Cognitive Semantics, beginning in 1975 and extending through the 1980s. Lakoff's 1977 paper, Linguistic Gestalts (Chicago Linguistic Society, 1977) was an early version of CxG, arguing that the meaning of the whole was not a compositional function of the meaning of the parts put together locally. Instead, he suggested, constructions themselves must have meanings.

CxG was developed in the 1980s by linguists such as Charles Fillmore, Paul Kay, and George Lakoff. CxG was developed in order to handle cases that intrinsically went beyond the capacity of generative grammar.

The earliest study was "There-Constructions," which appeared as Case Study 3 in George Lakoff's Women, Fire, and Dangerous Things.[1] It argued that the meaning of the whole was not a function of the meanings of the parts, that odd grammatical properties of Deictic There-constructions followed from the pragmatic meaning of the construction, and that variations on the central construction could be seen as simple extensions using form-meaning pairs of the central construction.

Fillmore et al.'s (1988) paper on the English let alone construction was a second classic. These two papers propelled cognitive linguists into the study of CxG."
http://en.wikipedia.org/wiki/Construction_grammar

"The notion construction has become indispensable in present-day linguistics and in language studies in general. This volume extends the traditional domain of Construction Grammar (CxG) in several directions, all with a cognitive basis. Addressing a number of issues (such as coercion, discourse patterning, language change), the contributions show how CxG must be part and parcel of cognitively oriented studies of language, including language universals. The volume also gives informative accounts of how the notion construction is developed in approaches that are conceptually close to, and relatively compatible with, CxG: Conceptual Semantics, Word Grammar, Cognitive Grammar, Embodied Construction Grammar, and Radical Construction Grammar."
http://tinyurl.com/45g4qa3

OPTIMALITY THEORY
Constraint Interaction in Generative Grammar

"Everything is possible but not
everything is permitted." -Richard Howard, "The Victor Vanquished"

"It is demonstrated," he said, "that things cannot be otherwise: for, since everything was made for a purpose, everything is necessarily made for the best purpose." -Candide ou l'optimisme. Ch. I.

"The basic idea we will explore is that Universal Grammar consists largely of a set of constraints on representational well-formedness, out of which individual grammars are constructed. The representational system we employ, using ideas introduced into generative phonology in the 1970's and 1980's, will be rich enough to support two fundamental classes of constraints: those that assess output configurations per se and those responsible for maintaining the faithful preservation of underlying structures in the output. Departing from the usual view, we do not assume that the constraints in a grammar are mutually consistent, each true of the observable surface or of some level of representation. On the contrary: we assert that the constraints operating in a particular language are highly conflicting and make sharply contrary claims about the well-formedness of most representations. The grammar consists of the constraints together with a general means of resolving their conflicts. We argue further that this conception is an essential prerequisite for a substantive theory of UG.

It follows that many of the conditions which define a particular grammar are, of necessity, frequently violated in the actual forms of the language. The licit analyses are those which satisfy the conflicting constraint set as well as possible; they constitute the optimal analyses of underlying forms. This, then, is a theory of optimality with respect to a grammatical system rather than of well-formedness with respect to isolated individual constraints.

The heart of the proposal is a means for precisely determining which analysis of an input best satisfies (or least violates) a set of conflicting conditions. For most inputs, it will be the case that every possible analysis violates many constraints. The grammar rates all these analyses according to how well they satisfy the whole constraint set and produces the analysis at the top of this list as the output. This is the optimal analysis of the given input, and the one assigned to that input by the grammar. The grammatically well-formed structures are those that are optimal in this sense.

How does a grammar determine which analysis of a given input best satisfies a set of inconsistent well-formedness conditions? Optimality Theory relies on a conceptually simple but surprisingly rich notion of constraint interaction whereby the satisfaction of one constraint can be designated to take absolute priority over the satisfaction of another. The means that a grammar uses to resolve conflicts is to rank constraints in a strict dominance hierarchy. Each constraint has absolute priority over all the constraints lower in the hierarchy.

Such prioritizing is in fact found with surprising frequency in the literature, typically as a subsidiary remark in the presentation of complex constraints. We will show that once the notion of constraint-precedence is brought in from the periphery and foregrounded, it reveals itself to be of remarkably wide generality, the formal engine driving many grammatical interactions. It will follow that much that has been attributed to narrowly specific constructional rules or to highly particularized conditions is actually the responsibility of very general well-formedness constraints. In addition, a diversity of effects, previously understood in terms of the triggering or blocking of rules by constraints (or merely by special conditions), will be seen to emerge from constraint interaction.

Although we do not draw on the formal tools of connectionism in constructing Optimality Theory, we will establish a high-level conceptual rapport between the mode of functioning of grammars and that of certain kinds of connectionist networks: what Smolensky (1983, 1986) has called "Harmony maximization", the passage to an output state with the maximal attainable consistency between constraints bearing on a given input, where the level of consistency is determined exactly by a measure derived from statistical physics. The degree to which a possible analysis of an input satisfies a set of conflicting well-formedness constraints will be referred to as the Harmony of that analysis. We thereby respect the absoluteness of the term "well-formed", avoiding terminological confusion and at the same time emphasizing the abstract relation between Optimality Theory and Harmony-theoretic network analysis. In these terms, a grammar is precisely a means of determining which of a pair of structural descriptions is more harmonic. Via pair-wise comparison of alternative analyses, the grammar imposes a harmonic order on the entire set of possible analyses of a given underlying form. The actual output is the most harmonic analysis of all, the optimal one. A structural description is well-formed if and only if the grammar determines it to be the optimal analysis of the corresponding underlying form.

With an improved understanding of constraint interaction, a far more ambitious goal becomes accessible: to build individual phonologies directly from universal principles of well-formedness. (This is clearly impossible if we imagine that constraints must be surface- or at least level-true.) The goal is to attain a significant increase in the predictiveness and explanatory force of grammatical theory. The conception we pursue can be stated, in its purest form, as follows: Universal Grammar provides a set of highly general constraints. These often conflicting constraints are all operative in individual languages. Languages differ primarily in how they resolve the conflicts: in the way they rank these universal constraints in strict domination hierarchies that determine the circumstances under which constraints are violated. A language-particular grammar is a means of resolving the conflicts among universal constraints.

On this view, Universal Grammar provides not only the formal mechanisms for constructing particular grammars, it also provides the very substance that grammars are built from. Although we shall be entirely concerned in this work with phonology and morphology, we note the implications for syntax and semantics."
http://roa.rutgers.edu/files/537-0802/537-0802-PRINCE-0-0.PDF
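
A minimal sketch of the strict-dominance idea described above, in Python. The constraint names, candidates, and violation counts are invented for illustration; comparing violation profiles lexicographically in ranking order is what makes a single violation of a higher-ranked constraint outweigh any number of violations of lower-ranked ones:

# Minimal Optimality Theory-style evaluation (illustrative only).
# Constraints, candidates, and violation counts are invented; a real
# tableau would come from an actual phonological analysis.

def evaluate(candidates, ranking):
    """Return the candidate whose violation profile is best under the
    strict dominance hierarchy given by `ranking`. Tuples ordered by
    `ranking` and compared lexicographically implement strict dominance."""
    def profile(cand):
        return tuple(candidates[cand].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

# Hypothetical tableau for an input /CVCC/ with two markedness constraints
# (NoCoda, *ComplexCoda) and one faithfulness constraint (Max, "don't delete").
tableau = {
    "CVCC": {"NoCoda": 1, "*ComplexCoda": 1, "Max": 0},
    "CVC":  {"NoCoda": 1, "*ComplexCoda": 0, "Max": 1},
    "CV":   {"NoCoda": 0, "*ComplexCoda": 0, "Max": 2},
}

# Two "languages" = two rankings of the same universal constraints.
print(evaluate(tableau, ["Max", "NoCoda", "*ComplexCoda"]))  # faithful: CVCC
print(evaluate(tableau, ["NoCoda", "*ComplexCoda", "Max"]))  # deletes:  CV

The same comparison can be read as a pair-wise harmonic ordering of candidates, which is the connection to Harmonic Grammar drawn in the links below.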

The Optimality Theory ~ Harmonic Grammar Connection:
http://uit.no/getfile.php?PageId=874&FileId=187

Can Connectionism Contribute to Syntax? Harmonic Grammar, with an Application
http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-485-90.pdf

Integrating Connectionist and Symbolic Computation for the Theory of Language:
http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-628-92.pdf

Searle, Subsymbolic Functionalism and Synthetic Intelligence:
http://view.samurajdata.se/psview.php?id=8fa2ad05&page=1

Subsymbolic Computation and the Chinese Room:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.6034&rep=rep1&type=pdf

Subsymbolic Computation:
http://consc.net/mindpapers/6.3e

Experiments with Subsymbolic Action Planning with Mobile Robots:
http://cswww.essex.ac.uk/staff/udfn/ftp/aisb03pi.pdf

"Symbolic AI involves programs that represent knowledge, relationships etc. explicitly as a set of symbols. In a computer program, this means a set of variables and functions. These variables have names and their purpose is clear. Each variable represents a particular pieces of knowledge, and each function represents a particular inference rule.

For instance, here is a rule extracted from an expert system:

IF blood_sugar_level > 20 AND gram_negative = TRUE
   THEN disease = "STREPTOCOCCUS"
A program that used this rule would have to have three variables, one called blood_sugar_level, one called gram_negative and one called disease. Even someone with no medical knowledge can see what these variables represent, and they can understand the rule fairly easily.

Sub-symbolic AI does not model intelligence with variables that represent particular pieces of knowledge. Instead, it simulates small areas of the brain, with variables representing brain cells. These cells are "connected" inside the program so that they can pass their signals from one brain cell to another. These connections can be strong or weak, so that a large part of the signal is passed on, or only a small part.
The knowledge is stored in the strength of these connections, so that some inputs have a large effect on the activity within the program and some have only a small effect. However, the individual connections do not represent individual pieces of knowledge. Instead, the knowledge is spread out over all the connections - each connection doesn't do much on its own, but together they produce an intelligent result.

A good analogy for sub-symbolic AI is that of an anthill. The individual ants are rather stupid. They have small brains and are controlled mainly by chemical signals. However, the colony of ants as a whole seems to have some "group intelligence" that arises out of the ants working together. The whole is more than the sum of the parts.

Which is better?

Each approach has its advantages and disadvantages.
Symbolic AI is easy to understand. You can immediately see how rules are translated into program statements. Sub-Symbolic AI is a lot less obvious. It is impossible to take part of the program and say "Ah, this part represents such-and-such a rule."

However, symbolic AI only works if the knowledge that you are trying to capture can be put into rules and named variables. Imagine how hard it would be to set up a series of rules that could distinguish between pictures of male faces and pictures of female faces. Sub-symbolic AI is programming without rules - the connections inside the program evolve under their own control until the program has learned the knowledge.

Sub-symbolic AI is often called Neural Networks, or Connectionism, and is dealt with in greater detail in another section of this tutorial.

Of course, there is no reason why an intelligent system should be exclusively symbolic or sub-symbolic. The parts of the program that work better using sub-symbolic AI can be implemented using neural networks, and the results that they produce can be processed by symbolic AI systems. In this way, we get the best of both worlds!"
http://richardbowles.tripod.com/ai_guide/intro.htm#symbol
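
A small side-by-side sketch of the two styles, assuming nothing beyond the quoted tutorial: the rule mirrors the expert-system example above, while the single artificial "neuron" (with made-up weights) stands in for knowledge stored in connection strengths rather than named variables:

# --- Symbolic: an explicit, readable rule (mirrors the rule quoted above) ---
def diagnose(blood_sugar_level, gram_negative):
    if blood_sugar_level > 20 and gram_negative:
        return "STREPTOCOCCUS"
    return None

# --- Sub-symbolic: knowledge stored in connection strengths (invented weights) ---
def neuron(inputs, weights, bias):
    # Weighted sum plus threshold; no single weight "means" anything on its own.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

print(diagnose(25, True))               # -> STREPTOCOCCUS (the rule fires)
print(neuron([25, 1], [0.1, 2.0], -4))  # -> 1 (the net's verdict, not a named rule)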

Optimality Theory from a cognitive science perspective:
http://www.deepdyve.com/lp/de-gruyter/optimality-theory-from-a-cognitive-science-perspective-iXtzWfSYty

"Situation theory is an information theoretic mathematical ontology developed to support situation semantics, an alternative semantics to the better known possible world semantics originally introduced in the 1950s. Rather than a semantics based on total possible worlds, situation semantics is a relational semantics of partial worlds called situations. Situations support (or fail to support) items of information, variously called states of affairs or infons. The partial nature of situations gives situation theory and situation semantics a flexible framework in which to model information and the context-dependent meaning of sentences in natural language."
http://jacoblee.net/occamseraser/2010/10/03/introduction-to-situation-theory-part-1/

Information Retrieval and Situation Theory

“In the beginning there was information. The word came later.” Fred I. Dretske

"The explosive growth of information has made it a matter of survival for companies to have good information retrieval tools at their disposal. Information storage is becoming cheaper and, because of this, more voluminous. As a result, a growing amount of (expensive) information disappears unused (or unread) simply because there is no way to retrieve it effectively. The problem has been studied for years as what has become known as "the information retrieval problem", which can be described as follows:

“In which way can relevant information be distinguished from irrelevant information corresponding to a certain information need?”

Systems trying to solve this problem automatically are called information retrieval (IR) systems. These systems, which are developed from a defined model, try to furnish a solution to the problem. There are various models of IR systems; the most publicized ones are the Boolean, the Vector Space [1], and the Probabilistic models [2, 3]. More recently, van Rijsbergen [4] suggested a model of an IR system based on logic, because the use of an adequate logic provides all the necessary concepts to embody the different functions of an IR system."
http://www.cs.uu.nl/research/techreps/repo/CS-1996/1996-04.pdf
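
As a concrete illustration of one of the models mentioned above, here is a minimal vector-space retrieval sketch: documents and the query are treated as bags of words, and documents are ranked by cosine similarity to the query. The toy documents are invented, and real systems add term weighting (e.g. tf-idf) and much more:

# Minimal vector-space retrieval sketch (toy documents, raw term counts only).
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "d1": "information retrieval systems store and retrieve information",
    "d2": "situation theory models partial information about situations",
    "d3": "probabilistic models rank documents by probability of relevance",
}
query = "retrieve relevant information"

qvec = Counter(query.split())
ranked = sorted(docs, key=lambda d: cosine(Counter(docs[d].split()), qvec), reverse=True)
print(ranked)   # documents ordered by similarity to the query: ['d1', 'd2', 'd3']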

"Formal concept analysis refers to both an unsupervised machine learning technique and, more broadly, a method of data analysis. The approach takes as input a matrix specifying a set of objects and the properties thereof, called attributes, and finds both all the "natural" clusters of attributes and all the "natural" clusters of objects in the input data, where

a "natural" object cluster is the set of all objects that share a common subset of attributes, and

a "natural" property cluster is the set of all attributes shared by one of the natural object clusters.

Natural property clusters correspond one-for-one with natural object clusters, and a concept is a pair containing both a natural property cluster and its corresponding natural object cluster. The family of these concepts obeys the mathematical axioms defining a lattice, and is called a concept lattice (in French this is called a Treillis de Galois because the relation between the sets of concepts and attributes is a Galois connection)."
http://en.wikipedia.org/wiki/Formal_concept_analysis
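
A naive sketch of the Galois connection described above: starting from a tiny, invented object-attribute context, every subset of objects is closed into a formal concept (a matching pair of object cluster and attribute cluster). This enumerates the concepts by brute force and is not an efficient FCA algorithm:

# Naive formal concept enumeration for a tiny, invented context.
from itertools import combinations

context = {              # object -> attributes it has
    "duck":    {"flies", "swims", "lays_eggs"},
    "swan":    {"flies", "swims", "lays_eggs"},
    "penguin": {"swims", "lays_eggs"},
    "bat":     {"flies"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attrs(objs):
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    return {o for o in objects if attrs <= context[o]}

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        attrs = common_attrs(set(objs))        # shared attributes of the object set
        extent = common_objs(attrs)            # close the object set against them
        concepts.add((frozenset(extent), frozenset(attrs)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))

Ordered by inclusion of their extents, these pairs form the concept lattice (Treillis de Galois) mentioned in the quote.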

Using Situation Lattices to Model and Reason about Context

"Much recent research has focused on using situations rather than individual pieces of context as a means to trigger adaptive system behaviour. While current research on situations emphasises their representation and composition, they do not provide an approach on how to organise and identify their occurrences efficiently. This paper describes how lattice theory can be utilised to organise situations, which reflects the internal structure of situations such as generalisation and dependence.
We claim that situation lattices will prove beneficial in identifying situations,
and maintaining the consistency and integrity of situations. They will also help in resolving the uncertainty issues inherent in context and situations by working with Bayesian Networks."
http://www.simondobson.org/softcopy/mrc-lattices-07.pdf
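
A toy illustration of the generalisation ordering the abstract refers to: situations modelled as sets of required context facts and ordered by inclusion, so a more specific situation sits below its generalisations. The situation names and facts are invented, and the paper's uncertainty handling (Bayesian networks) is not modelled here:

# Toy situation "lattice" by set inclusion; names and facts are invented.
situations = {
    "at_home":        {"location=home"},
    "asleep_at_home": {"location=home", "posture=lying", "lights=off"},
    "watching_tv":    {"location=home", "tv=on"},
}

def generalises(s, t):
    """s generalises t if every fact required by s also holds in t."""
    return situations[s] <= situations[t]

observed = {"location=home", "posture=lying", "lights=off", "time=23:40"}
active = [name for name, facts in situations.items() if facts <= observed]
print(active)                                    # ['at_home', 'asleep_at_home']
print(generalises("at_home", "asleep_at_home"))  # True: it sits above it in the ordering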

A Categorial View at Generalized Concept Lattices: Generalized Chu Spaces
http://cla.inf.upol.cz/papers/cla2006/short/paper6.pdf

Using Hypersets to Represent Semistructured Data
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.4993

Integrating Information on the Semantic Web Using Partially Ordered Multi Hypersets

"The semantic web is supposed to be a global, machine-readable information re-pository, but there is no agreement on a single common information metamodel. To prevent the fragmentation of this nascent semantic web, this thesis introduces the expressive and flexible Braque metamodel. Based on non-well-founded par-tially ordered multisets (hyper pomsets), augment with a powerful reflection mechanism, the metamodel supports the automated, lossless, semantically trans-parent integration of relationship-based metamodels. Complete mappings to Braque from the Extensible Markup Language (XML), the Resource Description Framework (RDF), and the Topic Maps standard are provided. The mappings place information at the same semantic level, allowing uniform navigation and queries without regard for the original model boundaries."
http://www.ideanest.com/braque/Thesis-web.pdf

A Logic of Complex Values

"Our world is run by a logic that has no room for values, by a scientific methodology that disdains the very notion. In this paper we try to redress the balance, extracting many modern scientific findings and forms of philosophical reasoning from the field of complex systems, to show that values can and should be made part of an enhanced normative logic derived from Neutrosophy. This can then be employed to quantitatively evaluate our beliefs based on their dynamic effects on a full set of human values."

Keywords and Phrases: Complex Systems, Axiology, Neutrosophic Logic, Intrinsic Values, Synergy, Dynamical Fitness, Attractors, Connectivity, Holarchy, Teleology, Agents
http://www.calresco.org/lucas/logic.htm

Intentionally and Unintentionally. On Both, A and Non-A, in Neutrosophy

"The paper presents a fresh new start on the neutrality of neutrosophy in that "both A and Non-A" as an alternative to describe Neuter-A in that we conceptualize things in both intentional and unintentional background. This unity of opposites constitutes both objective world and subjective world. The whole induction of such argument is based on the intensive study on Buddhism and Daoism including I-ching. In addition, a framework of contradiction oriented learning philosophy inspired from the Later Trigrams of King Wen in I-ching is meanwhile presented. It is shown that although A and Non-A are logically inconsistent, but they are philosophically consistent in the sense that Non-A can be the unintentionally instead of negation that leads to confusion. It is also shown that Buddhism and Daoism play an important role in neutrosophy, and should be extended in the way of neutrosophy to all sciences according to the original intention of neutrosophy.
...
Neutrosophy is a new branch of philosophy that studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. It is the base of neutrosophic logic, a multiple-valued logic that generalizes fuzzy logic and deals with paradoxes, contradictions, antitheses, and antinomies."
http://arxiv.org/abs/math.GM/0201009
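
A rough sketch of the triple-valued idea: a neutrosophic truth value carries independent degrees of truth, indeterminacy, and falsity. The min/max connectives below follow one common convention; the literature defines several variants, so treat them as an assumption made for illustration:

# Toy neutrosophic truth values (T, I, F), each in [0, 1]; the connectives
# use one common min/max convention and are an illustrative assumption.
from dataclasses import dataclass

@dataclass
class N:
    t: float  # degree of truth
    i: float  # degree of indeterminacy
    f: float  # degree of falsity

def neg(a: N) -> N:
    return N(a.f, a.i, a.t)

def conj(a: N, b: N) -> N:
    return N(min(a.t, b.t), max(a.i, b.i), max(a.f, b.f))

def disj(a: N, b: N) -> N:
    return N(max(a.t, b.t), min(a.i, b.i), min(a.f, b.f))

# A paradoxical statement can carry high truth AND high falsity at once,
# which is what separates this from ordinary fuzzy logic.
liar = N(t=0.6, i=0.3, f=0.6)
print(conj(liar, neg(liar)))   # N(t=0.6, i=0.3, f=0.6): not forced down to 0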

Neutrosophy

"This paper is a part of a National Science Foundation interdisciplinary project proposal and introduces a new viewpoint in philosophy, which helps to the generalization of classical 'probability theory', 'fuzzy set' and 'fuzzy logic' to
, and respectively.
One studies connections between mathematics and philosophy, and mathematics and other disciplines as well (psychology, sociology, economics)."
http://arxiv.org/ftp/math/papers/0010/0010099.pdf

PROCEEDINGS OF THE FIRST INTERNATIONAL CONFERENCE ON NEUTROSOPHY, NEUTROSOPHIC LOGIC, NEUTROSOPHIC SET, NEUTROSOPHIC PROBABILITY AND STATISTICS
http://fs.gallup.unm.edu//neutrosophicProceedings.pdf

Neutrosophy in Situation Analysis
http://www.fusion2004.foi.se/papers/IF04-0400.pdf

Comments to Neutrosophy
http://arxiv.org/ftp/math/papers/0111/0111237.pdf

TELESOPHY: A SYSTEM FOR MANIPULATING THE KNOWLEDGE OF A COMMUNITY

"A telesophy system provides a uniform set of commands to manipulate knowledge within a community. Telesophy literally means "wisdom at a distance". Such a system can effectively handle large amounts of information over a network by making transparent the data type and physical location. This is accomplished by supporting a uniform information space of units of information. Users can browse through this space, filtering and grouping together related information units, then share these with other members of the community."
http://www.canis.uiuc.edu/projects/telesophy/telesophy002.html

Towards Telesophy: Federating All the World's Knowledge

"The Net is the global network, which enables users worldwide to interact with information. As new technologies mature, the functions of the protocols deepen, moving closer to cyberspace visions of "being one with all the world's knowledge". The Evolution of the Net has already proceeded from data transmission in the Internet to information retrieval in the Web. The global protocols are evolving towards knowledge navigation in the Interspace, moving from syntax to semantics. In the future, infrastructure will support analysis, for interactive correlations across knowledge sources. This moves closer towards "telesophy"."
http://krustelkram.blogspot.com/2009/03/towards-telesophy-federating-all-world.html

"The final quadrant’s emphasis is on maturing society’s shared lesson set to yield a coherent sense of the whole. It seeks to integrate the disparate knowledge contained in its members’ individual lesson sets, to reflect self-critically on diverse experiences and search for unifying principles that give guidance over time and in a wide variety of circumstances. It is a society that grows in collective wisdom. I call this the “koinosophic society,” coined from the Greek “koinos,” or “common,’ and “Sophia,” for “wisdom.”"
http://www.learndev.org/dl/VS3-00g-LearnSocMultDim.PDF

A Holoinformational Model of Consciousness
http://www.quantumbiosystems.org/admin/files/QBS3%20207-220.pdf

Wednesday, February 16, 2011

Diagonalization, Fixed Points, Hyperset Models, Renormalized Rationality, Bayesian Epistemology, Reflective Equilibrium, Metacausal Self-Determinacy



Four Main Aspects of Yin and Yang Relationship

Yin-Yang are opposites

They are either at opposite ends of a cycle, like the seasons of the year, or opposites on a continuum of energy or matter. This opposition is relative, and can only be spoken of in relationships. For example: Water is Yin relative to steam but Yang relative to ice. Yin and Yang are never static but in a constantly changing balance.

Interdependent: Can not exist without each other

The Tai Ji (Supreme Ultimate) diagram shows the relationship of Yin & Yang and illustrates the interdependence of Yin & Yang. Nothing is totally Yin or totally Yang. Just as a state of total Yin is reached, Yang begins to grow. Yin contains the seed of Yang and vice versa. They constantly transform into each other. For example: no energy without matter, no day without night. The classics state: "Yin creates Yang and Yang activates Yin".

Mutual consumption of Yin and Yang

Relative levels of Yin Yang are continuously changing. Normally this is a harmonious change, but when Yin or Yang are out of balance they affect each other, and too much of one can eventually weaken (consume) the other.

Four (4) possible states of imbalance:

Preponderance (Excess) of Yin
Preponderance (Excess) of Yang
Weakness (Deficiency) of Yin
Weakness (Deficiency) of Yang

Inter-transformation of Yin and Yang.

One can change into the other, but this is not a random event; it happens only when the time is right. For example: spring only comes when winter is finished.
Image/Quote: http://www.sacredlotus.com/theory/yinyang.cfm

"Before going into these two parts, I want to say something about the rather peculiar philosophical outlook that is presupposed in my attempt to combine, of course without confusing them, historical exegesis with criticism. It may be summarized in the following two points:

a) Oppositions are of general importance. They play a prominent role everywhere, both in human life and outside it, both in language, in human relationships, in human conflicts, in personal development, in society, history, art, religion, philosophy, all the sciences (including logic and mathematics) and in their subject matter. Because they are to be found everywhere, it is impossible to define oppositions as such or to reduce them to something else. A vantage point outside their realm cannot be found. For if it existed, it would, in virtue of that, be opposed to oppositions.

b) We are acquainted with them. Every child knows that great is opposed to small, warm to cold, even to odd, inside to outside and yes to no. Nevertheless it is difficult to understand oppositions as such, especially those we are involved in. This difficulty is subjective. There is nothing problematic about oppositions themselves. The problem is ours. And its source is our unwillingness to acknowledge it. That is the main point of my philosophical orientation: I am opposed to the widespread view that as a matter of course we do understand oppositions.

As far as I can see, this illusion is mainly due to ignoring that antipoles are counterparts. As such, i.e. in virtue of their being opposed to each other, they must have something in common. Making their opposition possible, this something cannot be outside the antipoles. For example: the journey from A to B is opposed to the journey from B to A; even more so if the very same road is followed. But the different journeys cannot be opposed to each other, unless the road is such, that it can be travelled in two opposite directions. Or, to give a similar example, heads and tails are opposite sides of one and the same coin. There must be something, different from the two sides, that has them both. Nevertheless, it is impossible to isolate the coin from the two sides it has."
http://ctmucommunity.org/wiki/Isotelesis#Russell.27s_Paradox

"Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic. The term 'reflective equilibrium' was coined by John Rawls and popularized in his celebrated A Theory of Justice as a method for arriving at the content of the principles of justice.

Rawls argues that human beings have a "sense of justice" which is both a source of moral judgment and moral motivation. In Rawls's theory, we begin with "considered judgments" that arise from the sense of justice. These may be judgments about general moral principles (of any level of generality) or specific moral cases. If our judgments conflict in some way, we proceed by adjusting our various beliefs until they are in "equilibrium," which is to say that they are stable, not in conflict, and provide consistent practical guidance. Rawls argues that a set of moral beliefs in ideal reflective equilibrium describes or characterizes the underlying principles of the human sense of justice.

An example of the method of reflective equilibrium may be useful. Suppose Zachary believes in the general principle of always obeying the commands in the Bible, and mistakenly thinks that these are completely encompassed by every Old Testament command. Suppose also that he thinks that it is not ethical to stone people to death merely for being Wiccan. These views may come into conflict (see Exodus 22:18, but see John 8:7). If they do, Zachary will then have several choices. He can discard his general principle in search of a better one (for example, only obeying the Ten Commandments), modify his general principle (for example, choosing a different translation of the Bible, or including Jesus' teaching from John 8:7 "If any of you is without sin, let him be the first to cast a stone" into his understanding), or change his opinions about the point in question to conform with his theory (by deciding that witches really should be killed). Whatever the decision, he has moved toward reflective equilibrium.

Reflective equilibrium serves an important justificatory function within Rawls's political theory. The nature of this function, however, is disputed. The dominant view, best exemplified by the work of Norman Daniels and Thomas Scanlon, is that the method of reflective equilibrium is a kind of coherentist method for the epistemic justification of moral beliefs. However, in other writings, Rawls seems to argue that his theory bypasses traditional metaethical questions, including questions of moral epistemology, and is intended instead to serve a practical function. This provides some motivation for a different view of the justificatory role of reflective equilibrium. On this view, the method of reflective equilibrium serves its justificatory function by linking together the cognitive and motivational aspects of the human sense of justice in the appropriate way.

Rawls argues that candidate principles of justice cannot be justified unless they are shown to be stable. Principles of justice are stable if, among other things, the members of society regard them as authoritative and reliably comply with them. The method of reflective equilibrium determines a set of principles rooted in the human sense of justice, which is a capacity that both provides the material for the process of reflective equilibration and our motivation to adhere to principles we judge morally sound. The method of reflective equilibrium serves the aim of defining a realistic and stable social order by determining a practically coherent set of principles that are grounded in the right way in the source of our moral motivation, such that we will be disposed to comply with them. As Fred D'Agostino puts it, stable principles of justice will require considerable "up-take" by the members of society. The method of reflective equilibrium provides a way of settling on principles that will achieve the kind of "up-take" necessary for stability.

Reflective equilibrium is not static; it will change as the individual considers his opinions about individual issues or explores the consequences of his principles.

Rawls applied this technique to his conception of a hypothetical original position from which people would agree to a social contract. He arrived at the conclusion that the optimal theory of justice is the one to which people would agree from behind a veil of ignorance, not knowing their social positions. See the article on John Rawls for more information about his theory."
http://en.wikipedia.org/wiki/Reflective_equilibrium

Cognitive science and the law

"Numerous innocent people have been sent to jail based directly or indirectly on normal, but flawed, human perception, memory and decision making. Current cognitive-
science research addresses the issues that are directly relevant to the connection between normal cognitive functioning and such judicial errors, and suggests means by which the false-conviction rate could be reduced. Here, we illustrate how this can be achieved by reviewing recent work in two related areas: eyewitness testimony and fingerprint analysis. We articulate problems in these areas with reference to specific legal cases and demonstrate how recent findings can be used to address them. We also discuss how researchers can translate their conclusions into language and ideas that can influence and improve the legal system."
http://cognitrn.psych.indiana.edu/busey/HomePage/Preprints/TicsBuseyLoftus.pdf

Ethics, Law and the Challenge of Cognitive Science
...
"A currently explored approach in the study of morality, law and mind is a mentalist theory of ethics and law. It tries to reconstruct the idea of human practical reason with the conceptual tools mainly developed in a certain part of the multi-faceted modern theory of the mind. Of special importance is the study of language. Modern linguistics have gained importance way beyond the concrete field of understanding the world of language by providing insights in the general structure of the human mind and its higher mental faculties. Generative Grammar has made the assumption plausible, that human beings possess a language faculty with inborn properties – a universal grammar – that determines the possible properties human natural languages may have. The language faculty is the cognitive precondition of the possibility of language. The determination of the properties of universal grammar are now the object of very advanced, highly complex studies, with fascinating results. Given the explanatory power of this mentalist approach to the study of language, the question has been asked for years, whether practical philosophy could be informed by this approach."
http://www.germanlawjournal.com/pdfs/Vol08No06/PDF_Vol_08_No_06_577-616_Articles_Mahlmann.pdf

A Cognitive Science Perspective on Legal Incentives:
http://law.wustl.edu/faculty/documents/drobak/drobakoncognitivescience.pdf

Institute for Ethics & Emerging Technologies:
http://ieet.org/index.php/IEET/IEETblog

FUNCTIONAL LAW AND ECONOMICS: THE SEARCH FOR VALUE-NEUTRAL PRINCIPLES OF LAWMAKING

"During its relatively short history, the law and economics movement has developed three distinct schools of thought. The first two schools of thought—the positive school and the normative school-- developed almost concurrently. The positive school, historically associated with the early contributions of the Chicago school, restricts itself to the descriptive study of the incentives produced by the legal
system largely because its adherents believe that efficient legal rules evolve naturally. On the other hand, the normative school, historically associated with the early contributions of the Yale school, sees the law as a tool for remedying “failures” that arise in the market.

The subsequently developed functional school of law and economics draws from public choice theory and the constitutional paradigm of the Virginia school of economics, and offers a third perspective which is neither fully positive nor fully normative. Recognizing that there are structural forces that often impede the development of efficient legal rules, the functional school allows for the possibility of using insights from public choice economics to remedy faulty legal rules at a meta-level. However, unlike the normative school, the functional school also recognizes that there are failures in the political market that make it unlikely that changes will be made on a principled basis. Also, because it is difficult to identify all of the ultimate consequences of corrective legal rules, the functional school focuses on using economic theory to design legal meta-rules that lead to efficiency ex ante. Achieving this ex ante efficiency requires the design of legal institutions that induce individuals to internalize the effects of their private activities, as well as to reveal their true preferences in situations where collective decisions must be made.

In addition to these overarching differences about the role of law and economics in the design of legal institutions, there are other methodological differences among these schools of thought. These differences are illustrated by the debate on how to define efficiency on the individual decision-making level and in the aggregate. Specifically, the schools often take different stands on how social preferences should be evaluated and what exactly should be maximized to achieve an optimal legal system. In the sections that follow, we lay out the development of these schools of thought, detailing where they differ methodologically."
http://www.cklawreview.com/wp-content/uploads/vol79no2/Parisi_Klick.pdf

"Functional law and economics bypasses the wealth/utility divide by focusing on choice or revealed preference as the criterion of decision. That is, by designing mechanisms through which parties are induced to reveal their subjective preferences, the functional law and economics approach obviates the need for third parties, such as judges or legislators, to decide between wealth and utility as the appropriate maxi-mand. The institutions favored by the functional approach minimize the impediments to the full revelation of the subjective preferences of the parties to a transaction by fo-cusing on incentive compatibility mechanisms. This mechanism-design approach tends to align individual and social optimality."
http://128.122.51.12/ecm_dlv2/groups/public/@nyu_law_website__journals__journal_of_law_and_liberty/documents/documents/ecm_pro_060909.pdf
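
The passages above turn on mechanisms that induce parties to reveal their true preferences. A textbook instance (not taken from the quoted papers) is the second-price, or Vickrey, auction, in which bidding one's true value is a weakly dominant strategy, so preferences are elicited without any planner having to guess them:

# Classic incentive-compatible mechanism: the second-price (Vickrey) auction.
# Bidders and values below are invented for illustration.

def second_price_auction(bids):
    """Return (winner, price): highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

true_values = {"alice": 10, "bob": 7, "carol": 4}

# Truthful bidding: Alice wins and pays Bob's bid, keeping a surplus of 3.
print(second_price_auction(true_values))    # ('alice', 7)

# Shading her bid below Bob's only makes Alice lose; it never lowers her price.
shaded = dict(true_values, alice=6)
print(second_price_auction(shaded))         # ('bob', 6)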

Behavioral, Social, and Computer Science Series:
http://bscs3.calit2.net/ps.html

"We survey the recent literature on the role of information in mechanism design. First, we discuss an emerging literature on the role of endogenous payoff and strategic information for the design and the efficiency of the mechanism. We specifically consider information management in the form of acquisition of new information or disclosure of existing information. Second, we argue that in the presence of endogenous information, the robustness of the mechanism to the type space and higher order beliefs becomes a natural desideratum. We discuss recent approaches to robust mechanism design and robust implementation."
http://cowles.econ.yale.edu/P/cd/d15a/d1532-r.pdf

Incentive Centered Design
Making the Internet Safe, Fun, and Profitable
...
"Robust Mechanism Design and Implementation: A Selective Survey"

The theory of mechanism design helps us understand institutions ranging from simple trading rules to political constitutions. We can understand institutions as the solution to a well defined planner’s problem of achieving some objective or maximizing some utility function subject to incentive constraints. A common criticism of mechanism design theory is that the optimal mechanisms solving the well defined planner’s problem seem too sensitive to the assumed structure of the environment. We suggest a robust formulation of the mechanism design and implementation problem. The talk will be based on past and current work by the authors.
http://stiet.si.umich.edu/node/524
http://www.calit2.net/events/popup.php?id=1448

Harsanyi Type Spaces with Knowledge Operators

In this paper, we provide a notion of structure preserving maps (i.e. knowledge-belief morphisms) between knowledge-belief spaces. Then we show that - under the condition that the knowledge operators of the players in a knowledge-belief space operate only on measurable subsets of the space - there is a unique (up to isomorphism) universal knowledge-belief space to which every knowledge-belief space can be mapped by a unique knowledge-belief morphism.
...
Type spaces in the sense of Harsanyi (1967/68) and knowledge spaces are the most important tools to understand games of incomplete information. A player’s uncertainty in a type space is represented by a σ-additive probability measure over the space. The knowledge spaces, respectively Harsanyi type spaces, used in applications are usually finite. Regardless, they typically do not contain enough states so as to represent all the potential states of mind that the players could possibly have about the interaction at hand. This poses the question whether the missing states prevent a correct analysis of the problem.

Mertens and Zamir (1985) showed that this problem does not arise for Harsanyi type spaces. They showed that, under suitable topological assumptions on the type spaces there is a “largest” Harsanyi type space which “contains” all possible states of the world. That is, they showed the existence of a universal Harsanyi type space to which every Harsanyi type space can be mapped by a morphism. A morphism is a map that preserves the state of nature and the beliefs of the players. Therefore, the analysis carried out in a finite or otherwise restrictive Harsanyi type space could be transferred, intact, to the universal Harsanyi type space.
http://www.tark.org/proceedings/tark_jul10_05/p126-meier.pdf
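
A minimal finite sketch of the hierarchy being described: each type carries a belief (a probability distribution) over the other player's types, and higher-order beliefs can be read off from these. The two players, their types, and the numbers are invented:

# Tiny finite type space for two players; all numbers are invented.
beliefs = {
    # player 1's types and their beliefs about player 2's types
    ("p1", "optimist"):  {"friendly": 0.8, "hostile": 0.2},
    ("p1", "pessimist"): {"friendly": 0.3, "hostile": 0.7},
    # player 2's types and their beliefs about player 1's types
    ("p2", "friendly"):  {"optimist": 0.6, "pessimist": 0.4},
    ("p2", "hostile"):   {"optimist": 0.1, "pessimist": 0.9},
}

def second_order(player, own_type, other):
    """What (player, own_type) believes the other player believes about player's type:
    average the other player's first-order beliefs, weighted by our belief
    about which type they are."""
    first_order = beliefs[(player, own_type)]
    out = {}
    for their_type, p in first_order.items():
        for my_type, q in beliefs[(other, their_type)].items():
            out[my_type] = out.get(my_type, 0.0) + p * q
    return out

# The optimistic player 1's second-order belief:
print(second_order("p1", "optimist", "p2"))
# {'optimist': 0.8*0.6 + 0.2*0.1 = 0.5, 'pessimist': 0.8*0.4 + 0.2*0.9 = 0.5}

Iterating this construction unfolds the infinite belief hierarchy that the universal type space is meant to contain in one structure.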

Harsanyi Type Spaces and Final Coalgebras Constructed from Satisfied Theories

"This paper connects coalgebra with a long discussion in the foundations of game theory on the modeling of type spaces. We argue that type spaces are coalgebras, that universal type spaces are final coalgebras, and that the modal logics already proposed in the economic theory literature are closely related to those in recent work in coalgebraic modal logic. In the other direction, the categories of interest in this work are usually measurable spaces or compact (Hausdorff) topological spaces. A coalgebraic version of the construction of the universal type space due to Heifetz and Samet [Journal of Economic Theory 82 (2) (1998) 324–341] is generalized for some functors in those categories. Since the concrete categories of interest have not been explored so deeply in the coalgebra literature, we have some new results. We show that every functor on the category of measurable spaces built from constant functors, products, coproducts, and the probability measure space functor has a final coalgebra. Moreover, we construct this final coalgebra from the relevant version of coalgebraic modal logic. Specifically, we consider the set of theories of points in all coalgebras and endow this set with a measurable and coalgebra structure."
doi:10.1016/j.entcs.2004.02.036

The Belief Hierarchy Representation of Harsanyi Type Spaces with Redundancy

In the standard Bayesian formulation of games of incomplete information, some types may represent the same hierarchy of beliefs over the given set of basic uncertainty. Such types are called redundant types. Redundant types present an obstacle to the Bayesian analysis because Mertens-Zamir approach of embedding type spaces into the universal type space can only be applied without redundancies. Also, because redundant structures provide different Bayesian equilibrium predictions (Liu [24]), their existence has been an obstacle to the universal argument of Bayesian games. In this paper, we show that every type space, even if it has redundant types, can be embedded into a space of hierarchy of beliefs by adding an appropriate payoff irrelevant parameter space. And, for any type space, the parameter space can be chosen to be the space {0, 1}. Moreover, Bayesian equilibrium is characterized by this “augmented” hierarchy of beliefs. In this process, we show that the syntactic characterization of types by Sadzik [31] is essentially the same as whether or not they can be mapped to the same “augmented” hierarchy of beliefs. Finally, we show that the intrinsic correlation in Brandenburger-Friedenberg [10] can be interpreted as a matter of redundant types and we can obtain their results in our framework.
http://www.gtcenter.org/Archive/2010/WEGT/Yokotani1005.pdf

"In game theory, a Bayesian game is one in which information about characteristics of the other players (i.e. payoffs) is incomplete. Following John C. Harsanyi's framework, a Bayesian game can be modelled by introducing Nature as a player in a game. Nature assigns a random variable to each player which could take values of types for each player and associating probabilities or a probability density function with those types (in the course of the game, nature randomly chooses a type for each player according to the probability distribution across each player's type space). Harsanyi's approach to modelling a Bayesian game in such a way allows games of incomplete information to become games of imperfect information (in which the history of the game is not available to all players). The type of a player determines that player's payoff function and the probability associated with the type is the probability that the player for whom the type is specified is that type. In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type (and so the payoff function) of another player.

Such games are called Bayesian because of the probabilistic analysis inherent in the game. Players have initial beliefs about the type of each player (where a belief is a probability distribution over the possible types for a player) and can update their beliefs according to Bayes' Rule as play takes place in the game, i.e. the belief a player holds about another player's type might change on the basis of the actions they have played. The lack of information held by players and modelling of beliefs mean that such games are also used to analyse imperfect information scenarios."
http://en.wikipedia.org/wiki/Bayesian_game
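
A toy calculation in the spirit of the description above: nature draws the opponent's type from a common prior, the player evaluates actions by expected payoff over types, and observing the opponent's move updates the belief by Bayes' rule. The game, the prior, and the strategies are invented:

# Toy Bayesian-game calculation; numbers and strategies are invented.
prior = {"tough": 0.3, "weak": 0.7}      # nature's distribution over opponent types

# Payoff to "enter" against each opponent type (payoff to "stay out" is 0).
payoff_enter = {"tough": -2, "weak": 3}

expected = sum(prior[t] * payoff_enter[t] for t in prior)
print(expected)   # 0.3*(-2) + 0.7*3 = 1.5 > 0, so entering is attractive ex ante

# Each type plays "fight" with a different (assumed) probability.
p_fight_given_type = {"tough": 0.9, "weak": 0.2}

def posterior(observed="fight"):
    """Belief about the opponent's type after seeing their action (Bayes' rule)."""
    if observed == "fight":
        likelihood = p_fight_given_type
    else:
        likelihood = {t: 1 - p for t, p in p_fight_given_type.items()}
    joint = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint}

print(posterior("fight"))   # tough: 0.27/0.41 ≈ 0.66, weak: 0.14/0.41 ≈ 0.34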

"Type spaces are mathematical structures used in theoretical parts of economics and game theory. The are used to model settings where agents are described by their types, and these types give us “beliefs about the world”, “beliefs about each other's beliefs about the world”, “beliefs about each other's beliefs about each other's beliefs about the world”, etc. That is, the formal concept of a type space is intended to capture in one structure an unfolding infinite hierarchy related to interactive belief.

John C. Harsanyi (1967) makes a related point as follows:

It seems to me that the basic reason why the theory of games with incomplete information has made so little progress so far lies in the fact that these games give rise, or at least appear to give rise, to an infinite regress in reciprocal expectations on the part of the players.

This quote is from the landmark papers he published in 1967 and 1968, showing how to convert a game with incomplete information into one with complete yet imperfect information. Many of the details are not relevant here and would take us far from our topic. But three points are noteworthy. First, the notion of types goes further than what we described above: an agent A's type involves (or induces) beliefs about the types of the other agents, their types involve beliefs about A's type, etc. So the notion of a type is already circular.

Second, despite this circularity, the informal concept of a universal type space (as a single type space in which all types may be found) is widespread in areas of non-cooperative game theory and economic theory. And finally, the formalization of type spaces was left open: Harsanyi did not really formalize type spaces in his original papers (he used the notion); this was left to later researchers."
http://plato.stanford.edu/entries/nonwellfounded-set-theory/harsanyi-type-spaces.html

AXIOMATIC FOUNDATIONS OF HARSANYI'S TYPE SPACE AND SOLUTION CONCEPTS

"The standard model of games with incomplete information is the Harsanyi's
type space. We argue that the standard model has some interpretational difficulties: the definition of a type is not clear, and it is not clear how to discuss the restrictions imposed by the standard model. In order to address these di¢ culties, we propose an alternative way to describe games in a language that refers only to objective outcomes like states of Nature and actions. The objective description is more general than the type space in the sense that all type spaces have an objective description, but not all objectve descriptions have a type space representation. We provide conditions that are necessary and (almost) sufficient for the objective description to be derived from a type space. Finally, we argue that that the objective description contains all (and not more) information that is necessary for the decision theoretic analysis. In particular, we argue that many standard solution concepts can be understood as an application of Common Knowledge of Rationality on different kinds of objective descriptions."
http://www.saet.illinois.edu/papers_and_talks/event-04/Harsanyi%20foundations%20V.pdf

"A Chu space is a transformable matrix whose rows transform forwards while its columns transform backwards.

Generality of Chu spaces
Chu spaces unify a wide range of mathematical structures, including the following:

Relational structures such as sets, directed graphs, posets, and small categories.
Algebraic structures such as groups, rings, fields, modules, vector spaces, lattices, and Boolean algebras.
Topologized versions of the above, such as topological spaces, compact Hausdorff spaces, locally compact abelian groups, and topological Boolean algebras.

Algebraic structures can be reduced to relational structures by a technique described below. Relational structures constitute a large class in their own right. However when adding topology to relational structures, the topology cannot be incorporated into the relational structure but must continue to use open sets.

Chu spaces offer a uniform way of representing relational and topological structure simultaneously. This is because Chu spaces can represent relational structures via a generalization of topological spaces which allows them to represent topological structure at the same time using the same machinery."
http://chu.stanford.edu/
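
A tiny sketch of the matrix picture quoted above: a Chu space over {0,1} as a points-by-states matrix, with a morphism given by a forward map on points and a backward map on states satisfying the adjointness condition. The two small spaces below are invented:

# A Chu space over {0,1} as a matrix r[point][state]; a Chu morphism from A to B
# is a pair of maps, f on points (forward) and g on states (backward), with
# r_A(a, g(y)) == r_B(f(a), y) for every point a of A and state y of B.

A = [[1, 0],           # 2 points x 2 states
     [0, 1]]
B = [[1, 0, 1],        # 2 points x 3 states; its third column copies A's first
     [0, 1, 0]]

def is_chu_morphism(rA, rB, f, g):
    points_A = range(len(rA))
    states_B = range(len(rB[0]))
    return all(rA[a][g[y]] == rB[f[a]][y] for a in points_A for y in states_B)

f = {0: 0, 1: 1}         # rows of A transform forwards into rows of B
g = {0: 0, 1: 1, 2: 0}   # columns of B transform backwards into columns of A
print(is_chu_morphism(A, B, f, g))   # True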

"A fixed point combinator (or fixed-point operator) is a higher-order function that computes a fixed point of other functions. A fixed point of a function f is a value x such that f(x) = x. For example, 0 and 1 are fixed points of the function f(x) = x2, because 02 = 0 and 12 = 1. Whereas a fixed-point of a first-order function (a function on "simple" values such as integers) is a first-order value, a fixed point of a higher-order function f is another function p such that f(p) = p. A fixed point combinator, then, is a function g which produces such a fixed point p for any function f:

p = g(f), f(p) = p

or, alternatively:

f(g(f)) = g(f).

Because fixed-point combinators are higher-order functions, their history is intimately related to the development of lambda calculus."
http://en.wikipedia.org/wiki/Fixed_point_combinator
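
A working illustration in Python. Because Python evaluates arguments eagerly, the sketch uses the applicative-order variant (often called the Z combinator) rather than the classic Y, with the standard factorial example:

# Fixed-point combinator, applicative-order (Z) form.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function: given some function rec, it returns one more
# layer of factorial. Z ties the knot so that rec ends up being the result itself.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)   # a fixed point: fact_step(factorial) behaves like factorial
print(factorial(5))        # 120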

"In his 1975 article "Outline of a Theory of Truth", Kripke showed that a language can consistently contain its own truth predicate, which was deemed impossible by Alfred Tarski, a pioneer in the area of formal theories of truth. The approach involves letting truth be a partially defined property over the set of grammatically well-formed sentences in the language. Kripke showed how to do this recursively by starting from the set of expressions in a language which do not contain the truth predicate, and defining a truth predicate over just that segment: this action adds new sentences to the language, and truth is in turn defined for all of them. Unlike Tarski's approach, however, Kripke's lets "truth" be the union of all of these definition-stages; after a denumerable infinity of steps the language reaches a "fixed point" such that using Kripke's method to expand the truth-predicate does not change the language any further. Such a fixed point can then be taken as the basic form of a natural language containing its own truth predicate. But this predicate is undefined for any sentences that do not, so to speak, "bottom out" in simpler sentences not containing a truth predicate. That is, " 'Snow is white' is true" is well-defined, as is " ' "Snow is white" is true' is true," and so forth, but neither "This sentence is true" nor "This sentence is not true" receive truth-conditions; they are, in Kripke's terms, "ungrounded."
http://en.wikipedia.org/wiki/Saul_Kripke
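
A toy model of the construction just described, under heavy simplifying assumptions (named sentences and a crude three-clause grammar, not Kripke's formal apparatus): the truth predicate starts out empty, and at each stage every sentence that settles given the current partial extension is added, until nothing changes. Grounded sentences receive values at the fixed point; the liar and the truth-teller never do:

# Sentence forms:
#   ("atom", v)    -- a base, non-semantic sentence with truth value v
#   ("T", name)    -- "the sentence called `name` is true"
#   ("notT", name) -- "the sentence called `name` is not true"
sentences = {
    "snow":        ("atom", True),        # 'Snow is white'
    "s1":          ("T", "snow"),         # " 'Snow is white' is true "
    "s2":          ("T", "s1"),           # ...and so on up the hierarchy
    "liar":        ("notT", "liar"),      # 'This sentence is not true'
    "truthteller": ("T", "truthteller"),  # 'This sentence is true'
}

def evaluate(sentence, val):
    """Partial evaluation: True/False, or None if still undefined."""
    kind, arg = sentence
    if kind == "atom":
        return arg
    referred = val.get(arg)               # current partial truth predicate
    if referred is None:
        return None
    return referred if kind == "T" else (not referred)

def least_fixed_point(sentences):
    val = {}                              # stage 0: the truth predicate is empty
    while True:
        new = {}
        for name, sentence in sentences.items():
            v = evaluate(sentence, val)
            if v is not None:
                new[name] = v
        if new == val:                    # nothing more settles: the fixed point
            return val
        val = new

print(least_fixed_point(sentences))
# {'snow': True, 's1': True, 's2': True} -- liar and truth-teller stay ungrounded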

Fixed Points and Self-Reference:
http://www.emis.de/journals/HOA/IJMMS/Volume7_2/698458.pdf

PARADOXES, SELF-REFERENCE AND TRUTH IN THE 20TH CENTURY
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.196.5918&rep=rep1&type=pdf

Translating the Hypergame Paradox: Remarks on the set of founded elements of a relation:
http://dare.uva.nl/document/29721

"We want to end our discussion of examples of circularity in mathematics with occurrences of circularity in some very obvious and basic mathematical formulations. First, we make the following observation: On the one hand,circularity, or more precisely self-referential sentences, were used to prove some of the most important results in the foundations of mathematics. On the other hand, circular concepts can be used to introduce new mathematical objects. We will see a set theoretical example later: the introduction of circular sets (hypersets) can be based on solutions of set theoretical equations (or more generally systems of equations)."
http://tobias-lib.uni-tuebingen.de/volltexte/2002/477/pdf/Diss11.pdf

Hypersets as Truth Values
http://www.ritsumei.ac.jp/se/~tjst/doc/tjst/998-hstv.pdf

A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points

"The point of these observationsis not the reduction of the familiar to the unfamiliar[...]but the extension of the familiar to cover many more cases." - Saunders MacLane, Categories for the Working Mathematician Page 226

Following F. William Lawvere, we show that many self-referential paradoxes, incompleteness theorems and fixed point theorems fall out of the same simple scheme. We demonstrate these similarities by showing how this simple scheme encompasses the semantic paradoxes, and how they arise as diagonal arguments and fixed point theorems in logic, computability theory, complexity theory and formal language theory.
http://tinyurl.com/4dnl73y
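
A finite caricature of the diagonal scheme in question (the set A and the particular function f below are mine, chosen only for illustration): given any attempted enumeration f from a set A into its power set, the diagonal set D = {x in A : x not in f(x)} differs from every f(x) at x itself, so f cannot be onto.

    A = range(6)
    f = {0: {1, 2}, 1: {0, 1}, 2: set(), 3: {3, 4, 5}, 4: {0, 2, 4}, 5: set(A)}

    D = {x for x in A if x not in f[x]}      # Cantor's diagonal set
    print(D)                                  # {0, 2}
    print(all(D != f[x] for x in A))          # True: D is missed by every f(x)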

Self-reference and fixed points: A discussion and an extension of Lawvere's Theorem

"We consider an extension of Lawvere's Theorem showing that all classical results on limitations (i.e. Cantor, Russel, Godel, etc.) stem from the same underlying connection between self-referentiality and fixed points. We first prove an even stronger version of this result. Secondly, we investigate the Theorem's converse, and we are led to the conjecture that any structure with the fixed point property is a retract of a higher reflexive domain, from which this property is inherited. This is proved here for the category of chain complete posets with continuous morphisms. The relevance of these results for computer science and biology is briefly considered.

Key words Self-reference - fixed points - cartesian closed categories - diagonal arguments - continuous lattices - biological autonomy - reflexive domains - fractals"
http://www.springerlink.com/content/h881p406u0378850/

"In mathematical logic, the diagonal lemma or fixed point theorem establishes the existence of self-referential sentences in formal theories of the natural numbers, if those theories are strong enough to represent all computable functions. Such sentences can be used to prove fundamental results such as Gödel's incompleteness theorems and Tarski's indefinability theorem."
http://en.wikipedia.org/wiki/Diagonal_lemma
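
For reference, the lemma's conclusion can be written out compactly (a standard formulation, paraphrased here rather than quoted from any one source): for every formula F(x) with one free variable in a theory T strong enough to represent the computable functions, there is a sentence G such that

    T \vdash G \leftrightarrow F(\ulcorner G \urcorner)

that is, T proves G equivalent to F applied to the numeral coding G itself; taking F(x) to say "x is not provable in T" gives Gödel's first incompleteness theorem, and taking it to say "x is not true" gives Tarski's indefinability theorem, as mentioned above.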

"Although magic squares have no immediate practical use, they have always exercised a great influence upon thinking people. It seems to me that they contain a lesson of great value in being a palpable instance of the symmetry of mathematics, throwing thereby a clear light upon the order that pervades the universe wherever we turn, in the infinitesimally small interrelations of atoms as well as in the immeasurable domain of the starry heavens, an order which, although of a different kind and still more intricate, is also traceable in the development of organized life, and even in the complex domain of human action.

Magic squares are a visible instance of the intrinsic harmony of the laws of number, and we are thrilled with joy at beholding this evidence which reflects the glorious symmetry of the cosmic order.

Pythagoras says that number is the origin of all things, and certainly the law of number is the key that unlocks the secrets of the universe. But the law of number possesses an immanent order, which is at first sight mystifying, but on a more intimate acquaintance we easily understand it to be intrinsically necessary; and this law of number explains the wondrous consistency of the laws of nature. Magic squares are conspicuous instances of the intrinsic harmony of number, and so they will serve as an interpreter of the cosmic order that dominates all existence.

Magic squares are a mere intellectual play that illustrates the nature of mathematics, and, incidentally, the nature of existence dominated by mathematical regularity. They illustrate the intrinsic harmony of mathematics as well as the intrinsic harmony of the laws of the cosmos.

In arithmetic we create a universe of figures by the process of counting; in geometry we create another universe by drawing lines in the abstract field of imagination, laying down definite directions; in algebra we produce magnitudes of a still more abstract nature, expressed by letters. In all these cases the first step producing the general conditions in which we move, lays down the rule to which all further steps are subject, and so every one of these universes is dominated by a consistency, producing a wonderful symmetry, which in the cosmic world has been called by Pythagoras "the harmony of the spheres."

There is no science that teaches the harmonies of nature more clearly than mathematics, and the magic squares are like a magic mirror which reflects a ray of the symmetry of the divine norm immanent in all things, in the immeasurable immensity of the cosmos not less than in the mysterious depths of the human mind."
http://www.archive.org/stream/magicsquarescube00andrrich/magicsquarescube00andrrich_djvu.txt

Mathematical Occultism and Its Explanation

"The peculiar interest of magic squares and all lusus numerorum in general lies in the fact that they possess the charm of mystery. They appear to betray some hidden intelligence which by a preconceived plan produces the impression of intentional design, a phenomenon which finds its close analogue in nature." -Paul Carus [3, p. vii]
http://tinyurl.com/45ncqnb

The Uncanny Logic of the Magic Square

Jonathan Leverkühn's Experiments

The uncanny logic of Fascism is perhaps most graphically captured in Mann's depiction of Leverkühn as a figure embodying both a coldly rational intellect and a fascination with magical phenomena.
...
Dialectic of Enlightenment goes some way toward explaining the ideological implications of the relationship between magic and science which surfaces in the laboratory scene. The symbol of this uncanny logic is, of course, the magic square which accompanies Leverkühn from his student days in Halle to his death. Zeitblom notices an engraving on the wall of Leverkühn's lodgings in Halle, representing a mathematical puzzle, 'a so-called magic square' (Mann 1968, 92). Although this puzzle is based on the completely rational logic of mathematics, the result, nevertheless, strikes Zeitblom as inexplicable: 'What the principle was upon which this magic uniformity rested I never made out' (Mann 1968, 92). In a highly paradoxical fashion, this excessively rational system is at the same time excessively irrational. When Mann has Zeitblom draw the reader's attention to the symbolic significance of the magic square, he suggests that Leverkühn's whole life and art stand under this uncanny combination of reason and superstition. Since, as we will see later, Leverkühn's twelve-tone technique exemplifies what is to Zeitblom this discomfiting paradox, Mann intimates by extrapolation that German National Socialism is similarly marked. Throughout Doctor Faustus, Leverkühn both affirms and denies the dialectical logic of Hegel's system just as he both affirms and denies the power of irrational forces.

The magic square tends to be read as a major indicator of Leverkühn's irrational or demonic tendencies; it seems to suggest that his music unleashes the dark forces which enlightened modernity had so successfully repressed. According to this interpretation, the German catastrophe has to be attributed to a lack of reason. This conventional view overlooks that the irrational effect of the magic square is in fact the result of a totally rational system. Since this mathematical puzzle consists of a grid in which each square has a number assigned to it, there is no space left that is not subject to rational calculation. It isn't that the rational space is subverted by a residual irrational element; on the contrary, the magical effect is produced by an excessive reliance on mathematical reasoning. It seems, then, that an analysis that is satisfied to identify 'barbarism' with the irrational tendencies of volkish ideas is blind to the 'barbarism' that Horkheimer and Adorno ascribe to the rationalizing forces of capitalistic modernity. Far from being the other of barbarism, reason is itself implicated in the demonic possibilities of the Enlightenment narrative which found their extreme articulation in German National Socialism."
http://tinyurl.com/4mn3yg3

The Number of “Magic” Squares, Cubes, and Hypercubes
http://www.msri.org/people/members/matthias/papers/magic.pdf
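
As a down-to-earth check on what is being counted, here is a brute-force sketch of my own (not from the linked paper): the 3x3 magic squares on the numbers 1..9 are exactly the eight rotations and reflections of the Lo Shu square, each line summing to 15.

    from itertools import permutations

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

    squares = [p for p in permutations(range(1, 10))
               if all(p[a] + p[b] + p[c] == 15 for a, b, c in LINES)]
    print(len(squares))                          # 8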

"The octonions are the largest of the four normed division algebras. While somewhat neglected due to their nonassociativity, they stand at the crossroads of many interesting fields of mathematics. Here we describe them and their relation to Clifford algebras and spinors, Bott periodicity, projective and Lorentzian geometry, Jordan algebras, and the exceptional Lie groups. We also touch upon their applications in quantum logic, special relativity and supersymmetry.
...
There are exactly four normed division algebras: the real numbers (R), complex numbers (C), quaternions (H), and octonions (O). The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother: not ordered, but algebraically complete. The quaternions, being noncommutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic: they are nonassociative.
...
In what follows we shall try to explain the octonions and their role in algebra, geometry, and topology. In Section 2 we give four constructions of the octonions: first via their multiplication table, then using the Fano plane, then using the Cayley–Dickson construction and finally using Clifford algebras, spinors, and a generalized concept of ‘triality’ advocated by Frank Adams [2]. Each approach has its own merits. In Section 3 we discuss the projective lines and planes over the normed division algebras — especially O — and describe their relation to Bott periodicity, the exceptional Jordan algebra, and the Lie algebra isomorphisms listed above. Finally, in Section 4 we discuss octonionic constructions of the exceptional Lie groups, especially the ‘magic square’."
http://arxiv.org/PS_cache/math/pdf/0105/0105155v4.pdf
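
The Cayley–Dickson construction mentioned in Section 2 of the quoted survey can be sketched in a few lines of Python (my own toy encoding; sign conventions differ between sources but give isomorphic algebras): elements are nested pairs with reals at the bottom, and each doubling loses a property on schedule, commutativity at the quaternions and associativity at the octonions.

    import random

    def conj(x):                                   # conjugation: negate the second half
        return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

    def neg(x):
        return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

    def add(x, y):
        return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

    def mul(x, y):                                 # (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
        if isinstance(x, tuple):
            (a, b), (c, d) = x, y
            return (add(mul(a, c), neg(mul(conj(d), b))),
                    add(mul(d, a), mul(b, conj(c))))
        return x * y

    def flatten(x):
        return flatten(x[0]) + flatten(x[1]) if isinstance(x, tuple) else [x]

    def rand_elt(depth):                           # depth 3 gives 8 real components: an octonion
        if depth == 0:
            return random.uniform(-1, 1)
        return (rand_elt(depth - 1), rand_elt(depth - 1))

    # Quaternion units as pairs of complex numbers (themselves pairs of reals):
    i = ((0.0, 1.0), (0.0, 0.0))
    j = ((0.0, 0.0), (1.0, 0.0))
    k = ((0.0, 0.0), (0.0, 1.0))
    print(mul(i, j) == k, mul(j, i) == neg(k))     # True True: noncommutative

    # One more doubling gives octonions; random elements almost surely fail associativity.
    x, y, z = rand_elt(3), rand_elt(3), rand_elt(3)
    associator = add(mul(mul(x, y), z), neg(mul(x, mul(y, z))))
    print(max(abs(c) for c in flatten(associator)) > 1e-9)   # True (almost surely): nonassociative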

MAGIC SQUARES OF LIE ALGEBRAS
http://arxiv.org/PS_cache/math/pdf/0001/0001083v2.pdf

Freudenthal magic square
http://en.wikipedia.org/wiki/Freudenthal_magic_square

"The concept of superrationality (or renormalized rationality) was coined by Douglas Hofstadter, in his article series and book "Metamagical Themas". Superrationality is a type of rational decision making which is different than the usual game-theoretic one, since a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a game-theoretically rational player will defect. Superrationality is not a mainstream model within game theory.

The idea of superrationality is that two logical thinkers analyzing the same problem will think of the same correct answer. For example, if two persons are both good at arithmetic, and both have been given the same complicated sum to do, it can be predicted that both will get the same answer before the sum is known. In arithmetic, knowing that the two answers are going to be the same doesn't change the value of the sum, but in game theory, knowing that the answer will be the same might change the answer itself."
http://en.wikipedia.org/wiki/Superrationality
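
The contrast can be restated numerically in a few lines (my own sketch, using standard prisoner's-dilemma payoffs): a game-theoretically rational player compares rows for each fixed opponent move and finds that defection dominates, while a superrational player assumes a symmetric opponent will reach the same choice and therefore compares only the two symmetric outcomes.

    # Row player's payoffs: (my move, opponent's move) -> payoff
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # Ordinary game-theoretic reasoning: best reply to each fixed opponent move.
    best_reply = {opp: max("CD", key=lambda me: PAYOFF[(me, opp)]) for opp in "CD"}
    print(best_reply)                                   # {'C': 'D', 'D': 'D'}: defect dominates

    # Superrational reasoning: both players will end up making the same choice.
    print(max("CD", key=lambda me: PAYOFF[(me, me)]))   # 'C': cooperate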

"In quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures, renormalization is any of a collection of techniques used to treat infinities arising in calculated quantities.

When describing space and time as a continuum, certain statistical and quantum mechanical constructions are ill defined. To define them, the continuum limit has to be taken carefully.

Renormalization determines the relationship between parameters in the theory, when the parameters describing large distance scales differ from the parameters describing small distances. Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspicious, provisional procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in several fields of physics and mathematics."
http://en.wikipedia.org/wiki/Renormalization

"The idea of locking-in recurs throughout science. In Gödel, Escher, Bach I discussed the phenomenon called renormalization -- the way that elementary particles such as electrons and positrons and photons all take each other into account in their very core. The notion is a mathematical one, but for a good metaphor, recall how your own identity depends on the identities of your close friends and relatives, and how theirs in turn depends on yours and on their close friends' and relatives' identities, and so on, and so on. This was the image I described for the "I at the center" in the Post Scriptum to Chapter 10. Another good graphic representation of this idea is shown in Figure 24-4, where identity emerges out of a renormalization process.

The tangledness of one's own self is a perfect metaphor for understanding what renormalization is all about. And the best way to image how you emerge from such a complex tangle is to begin by imagining yourself as a "zeroth-order person" -- that is, someone totally unaware and inconsiderate of all others. (Of course, such a person would barely be a person, barely a self at all: a perfect baby.) Then imagine how "you" would be modified if you started to take other people into account, always considering others as perfect babies, or zeroth order people. This gives a "first-order" version of you. You are beginning to have an identity, emerging from this modeling of others inside yourself. Now iterate: second-order people are those who take into account the identities of first-order people. And on it goes. The final result is renormalized people: people who take into account the identities of renormalized people. I know it sounds circular, and indeed it is, but paradoxical it is not -- at least no more than are the fixed points of Raphael Robinson's puzzle.
...
In short, locking-in -- that is, convergent and self-stabilizing behavior -- will surely pervade the ultimate explanation of most mysteries of the mind."
http://tinyurl.com/48pfb45

DIAGONAL ARGUMENTS AND CARTESIAN CLOSED CATEGORIES (Lawvere)
http://www.tac.mta.ca/tac/reprints/articles/15/tr15.pdf

Self-referential Paradoxes, Incompleteness and Fixed Points

"Who doesn’t like self-referential paradoxes? There is something about them that appeals to all and sundry. And, there is also a certain air of mystery associated with them, but when people talk about such paradoxes in a non-technical fashion indiscriminately, especially when dealing with Gödel’s incompleteness theorem, then quite often it gets annoying!

Lawvere in ‘Diagonal Arguments and Cartesian Closed Categories‘ sought, among several things, to demystify the incompleteness theorem. To pique your interest, in a self-commentary on the above paper, he actually has quite a few harsh words, in a manner of speaking.

“The original aim of this article was to demystify the incompleteness theorem of Gödel and the truth-definition theory of Tarski by showing that both are consequences of some very simple algebra in the cartesian-closed setting. It was always hard for many to comprehend how Cantor’s mathematical theorem could be re-christened as a “paradox” by Russell and how Gödel’s theorem could be so often declared to be the most significant result of the 20th century. There was always the suspicion among scientists that such extra-mathematical publicity movements concealed an agenda for re-establishing belief as a substitute for science.”

In the aforesaid paper, Lawvere of course uses the language of category theory – the setting is that of cartesian closed categories – and therefore the technical presentation can easily get out of reach of most people’s heads – including myself. Thankfully, Noson S. Yanofsky has written a nice paper, ‘A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points’, that is a lot more accessible and fun to read as well. Yanofsky employs only the notions of sets and functions, thereby avoiding the language of category theory, to bring out and make accessible as much as possible the content of Lawvere’s paper. Cantor’s theorem, Russell’s Paradox, the non-definability of satisfiability, Tarski’s non-definability of truth and Gödel’s (first) incompleteness theorem are all shown to be paradoxical phenomena that merely result from the existence of a cartesian closed category satisfying certain conditions. The idea is to use a single formalism to describe all these diverse phenomena."
...
"Jonathan: yes, I see your example quoted approvingly in Metamagical Themas. This was one of his earliest columns in Scientific American, but there is a rich mine of ideas there he keeps coming back to in later columns.

He mentions for instance Lee Sallows’s magnificent

Only the fool would take trouble to verify that his sentence was composed of ten a’s, three b’s, four c’s, four d’s, forty-six e’s, sixteen f’s, four g’s, thirteen h’s, fifteen i’s, two k’s, nine l’s, four m’s, twenty-five n’s, twenty-four o’s, five p’s, sixteen r’s, forty-one s’s, thirty-seven t’s, ten u’s, eight v’s, eight w’s, four x’s, eleven y’s, twenty-seven commas, twenty-three apostrophes, seven hyphens, and, last but not least, a single !

(Nice extra little nugget of self-reference in substituting ‘his’ for ‘this’ as the tenth word!)

Then Raphael Robinson proposed a fun little puzzle: complete the sentence

In this sentence, the number of occurrences of 0 is __, of 1 is __, of 2 is __, of 3 is __, of 4 is __, of 5 is __, of 6 is __, of 7 is __, of 8 is __, and of 9 is __.

Each blank is to be filled with a numeral in decimal notation. There are exactly two solutions.

In his chapter 16, he revisits such self-reference by showing how to solve Robinson’s puzzle and generate Sallows-like sentences by using convergence to fixed points!!"
http://topologicalmusings.wordpress.com/2010/07/11/self-referential-paradoxes-incompleteness-and-fixed-points/
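
Here is a sketch of that fixed-point approach for Robinson's puzzle (my own code; it counts digit characters, and the "+1" reflects the single occurrence of each digit in the sentence template itself): recompute the counts a guess implies and iterate. A crude seed of all ones happens to converge within a few steps; other seeds can fall into short cycles, so the seed matters.

    def recount(counts):
        blanks = "".join(str(c) for c in counts)              # digits contributed by the blanks
        return [1 + blanks.count(str(d)) for d in range(10)]  # +1 for the digit in the template

    def is_solution(counts):
        return counts == recount(counts)                      # a true self-description is a fixed point

    counts = [1] * 10                                         # crude initial guess
    for _ in range(10):
        counts = recount(counts)
    print(counts, is_solution(counts))                        # settles on one of the two solutions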

From Collective Intentionality to Intentional Collectives: an Ontological Perspective
http://www.loa-cnr.it/Papers/IntentionalCollectives_APAversionREV3.pdf

Intentionality (John R. Searle):
http://books.google.com/books?id=nAYGcftgT20C&dq=intentional+self-reference&source=gbs_navlinks_s

Husserlian Meditations: How Words Present Things
http://tinyurl.com/6y3zq4a

"Salva veritate is the logical condition in virtue of which interchanging two expressions may be done without changing the truth-value of statements in which the expressions occur. The phrase occurs in two fragments from Gottfried Leibniz's General Science. Characteristics:

In Chapter 19, Definition 1, Leibniz writes: "Two terms are the same (eadem) if one can be substituted for the other without altering the truth of any statement (salva veritate)."

In Chapter 20, Definition 1, Leibniz writes: "Terms which can be substituted for one another wherever we please without altering the truth of any statement (salva veritate), are the same (eadem) or coincident (coincidentia). For example, 'triangle' and 'trilateral', for in every proposition demonstrated by Euclid concerning 'triangle', 'trilateral' can be substituted without loss of truth (salva veritate)."

The literal translation of the Latin "salva veritate" is "with unharmed truth", using ablative of manner."
http://en.wikipedia.org/wiki/Salva_veritate

"A significant number of physicalist philosophers subscribe to the task of reconciling the existence of intentionality with a physicalist ontology (for a forceful exposition see Field 1978, 78-79). On the assumption that intentionality is central to the mental, the task is to show, in Dennett's (1969, 21) terms, that there is not an “unbridgeable gulf between the mental and the physical” or that one can subscribe to both physicalism and intentional realism. Because intentional states are of or about things other than themselves, for a state to have intentionality is for it to have semantic properties. As Jerry Fodor (1984) put it, the naturalistic worry of intentional realists who are physicalists is that “the semantic proves permanently recalcitrant to integration to the natural order”. Given that on a physicalist ontology, intentionality or semantic properties cannot be “fundamental features of the world,” the task is to show “how an entirely physical system could nevertheless exhibit intentional states” (Fodor, 1987). As Dretske (1981) puts it, the task is to show how to “bake a mental cake using only physical yeast and flour”. Notice that Brentano's own view that ‘no physical phenomenon manifests’ intentionality is simply unacceptable to a physicalist. If physicalism is true, then some physical things are also mental things. The question for a physicalist is: does any non-mental thing manifest intentionality?"
http://plato.stanford.edu/entries/intentionality/

Quantum curiosities of psychophysics

"I survey some of the connections between the metaphysics of the relation between mind and matter, and quantum theory’s measurement problem. After discussing the metaphysics, especially the correct formulation of physicalism, I argue that two state-reduction approaches to quantum theory’s measurement problem hold some surprises for philosophers’ discussions of physicalism. Though both approaches are compatible with physicalism, they involve a very different conception of the physical, and of how the physical underpins the mental, from what most philosophers expect. And one approach exemplifies a a problem in the definition of physicalism which the metaphysical literature has discussed only in the abstract."
http://consc.net/mindpapers/8.3a

"Re: subjective and objective

"This statement is false."

resolves in hyperset theory as

A = {not A}

The above is a hyper-function. "not A" is not the set A itself (by definition!) but it is a *function* of A. That is, it depends on the truth value of A.

The contention that hyperset theory does not solve real paradoxes is false. Hyperset theory was constructed _precisely_ to solve paradoxes. The motivation was to resolve Russell's famous paradox: "Let X be the set of all sets which do not contain themselves." Obviously X is itself a set which does not contain itself, which in turn means that it contains itself. This is easily resolved in hyperset theory.

>The Moebius strip metaphor (used as above) suggests that if you try to divide
>the world up into subjective and objective parts, everything will go to one
>side...that the objectivity property does not really globally discriminate.
>This suggests that the subjective world doesn't really exist as an independent
>entity...
...
If you take the time to read my "Illusory Illusions" I describe the structure of
social consciousness and show that it emerges from the ability to see second
order illusions. In hyperset theory a hologram is defined as follows:

Let M be a hyper-set such that M = {C1, C2, C3, ..., Cn} where all components themselves are hyper-sets such that Cr = {M} for r = 1, 2, 3, ..., n. M is then said to be holographic.

By lucky chance this is precisely the structure of a mirrorhouse. Let M be the
mirrorhouse and each component, C, be a mirror in the mirrorhouse. Note that the
set is finite, but it creates an illusory infinite regress.

Onar."
http://cfpm.org/~bruce/prncyb-l/0306.html
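
The mirrorhouse structure above can be drawn as a finite pointed graph, which is the standard way non-well-founded sets are pictured in Aczel-style hyperset theory; a small Python sketch (my own encoding, with an arbitrary choice of three mirrors) shows how a finite graph nevertheless unfolds without ever bottoming out.

    N = 3                                                    # number of mirrors, chosen arbitrarily
    graph = {"M": [f"C{r}" for r in range(1, N + 1)]}        # M = {C1, ..., Cn}
    graph.update({f"C{r}": ["M"] for r in range(1, N + 1)})  # each Cr = {M}

    def unfold(node, depth):
        """Write out the membership structure below `node`, cut off at `depth`."""
        if depth == 0:
            return "..."
        return "{" + ", ".join(unfold(child, depth - 1) for child in graph[node]) + "}"

    print(unfold("M", 1))     # {..., ..., ...}
    print(unfold("M", 3))     # deeper unfoldings keep revealing copies of M: the illusory infinite regress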

Octonion Mirrorhouse:

Onar Aam noticed that:

2 facing mirrors can represent the complex numbers, with 1 imaginary i which has 1 algebraic generator i and 1 dimension (the 2 mirrors (in cross-section) are parallel and make a 1-dim line).

3 facing mirrors can represent the quaternions, with 3 imaginaries i,j,k which have 2 algebraic generators i,j (since k = ij) and 2 dimensions (the 3 mirrors (in cross-section) form a 2-dim triangle).

What about OCTONIONS, with their 480 different multiplications?
http://www.valdostamuseum.org/hamsmith/miroct.html
http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf

Consciousness: network-dynamical, informational and phenomenal aspects

"After an introduction to the multidisciplinary-posed consciousness problem, information dynamics in biophysical networks (neural, quantum, sub-cellular) underlying consciousness will be described in the first part. In the second part representational and intentional characteristics of informational, reflective and phenomenal aspects of consciousness and self-consciousness will be discussed. At the end, recent debate of qualia will be presented and commented. Thus, the aim of the article is to present a broad overview of multi-level aspects of consciousness."
http://www.goertzel.org/dynapsyc/1997/Perus.html

Toward an Understanding of the Preservation of Goals in Self-Modifying Cognitive Systems

"A new approach to thinking about the problem of “preservation of AI goal systems under repeated self-modification” (or, more compactly, “goal drift”) is presented, based on representing self-referential goals using hypersets and multi-objective optimization, and understanding self-modification of goals in terms of repeated iteration of mappings. The potential applicability of results from the theory of iterated random functions is discussed. Some heuristic conclusions are proposed regarding what kinds of concrete real-world objectives may best lend themselves to preservation under repeated self-modification. While the analysis presented is semi-rigorous at best, and highly preliminary, it does intuitively suggest that important humanly-desirable AI goals might plausibly be preserved under repeated self-modification. The practical severity of the problem of goal drift remains unresolved, but a set of conceptual and mathematical tools are proposed which may be
useful for more thoroughly addressing the problem."
http://www.goertzel.org/papers/PreservationOfGoals.pdf
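
A numerical toy (entirely mine, not from the paper) of the iterated-maps framing used above: treat each self-modification as a random map applied to a goal value; when the maps contract toward the original goal on average, the drift stays bounded, and when they expand, the goal wanders off.

    import random

    def drift(strength_range, steps=1000, goal=1.0):
        g = goal
        for _ in range(steps):
            a = random.uniform(*strength_range)                  # random "self-modification" map
            g = a * g + (1 - a) * goal + random.gauss(0, 0.01)   # pulled toward the goal, plus noise
        return abs(g - goal)                                     # how far the goal has drifted

    random.seed(0)
    print(drift((0.5, 0.9)))   # contraction on average: stays close to the original goal
    print(drift((1.0, 1.1)))   # expansion on average: the goal drifts far away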

"Polytely (from Greek roots meaning 'many goals') can be described as frequently, complex problem-solving situations characterized by the presence of not one, but several goals, endings.

Modern societies face an increasing incidence of various complex problems. In other words, the defining characteristics of our complex problems are a large number of variables (complexity) that interact in a nonlinear fashion (connectivity) and change over time (dynamics and time dependence), together with the need to achieve multiple goals (polytely)."
http://en.wikipedia.org/wiki/Polytely

"What is a complex problem?

In current theories of problem solving, a problem is conceptualized as composed of a given state, a desired goal state, and obstacles between given and goal state (Mayer 1992, p. 5). According to Anderson (2005), "any goal directed sequence of cognitive operations" is classified as an instance of problem solving. Other authors supplement this view by emphasizing that overcoming barriers toward a desired state may involve both cognitive and behavioral means, and that these means should imply novelty and originality. Operations triggered by a problem go beyond routine action and thinking (Frensch and Funke 1995a, b). A complex problem is said to occur when finding the solution demands a series of operations which can be characterized as follows (Dörner et al. 1983): the elements relevant to the solution process are large in number (complexity), highly interconnected (connectivity), and dynamically changing over time (dynamics). Neither structure nor dynamics are disclosed (intransparency). Finally, the goal structure is not as straightforward as suggested above: in dealing with a complex problem, a person is confronted with a number of different goal facets to be weighted and coordinated—a polytelic situation.

One approach to complex situations falls under the label of "Naturalistic Decision-Making" (Klein 2008; Klein et al. 1993; Lipshitz et al. 2001; Zsambok and Klein 1997). According to Zsambok (1997, p. 5), "the study of NDM asks how experienced people working as individuals or groups in dynamic, uncertain, and often fast-paced environments, identify and assess their situation, make decisions, and take actions whose consequences are meaningful to them and to the larger organization in which they operate." The four markers of this research approach—(a) complex task in real-life setting, (b) experienced decision-makers, (c) actual decision-making, and (d) situation awareness within a decision episode—are quite different to the approach frequently chosen in psychology using artificial decision situations with a focus on option selection, student subjects, and a rational standard (e.g., Kahneman et al. 1982).

Another approach falls under the label of "Complex Problem Solving" (Dörner 1980, 1996; Frensch and Funke 1995a, b; Funke 1991; Funke and Frensch 2007). According to Buchner (1995, p. 28), laboratory studies on problem solving such as chess or the famous Tower of Hanoi "… were criticized for being too simple, fully transparent, and static, whereas real-world economical, political, and technological problem situations were said to be complex, intransparent, and dynamic."
http://www.psychologie.uni-heidelberg.de/ae/allg/mitarb/jf/Funke_2010_CognProc.pdf
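
One way to make the "polytelic" point concrete (a small sketch of my own, with invented goals and scores): when candidate actions are scored on several goal facets at once, there is in general no single best option, only a Pareto set of options none of which is better on every facet.

    candidates = {                          # hypothetical scores on (speed, safety, cost savings)
        "A": (0.9, 0.2, 0.5),
        "B": (0.6, 0.8, 0.4),
        "C": (0.5, 0.7, 0.9),
        "D": (0.4, 0.3, 0.3),
    }

    def dominates(x, y):                    # x at least as good on every facet, strictly better on one
        return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

    pareto = [name for name, score in candidates.items()
              if not any(dominates(other, score) for other in candidates.values())]
    print(pareto)                           # ['A', 'B', 'C']: D is dominated, the rest trade off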

"Of course, one cannot reasonably define mind in terms of intelligence unless one has a definition of intelligence at hand. So, let us say that intelligence is the ability to optimize complex functions of complex environments. By a "complex environment," I mean an environment which is unpredictable on the level of details, but somewhat structurally predictable. And by a "complex function," I mean a function whose graph is unpredictable on the level of details, but somewhat structurally predictable.

The "complex function" involved in the definition of intelligence may be anything from finding a mate to getting something to eat to building a transistor or browsing through a library. When executing any of these tasks, a person has a certain goal, and wants to know what set of actions to take in order to achieve it. There are many different possible sets of actions -- each one, call it X, has a certain effectiveness at achieving the goal.

This effectiveness depends on the environment E, thus yielding an "effectiveness function" f(X,E). Given an environment E, the person wants to find X that maximizes f -- that is maximally effective at achieving the goal. But in reality, one is never given complete information about the environment E, either at present or in the future (or in the past, for that matter). So there are two interrelated problems: one must estimate E, and then find the optimal X based on this estimate.

If you have to optimize a function that depends on a changing environment, you'd better be able to predict at least roughly what that environment is going to do in the future. But on the other hand, if the environment is too predictable, it doesn't take much to optimize functions that depend on it. The interesting kind of environment is the kind that couples unpredictability on the level of state with rough predictability on the level of structure. That is: one cannot predict the future state well even from a good approximation to the present and recent past states, but one can predict the future structure well from a good approximation to the present and recent past structure.

This is the type of partial unpredictability meant in the formulation "Intelligence is the ability to optimize complex functions of partially unpredictable environments." In environments displaying this kind of unpredictability, prediction must proceed according to pattern recognition. An intelligent system must recognize patterns in the past, store them in memory, and construct a model of the future based on the assumption that some of these patterns will approximately continue into the future."
http://www.goertzel.org/books/logic/chapter_three.html
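
Read as pseudocode, the passage says: estimate the environment from its structural regularities, then optimize the effectiveness function against the estimate. A minimal Python sketch of my own, with an invented quadratic effectiveness function and a noisy linear trend standing in for "structure":

    import random

    random.seed(1)
    f = lambda X, E: -(X - E) ** 2                       # effectiveness of action X in environment E

    history = [0.5 * t + random.gauss(0, 1.0) for t in range(20)]   # past states: trend plus noise

    # Estimate the structural trend from the history (ordinary least-squares slope).
    n = len(history)
    t_bar = (n - 1) / 2
    e_bar = sum(history) / n
    slope = (sum((t - t_bar) * (e - e_bar) for t, e in enumerate(history))
             / sum((t - t_bar) ** 2 for t in range(n)))
    E_hat = e_bar + slope * (n - t_bar)                  # predicted next state from structure alone

    X = E_hat                                            # for this f, the best action tracks the estimate
    E_true = 0.5 * n + random.gauss(0, 1.0)              # actual next state, partly unpredictable
    print(round(f(X, E_true), 2), round(f(0.0, E_true), 2))   # using the estimate beats ignoring it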

"Successful, ergonomic interventions in the area of cognitive tasks require a thorough understanding, not only of the demands of the work situation, but also of user strategies in performing cognitive tasks and of limitations in human cognition. In some cases, the artifacts or tools used to carry out a task may impose their own constraints and limitations (e.g., navigating through a large number of GUI screens); in fact tools co-determine the very nature of the task. In this sense, the analysis of cognitive tasks should examine both the interaction of users with their work setting and the user interaction with artifacts or tools; the latter is very important as modern artifacts (e.g., control panels, software, expert systems) become increasingly sophisticated. Emphasis lies on how to design human-machine interfaces and cognitive artifacts so that human performance is sustained in work environments where information may be unreliable, events may be difficult to predict, multiple simultaneous goals may be in conflict, and performance may be time constrained. Typical domains of application include process control rooms (chemical plants, air traffic), command and control centers, operating theaters and other supervisory control systems. Other types of situations, familiar but variable, are also considered as they entail performance of cognitive tasks: e.g. video recording, using an in-car GPS navigation system, etc."
http://en.wikipedia.org/wiki/Cognitive_ergonomics

Complex Systems, Anticipation, and Collaborative Planning:
http://urban.csuohio.edu/~sanda/papers/resilience.pdf

The Logic of Failure: Implications for Record-Keeping by Organizations
http://ambur.net/failure.pdf

The logic of failure: recognizing and avoiding error in complex situations

"Why do we make mistakes? Are there certain errors common to failure, whether in a complex enterprise or daily life? In this truly indispensable book, Dietrich Dörner identifies what he calls the “logic of failure”—certain tendencies in our patterns of thought that, while appropriate to an older, simpler world, prove disastrous for the complex world we live in now. Working with imaginative and often hilarious computer simulations, he analyzes the roots of catastrophe, showing city planners in the very act of creating gridlock and disaster, or public health authorities setting the scene for starvation. The Logic of Failure is a compass for intelligent planning and decision-making that can sharpen the skills of managers, policymakers and everyone involved in the daily challenge of getting from point A to point B."
http://tinyurl.com/4amjqc7

"An individual’s reality model can be right or wrong, complete or incomplete. As a rule it will be both incomplete and wrong, and one would do well to keep that probability in mind. But this is easier said than done. People are most inclined to insist that they are right when they are wrong and when they are beset by uncertainty. (It even happens that people prefer their incorrect hypotheses to correct ones and will fight tooth and nail rather than abandon an idea that is demonstrably false.) The ability to admit ignorance or mistaken assumptions is indeed a sign of wisdom, and most individuals in the thick of complex situations are not, or not yet, wise. (p. 42)"
http://testingjeff.wordpress.com/2007/04/05/the-logic-of-failure-part-1/

The Unanticipated Consequences of Technology:
http://www.scu.edu/ethics/publications/submitted/healy/consequences.html

"The PSI theory is a theory of human action regulation by psychologist Dietrich Doerner [1, 2]. It is an attempt to create a body-mind link for virtual agents. It aims at the integration of cognitive processes, emotions and motivation. This theory is unusual in cognitive psychology in that emotions are not explicitly defined but emerge from modulation of perception, action-selection, planning and memory access. Emotions become apparent when the agents interact with the environment and display expressive behavior, resulting in a configuration that resembles emotional episodes in biological agents. The agents react to the environment by forming memories, expectations and immediate evaluations. This short presentation is a good overview.

PSI agents possess a number of modulators and built-in motivators that lie within a range of intensities. These parameters combine to produce complex behavior that can be interpreted as being emotional. Additionally, by regulating these parameters, the agent is able to adapt itself to the different circumstances in the environment. This theory has been applied to different virtual agent simulations in different types of environments [3, 4, 5, 6, 7, 8] and has proven to be a promising theory for the creation of biologically plausible agents."
http://www.macs.hw.ac.uk/~ruth/psi-refs.html

"The MicroPsi agent architecture describes the interaction of emotion, motivation and cognition of situated agents, mainly based on the Psi theory of Dietrich Dörner.
The Psi theory addresses emotion, perception, representation and bounded rationality, but being formulated within psychology, has had relatively little impact on the discussion of agents within computer science.

MicroPsi is a formulation of the original theory in a more abstract and formal way, at the same time enhancing it with additional concepts for memory, building of ontological categories and attention.

MicroPsi is a small step towards understanding how the mind works. The agents are virtual creatures that act according to motives that stem from physiological and cognitive urges. They build representations of their environment based on interaction. Their cognition is modulated according to perceived situations and performance, and thus they undergo emotional states.

The agent framework uses semantic networks, called node nets, that are a unified representation for control structures, plans, sensoric and active schemas, Bayesian networks and neural nets. Thus it is possible to set up different kinds of agents on the same framework."
http://www.micropsi.org/
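
At the risk of oversimplifying, the general pattern both passages describe can be caricatured in a few lines (a toy of my own, not the actual MicroPsi code or its node nets): built-in urges grow over time, the most pressing urge selects the action, and a crude arousal modulator tracks overall urgency, so emotion-like dynamics emerge from modulation rather than being scripted.

    URGE_GROWTH = {"energy": 0.05, "integrity": 0.02, "curiosity": 0.03}   # invented urges
    ACTIONS = {"energy": "seek food", "integrity": "avoid danger", "curiosity": "explore"}

    urges = {name: 0.0 for name in URGE_GROWTH}
    for step in range(1, 31):
        for name, growth in URGE_GROWTH.items():
            urges[name] = min(1.0, urges[name] + growth)       # urges build up over time
        pressing = max(urges, key=urges.get)                   # motive selection
        arousal = sum(urges.values()) / len(urges)             # modulator: overall urgency
        if step % 10 == 0:
            print(step, ACTIONS[pressing], round(arousal, 2))
        urges[pressing] = max(0.0, urges[pressing] - 0.5)      # acting reduces the selected urge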

Modeling Uncertain Self-Referential Semantics with Infinite-Order Probabilities
http://www.goertzel.org/papers/InfiniteOrderProbabilities.pdf

Achieving Advanced Machine Consciousness via Artificial General Intelligence in Virtual Worlds
http://goertzel.org/NokiaMachineConsciousness.ppt

Nokia Workshop on Machine Consciousness 2008 Invited Speakers:
http://www.conscious-robots.com/images/stories/pdf/Nokia_Machine_Consciousness_InvitedSpeakers.pdf

Solvable set/hyperset contexts: II. A goal-driven unification algorithm for the blended case
http://wotan.liu.edu/docis/lib/tian/rclis/dbl/aaencc/(1999)9%253A4%253C293%253ASSCIAG%253E/www.dimi.uniud.it%252F~policrit%252FPapers%252Faaecc.pdf

Economic Goals, Economic Criteria and Economic Methods
http://notendur.hi.is/joner/eaps/ww_EC_Goals.htm

"Western-liberal discourses of power and the social practices associated with them are proving inadequate to the task of creating a peaceful, just, and sustainable social order. Having recognized this, progressive scholars and social reformers have begun articulating alternative discourses of power, along with alternative models of social practice. Together, these efforts can be interpreted as a project of discourse intervention – an effort to change our social
reality by altering the discourses that help constitute it. In order to advance this project, this paper deconstructs the dominant Western-liberal discourse of power, clarifies elements of an alternative discourse of power, and presents a case study of an alternative discourse community and the alternative models of social practice that it is constructing.
...
The predominant model of power in Western social theory – what I call the power as domination model – derives from the latter of these expressions. Although “power to” is the basis of models in the physical and natural sciences, “power over” highlights issues of social conflict, control, and coercion, which have been the primary focus of Western social and political scientists. This power as domination paradigm traces back, either implicitly or explicitly, through the writings of diverse social and political theorists, from Machiavelli (1961) to Weber (1986) to Bourdieu (1994). It informed Hobbes’ (1968) notion of a “war of all against all” as well as Marx and Engels’ (1967) theory of historical materialism. Indeed, as Giddens (1984, pp. 256-7) points out, this conflictual model of power underlies virtually all major traditions of Western social and political theory, from the left and the right.
...
Though the power as domination model has prevailed within Western social and political theory, alternative traditions do exist. Giddens (1984, pp. 15, 257), for example, defines power as “transformative capacity” or “the capacity to achieve outcomes” – a definition which is consistent with the “power to” locution introduced above. Though Giddens frequently associates power with domination in his writings, he (Giddens, 1984, p. 257) recognizes that “power is not necessarily linked with conflict... and power is not inherently oppressive”. Indeed, there is power in cooperation among equals, and even when power is unequally distributed it can still be expressed in forms that are not oppressive – as in the empowering relationship that can exist between a nurturing parent and child. Efforts to reconceptualize power along these lines have been most fully developed among feminist theorists, as well as some peace researchers and systems theorists.
...
Many systems theorists have articulated a theory of power that is remarkably similar to the feminist theory outlined above, yet derived from the relational complexities that characterize the study of dynamic systems. The fundamental premise of systems theory is that different types of complex systems – physical, biological, ecological, social, and so forth – exhibit many structural and functional similarities. In systems terminology, complex systems are characterized by emergent properties that do not characterize any of their component parts in isolation. These emergent properties are made possible by the internal interdependence of a system’s parts or subsystems, which exist within complex networks of relationships with one another, characterized by mutual influence and interchange (for overviews of systems theory, refer to Bertalanffy, 1998; Englehart, 1995; Skyttner, 1996).

Complex dynamic systems can therefore be understood as functional unities. They perform various functions that their component parts or subsystems could not perform alone. For instance, a cell can metabolize energy while its component elements, in isolation, cannot. An organ can perform specialized physiological functions that its component cells, in isolation, cannot. A living organism can reproduce itself while its component organs, in isolation, cannot. And a species can evolve while individual organisms, in isolation, cannot. Each of these functions is made possible through increasing levels of system complexity and integration.
...
The first step in formulating this schema is to recognize that the categories “power to” versus “power over” are, in fact, neither parallel nor mutually exclusive categories. “Power to”, in the broadest sense, denotes power as capacity. This is an overarching definition of the term power. “Power over” on the other hand, is a special case of this overarching concept. If we say that we have “power over” someone, this is simply another way of saying that we have the “power to exercise control over” that person. All possible expressions of “power over” can be understood in this way, as the power to exert control over others. For the purposes of delineating a more comprehensive schema of power relations, the first step is to recognize that “power over” is more accurately viewed as a sub-category of the more general power as capacity concept.

“Power over”, however, is not the only sub-category of power as capacity. Feminist and systems models point to other relations of power that do not entail exercising “power over” others. In keeping with these models, one could say that people who are acting in a cooperative or mutualistic manner in the pursuit of a common goal are exercising “power with” one another. For definitional purposes, this “power with” category will be referred to as mutualistic power relations – which constitutes another subcategory of the power as capacity concept.

The two categories identified above, associated with the phrases “power over” and “power with”, are still not parallel or mutually exclusive categories. To demonstrate this, consider the example of two equal adversaries that are exercising “power against” one another in a manner that results in mutual frustration, or a stalemate. Neither of these adversaries is exercising “power over” the other. Yet they are clearly not exercising their powers in a cooperative or mutualistic manner either. In this context, one could say that people either exercise “power with” one another in a mutualistic manner, or they exercise “power against” one another in an adversarial manner. For definitional purposes, this latter category will be referred to as adversarial power relations. Together, mutualistic power relations and adversarial power relations constitute two parallel and mutually exclusive relational categories of the more general concept power as capacity.
...
Likewise, power equality and power inequality have mirror counterpoints within the category of mutualistic power relations. In other words, two or more agents acting cooperatively can also be characterized by equal or unequal distributions of power. The consequences, however, are quite different when the relationships are mutualistic. Power equality within a mutualistic relationship results in the “mutual empowerment” of all cooperating agents. An example would be a buying or marketing cooperative created by a group of people with similar economic resources. On the other hand, power inequality within a mutualistic relationship results in the “assisted empowerment” of the less powerful agent(s) by the more powerful agent(s). An example would be the nurturing relationship between a parent and child, or the mentoring relationship between a teacher and student.
...
Hierarchy, as an organizational principle, can also be seen as a desirable form of inequality under some circumstances. In a social or organizational context, hierarchy refers to unequally structured power relations. Not surprisingly, many people equate hierarchy with oppression. But this equation again conflates power inequality with adversarial power relations. In the context of mutualistic power relations, hierarchy can be a valuable organizing principle. When any group of equal people is too large to effectively engage every member in every decision-making process, the group may benefit from delegating certain decision-making powers to smaller sub-groups. This consensually agreed upon inequality – or hierarchy – can empower a group to accomplish things it could otherwise not accomplish. In the process, it can also relieve the burden of ongoing decision-making responsibilities from large numbers of people who are thereby freed to devote their time and energy to other productive pursuits that can benefit the entire group.

Though this schema illustrates that hierarchy cannot automatically be equated with oppression, it also cautions that hierarchy cannot automatically be equated with empowerment, as some conventional functionalist theorists conversely assume (e.g., Parsons, 1986). Within competitive or adversarial power relationships, which are common in contemporary societies, hierarchy does lead to oppression, exploitation, and other undesirable outcomes.

Even as this schema reveals the positive and negative dimensions of power inequality, it also reveals the positive and negative dimensions of power equality. While power equality is clearly a desirable condition in many mutualistic power relations, where it leads to mutual empowerment, it can be highly dysfunctional in many adversarial power relations, where it leads to mutual frustration. Consider, for instance, the partisan gridlock that characterizes so much contemporary political decision making in Western-liberal democracies. Such gridlock not only disempowers equally powerful political parties, it disempowers the entire public by rendering its only means of collective decision largely dysfunctional.
...
Yet, a skeptic might ask, if contest models of social organization are not inevitable expressions of human nature, what are the alternatives? How can a complex modern society construct governing institutions based on less competitive assumptions about human nature? In a previous iteration of this article, the response of one anonymous reviewer was particularly illuminating in this regard. The reviewer wrote that this article made

"some wild assertions about how Western-liberal societies have structured their
political systems in terms of contestation. What is the contrast, modern nonliberal
societies? Are these run in terms of “power with”? But of course all
societies require cooperation. A political party is a coalition of interests.
Parliaments, courts, driving down the road, all require cooperation even if they
also involve conflicts. There have been many societies in the past with few
conflicts, but generally speaking these societies were hunter-gatherer societies
with low populations, and since there was no accumulation (hunter-gatherer’s
only pick enough food for a few days at a time) there was no room for nonproducers.
Only with agriculture can we develop elites, bureaucracy and other
non-producers of the stuff of life who can come to dominate others. Do the
authors want us to return to hunter-gatherer lifestyles?"

This line of skepticism has become quite familiar to most scholars who question the prevailing assumptions that underlie the Western-liberal social order. To this line of skepticism I offer a three-part response: (1) I agree that all societies are clearly characterized by both competition and cooperation. In fact, the purpose of the schema developed in this article is to keep the latter of these characteristics from becoming obscured within our discourses of power – which is the perennial tendency within Western-liberal societies, especially among social theorists. (2) I am not advocating a return to hunter gatherer lifestyles. Not only do I believe this would be impossible, I do not believe it would be desirable. (3) I am not suggesting that conflict can or will ever disappear in complex modern societies (nor do I believe, for that matter, that it was absent in hunter-gatherer societies). What I am suggesting is that conflict should not continue to serve indefinitely as the normative principle upon which we construct our governing institutions and conduct our affairs, as it currently does in Western-liberal societies, where democracy is confused with partisanship, where justice tends to be confused with legal contestation, and where economy is confused with competitive material acquisition. There is a significant difference between recognizing the occurrence of conflict in human affairs, on one hand, and prescribing conflict as the organizing principle for our most important social institutions, on the other. The latter, I assert, cultivates unnecessary levels of conflict.
...
According to Bahá'ís, contemporary world conditions are pressing humanity toward an age of global integration that will require new models of social organization and new levels of maturity in human interactions (Bahá'í World Centre, 2001). Among other things, Bahá'ís believe this will require a rethinking of contemporary attitudes toward power."
http://www.gmu.edu/programs/icar/ijps/vol10_1/Karlberg_101IJPS.pdf

Fractal Prophets:
http://www.winkiepratney.com/_files/pdf/tracts/FractalProphets.pdf

Why Bayesian Rationality Is Empty, Perfect Rationality Doesn’t Exist, Ecological Rationality Is Too Simple, and Critical Rationality Does the Job

"Economists claim that principles of rationality are normative principles. Nevertheless, they go on to explain why it is in a person’s own interest to be rational. If this were true, being rational itself would be a means to an end, and rationality could be interpreted in a non-normative or naturalistic way. The alternative is not attractive: if the only argument in favor of principles of rationality were their intrinsic appeal, a commitment to rationality would be irrational, making the notion of rationality self-defeating. A comprehensive conception of rationality should recommend itself: it should be rational to be rational. Moreover, since rational action requires rational beliefs concerning means-ends relations, a naturalistic conception of rationality has to cover rational belief formation including the belief that it is rational to be rational. The paper considers four conceptions of rationality and asks whether they can deliver the goods: Bayesianism, perfect rationality (just in case that it differs from Bayesianism), ecological rationality (as a version of bounded rationality), and critical rationality, the conception of rationality characterizing critical rationalism. The answer is summarized in the paper’s title.
...
Classical inductivism, the philosophical precursor of Bayesianism, assumes that it is rational to believe in non-deductive, or inductive, consequences of rational (specifically, observational) beliefs. However, non-deductive arguments are equivalent to deductive arguments with additional premises. In the context of inductivism, these additional premises are called inductive principles. Given that inductive principles are not themselves inductive or deductive consequences of rational beliefs, classical inductivism is necessarily incomplete: it contains no rule that allows for rational belief in inductive principles. Consequently, it adds nothing to the first two rules.

The first two rules, however, are not sufficient for solving the problem of induction. According to them, it is never rational to believe any proposition about future events or, more generally, events that have not yet been observed. Hence, if one restricts rational belief formation to the application of these two rules, irrationalism cannot be avoided. A different conclusion requires adding at least one further rule. This rule must be useful and it must recommend itself: it should be rational to be rational.
...
In my view, critical rationality can also be viewed as defining a process for reaching, by criticism, a reflective equilibrium where all beliefs are rational. There are several other versions of the idea of a reflective equilibrium: Rawls’ and Goodman’s version are discussed by Hahn 2004. I would add the Savage-Binmore variant of Bayesianism where one should massage the prior until one can accept its hypothetical consequences (Binmore 2009). All these other ideas of reflective equilibrium seem to lack a good description of the dynamics leading to equilibrium. See, in contrast, Kliemt 2009, 37–41; Kliemt uses theory absorption as a critical argument in order to derive the Nash solution as a reflective equilibrium in game theory.
...
The extension of critical rationality from belief formation to choice of actions is simple in principle, although the details are not easily worked out. The general idea is that a proposal for the solution of a decision problem is an implicit hypothesis, namely, that implementing the proposal achieves the decision maker’s goal. It is rational, then, to implement a proposal if and only if it is rational to believe in the corresponding hypothesis.

In practical decision-making, different solution proposals to the same problem may be rationally implemented since there may be different courses of action achieving the given goals.
...
Critical rationalism shows more explicit continuity with evolutionary theory than ecological rationality. It is firmly embedded in evolutionary epistemology, which emphasizes the continuity between all kinds of cognitive processes including creativity in science and elsewhere (see Bradie and Harms 2008 for a survey). This leads to a slightly different emphasis in decision theory. For instance, the choice between decision heuristics can be viewed as a process where several proposals are generated blindly and then criticized until some proposal survives criticism or time runs out. This is a continuation of the evolutionary process, where organisms (or genes) can be viewed as embodying hypotheses or decision rules that are ‘criticized’ by nature whenever they fail to reproduce (or to reproduce at a higher rate than their competitors).

Critical rationality, then, is not only relevant in the philosophy of science. It should also be viewed as the solution to the problem of rational decision-making and replace Bayesianism, which is not satisfactory in either field. Critical rationality shares the emphasis on evolutionary aspects with ecological rationality, which has been proposed as an alternative to Bayesianism, or perfect rationality, in economics. In contrast to critical rationality, however, ecological rationality cannot serve as a comprehensive conception of rationality."
http://www.frankfurt-school-verlag.de/rmm/downloads/004_albert.pdf

"As the power of Bayesian techniques have become more fully realized, the field of artificial intelligence (AI) has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate all of Bayesian net technology and learning Bayesian net technology and apply them both to knowledge engineering."
http://books.google.com/books?id=I5JG767MryAC&dq=reflective+equilibrium+bayesian&source=gbs_navlinks_s
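
The "Bayesian net technology" mentioned in the blurb ultimately reduces to conditional probability tables plus Bayes' rule. As a minimal sketch (illustrative numbers and variable names are my own, not taken from the book), here is exact inference by enumeration on a hypothetical two-node network Rain -> WetGrass:

# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# Query: P(Rain | WetGrass = true), computed by enumeration and normalization.

P_rain = {True: 0.2, False: 0.8}                 # prior P(Rain)
P_wet_given_rain = {True: 0.9, False: 0.1}       # CPT: P(WetGrass = true | Rain)

def posterior_rain_given_wet():
    # unnormalized joint P(Rain = r, WetGrass = true) for each value of Rain
    joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
    z = sum(joint.values())                      # P(WetGrass = true)
    return {r: p / z for r, p in joint.items()}

print(posterior_rain_given_wet())                # {True: ~0.69, False: ~0.31}

Larger networks replace the two tables with one conditional probability table per node and sum over the unobserved variables, but the computation is the same in kind.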

Diagonalization and self-reference

"This book presents a systematic, unified treatment of fixed points as they occur in Godel's incompleteness proofs, recursion theory, combinatory logic, semantics, and metamathematics."
http://www.amazon.com/Diagonalization-Self-Reference-Oxford-Logic-Guides/dp/0198534507

Non-Well-Founded Sets (Aczel, 1988):
http://standish.stanford.edu/pdf/00000056.pdf

Anti-Foundation and Self-Reference

This note argues against Barwise and Etchemendy's claim that their semantics for self-reference requires use of Aczel's anti-foundational set theory, AFA, and that any alternative would involve us in complexities of considerable magnitude, ones irrelevant to the task at hand (The Liar, p. 35). Switching from ZF to AFA neither adds nor precludes any isomorphism types of sets. So it makes no difference to ordinary mathematics. I argue against the authors' claim that a certain kind of naturalness nevertheless makes AFA preferable to ZF for their purposes. I cast their semantics in a natural, isomorphism invariant form with self-reference as a fixed point property for propositional operators. Independent of the particulars of any set theory, this form is somewhat simpler than theirs and easier to adapt to other theories of self-reference.
http://www.cwru.edu/artsci/phil/McAntifoundation.pdf
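
For readers who have not met AFA: the Anti-Foundation Axiom replaces ZF's Foundation axiom with the statement that every graph has a unique decoration, so systems of circular membership equations acquire canonical solutions (standard background, not specific to the note above). For example,

\[
x = \{x\}
\]

has no solution under Foundation but exactly one solution, the Quine atom $\Omega = \{\Omega\}$, under AFA; this is what makes AFA a natural setting for modelling self-referential propositions.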

Issues in Commonsense Set Theory

The success of set theory as a foundation for mathematics inspires its use in artificial intelligence, particularly in commonsense reasoning. In this survey, we briefly review classical set theory from an AI perspective, and then consider alternative set theories. Desirable properties of a possible commonsense set theory are investigated, treating different aspects like cumulative hierarchy, self-reference, cardinality, etc. Assorted examples from the ground-breaking research on the subject are also given.
http://view.samurajdata.se/psview.php?id=20cc391f&page=1&size=full

"Diagram 13: The above diagram illustrates the relationship of primary and secondary telic recursion, with the latter “embedded in” or expressed in terms of the former. The large circles and arrows represent universal laws (distributed syntax) engaged in telic feedback with the initial state of spacetime (initial mass-energy distribution), while the small circles and arrows represent telic feedback between localized contingent aspects of syntax and state via conspansion. The primary stage maximizes global generalized utility on an ad hoc basis as local telors freely and independently maximize their local utility functions. The primary-stage counterparts of inner expansion and requantization are called coinversion and incoversion. It is by virtue of telic recursion that the SCSPL universe can be described as its own self-simulative, self-actualizative “quantum protocomputer”."
...
The general nature of this model can be glimpsed merely by considering the tautological reflexivity of the term “self-evident”. Anything that is self evident proves (or evidences) itself, and any construct that is implicated in its own proof is tautological. Indeed, insofar as observers are real, perception amounts to reality tautologically perceiving itself. The logical ramifications of this statement are developed in the supertautological CTMU, according to which the model in question coincides logically and geometrically, syntactically and informationally, with the process of generating the model, i.e. with generalized cognition and perception. Information thus coincides with information transduction, and reality is a tautological self-interpretative process evolving through SCSPL grammar.

The CTMU has a meta-Darwinian message: the universe evolves by hological self-replication and self-selection. Furthermore, because the universe is natural, its self-selection amounts to a cosmic form of natural selection. But by the nature of this selection process, it also bears description as intelligent self-design (the universe is “intelligent” because this is precisely what it must be in order to solve the problem of self-selection, the master-problem in terms of which all lesser problems are necessarily formulated). This is unsurprising, for intelligence itself is a natural phenomenon that could never have emerged in humans and animals were it not already a latent property of the medium of emergence. An object does not displace its medium, but embodies it and thus serves as an expression of its underlying syntactic properties. What is far more surprising, and far more disappointing, is the ideological conflict to which this has led. It seems that one group likes the term “intelligent” but is indifferent or hostile to the term “natural”, while the other likes “natural” but abhors “intelligent”. In some strange way, the whole controversy seems to hinge on terminology.

Of course, it can be credibly argued that the argument actually goes far deeper than semantics… that there are substantive differences between the two positions. For example, some proponents of the radical Darwinian version of natural selection insist on randomness rather than design as an explanation for how new mutations are generated prior to the restrictive action of natural selection itself. But this is untenable, for in any traditional scientific context, “randomness” is synonymous with “indeterminacy” or “acausality”, and when all is said and done, acausality means just what it always has: magic. That is, something which exists without external or intrinsic cause has been selected for and brought into existence by nothing at all of a causal nature, and is thus the sort of something-from-nothing proposition favored, usually through voluntary suspension of disbelief, by frequenters of magic shows.

Inexplicably, some of those taking this position nevertheless accuse of magical thinking anyone proposing to introduce an element of teleological volition to fill the causal gap. Such parties might object that by “randomness”, they mean not acausality but merely causal ignorance. However, if by taking this position they mean to belatedly invoke causality, then they are initiating a causal regress. Such a regress can take one of three forms: it can be infinite and open, it can terminate at a Prime Mover which itself has no causal explanation, or it can form some sort of closed cycle doubling as Prime Mover and that which is moved. But a Prime Mover has seemingly been ruled out by assumption, and an infinite open regress can be ruled out because its lack of a stable recursive syntax would make it impossible to form stable informational boundaries in terms of which to perceive and conceive of reality.

What about the cyclical solution? If one uses laws to explain states, then one is obliged to explain the laws themselves. Standard scientific methodology requires that natural laws be defined on observations of state. If it is then claimed that all states are by definition caused by natural laws, then this constitutes a circularity necessarily devolving to a mutual definition of law and state. If it is then objected that this circularity characterizes only the process of science, but not the objective universe that science studies, and that laws in fact have absolute priority over states, then the laws themselves require an explanation by something other than state. But this would effectively rule out the only remaining alternative, namely the closed-cycle configuration, and we would again arrive at…magic.

It follows that the inherently subjective process of science cannot ultimately be separated from the objective universe; the universe must be self-defining by cross-refinement of syntax and state. This brings us back to the CTMU, which says that the universe and everything in it ultimately evolves by self-multiplexing and self-selection. In the CTMU, design and selection, generative and restrictive sides of the same coin, are dual concepts associated with the alternating stages of conspansion. The self-selection of reality is inextricably coupled to self-design, and it is this two-phase process that results in nature. Biological evolution is simply a reflection of the evolution of reality itself, a process of telic recursion mirroring that of the universe as a whole. Thus, when computations of evolutionary probability are regressively extrapolated to the distributed instant of creation, they inevitably arrive at a logical and therefore meaningful foundation."
Image/Quote: http://www.ctmu.net/

"In keeping with its clear teleological import, the Telic Principle is not without what might be described as theological ramifications. For example, certain properties of the reflexive, self-contained language of reality – that it is syntactically self-distributed, self-reading, and coherently self-configuring and self-processing – respectively correspond to the traditional theological properties omnipresence, omniscience and omnipotence. While the kind of theology that this entails neither requires nor supports the intercession of any “supernatural” being external to the real universe itself, it does support the existence of a supraphysical being (the SCSPL global operator-designer) capable of bringing more to bear on localized physical contexts than meets the casual eye. And because the physical (directly observable) part of reality is logically inadequate to explain its own genesis, maintenance, evolution or consistency, it alone is incapable of properly containing the being in question."[1]

"Question: Or, alternatively, does God instantaneously or non-spatiotemporally— completely, consistently, and comprehensively—reconfigure and reconstitute reality’s info-cognitive objects and relations?

Answer: The GOD, or primary teleological operator, is self-distributed at points of conspansion. This means that SCSPL evolves through its coherent grammatical processors, which are themselves generated in a background-free way by one-to-many endomorphism. The teleo-grammatic functionality of these processors is simply a localized "internal extension" of this one-to-many endomorphism; in short, conspansive spacetime ensures consistency by atemporally embedding the future in the past. Where local structure conspansively mirrors global structure, and global distributed processing "carries" local processing, causal inconsistencies cannot arise; because the telic binding process occurs in a spacetime medium consisting of that which has already been bound, consistency is structurally enforced."

"Question: If God does in fact continuously create reality on a global level such that all prior structure must be relativized and reconfigured, is there any room for free-will?

Answer: Yes, but we need to understand that free will is stratified. As a matter of ontological necessity, God, being ultimately identified with UBT, has "free will" on the teleological level...i.e., has a stratified choice function with many levels of coherence, up to the global level (which can take all lower levels as parameters). Because SCSPL local processing necessarily mirrors global processing - there is no other form which it can take - secondary telors also possess free will. In the CTMU, free will equates to self-determinacy, which characterizes a closed stratified grammar with syntactic and telic-recursive levels; SCSPL telors cumulatively bind the infocognitive potential of the ontic groundstate on these levels as it is progressively "exposed" at the constant distributed rate of conspansion." [2], [3]

"So the CTMU is essentially a theory of the relationship between mind and reality. In explaining this relationship, the CTMU shows that reality possesses a complex property akin to self-awareness. That is, just as the mind is real, reality is in some respects like a mind. But when we attempt to answer the obvious question "whose mind?", the answer turns out to be a mathematical and scientific definition of God. This implies that we all exist in what can be called "the Mind of God", and that our individual minds are parts of God's Mind. They are not as powerful as God's Mind, for they are only parts thereof; yet, they are directly connected to the greatest source of knowledge and power that exists. This connection of our minds to the Mind of God, which is like the connection of parts to a whole, is what we sometimes call the soul or spirit, and it is the most crucial and essential part of being human."
http://ctmucommunity.org/wiki/God


Image: http://lawsofform.wordpress.com/

Remarks on Turing and Spencer-Brown

"Computation is holographic. Information processing is a formal operation made abstract only by a reduction in the number of free variables, a projective recording which analyzes from all angles the entropy or information contained in the space. Thus, basing my results partly on Hooft’s holographic conjecture for physics (regarding the equivalence of string theory and quantum theory,) and by extending Spencer-Brown’s work on algebras of distinction (developed in his Laws of Form,) I will sketch the outlines of a new theory of universal computation, based not on system-cybernetic models but on holographic transformations (encoding and projection, or more precisely, fractal differentiation and homogeneous integration.)

Hooft’s conjecture allows us to extend the Laws of Form with an “interface” model where computation doesn’t require an observer, only the potentiality of being observed. In other words, all we need is the construction of an interface (positive feedback system, i.e., an iterative calculation or mutual holographic projection) in order to process information. Light itself can be thought of as encoding information, and in particular, electromagnetic waves form a necessary part of holographically recorded information. In other words, to operate in a formal system is to derive information only from interfaces, simpler than but in some way equivalent to the “real” objects.

This abstraction is at the heart of Spencer-Brown’s The Laws of Form which describes the fundamental features of any formal system. These formal operations are also representable as holographic operations. Furthermore, since a description of the holographic structure of a process is equivalent to a description of its original form, we ought to be able to understand computation exclusively in terms of holographic operations. We can represent a region of space by a projection onto a holographic surface. The key point is that we lose a dimension, but owing to a fractal mapping, we lose no information. This projective holographic process can continue until we reach a representation (reality?) with no dimensions at all, i.e., pure or manifest information itself (a holomorphic field.) This stepwise or iterative movement towards pure information is holographic in essence.

Claude Shannon has defined information processing as the conversion of latent, implicit information into manifest information; we will add, into its (dimension-zero) holographic representation. Information processing occurs through holographic projection and encoding: any presented multiplicity can be converted into pure information through some n-dimensional holographic “cascade”. An observer distinguishes spaces, an interface encodes these distinctions, thereby extruding the holographic sub-structure of the universe (the “virtual” information processing occurring in distinguished regions of space.) It is in this context that Spencer-Brown provides a unique insight to cybernetics with his analysis of the operation of distinction. In combination with the holographic paradigm for the physical structure of the universe, a correspondingly extended algebra of distinction for the structure of formal systems provides the basis for our claim that information processing can be thought of, at the limit of abstraction, as consisting fundamentally of holographic operations."
http://fractalontology.files.wordpress.com/2007/12/universal-computation-and-the-laws-of-form.pdf
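
The "algebra of distinction" invoked above has an unusually simple arithmetic core. As a sketch (my own illustration, not code from the quoted paper), Spencer-Brown's two initials for the primary arithmetic can be run as string rewrites, writing the mark as a pair of parentheses; every expression built only from marks then reduces to either the mark or the unmarked (empty) state.

# Spencer-Brown's primary arithmetic (Laws of Form), with the mark written "()".
# Initial 1, condensation (the law of calling):   ()() = ()
# Initial 2, cancellation (the law of crossing):  (()) = <nothing>

def reduce_lof(expr: str) -> str:
    """Apply the two initials until the expression stops changing."""
    while True:
        new = expr.replace("()()", "()")   # condensation
        new = new.replace("(())", "")      # cancellation
        if new == expr:
            return expr
        expr = new

print(reduce_lof("()()()"))     # -> "()"   (the marked state)
print(reduce_lof("((())())"))   # -> ""     (the unmarked state)

Because each rewrite shortens the string, the process always terminates, and the two-valued outcome is the arithmetic counterpart of the "distinction" that the essay treats as the primitive act of computation.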

"Non-locality and the non-local Quantum Hologram provide the only testable mechanism discovered to-date which offer a possible solution to the host of enigmatic observations and data associated with consciousness and such consciousness phenomena. Schempp (1992) has successfully validated the concept of recovery and utilization of non-local quantum information in the case of functional Magnetic Resonance Imaging (fMRI) using quantum holography. Marcer (1995) has made compelling arguments that a number of other chemical and electromagnetic processes in common use have a deeper quantum explanation that is not revealed by the classical interpretation of these processes. Hammeroff (1994) and Penrose have presented experimental data on microtubules in the brain supporting quantum processes. The absorption/re-emission phenomena associated with all matter is well recognized. That such re-emissions are sufficiently coherent to be considered a source of information about the object is due to the theoretical and experimental work of Schempp and Marcer, based upon the transactional interpretation of Quantum Mechanics of Cramer (1986); the Berry geometric phase analysis of information (Berry, 1988; Anandan, 1992); and the ability of quantum phase information to be recovered and utilized (Resta, 1997). The mathematical formalism appropriate to these analyses is consistent with standard quantum mechanical formalism and is defined by means of the harmonic analysis on the Heisenberg nilpotent Lie group G, algebra g, and nilmanifold (see Schempp (1986) for a full mathematical treatment).
The information carried by a Quantum Hologram encodes the complete event history of the object with respect to its 3-dimensional environment. It evolves over time to provide an encoded non-local record of the "experience" of the object in the 4-dimensional space/time of the object as to its journey in space/time and the quantum states visited. The question of the brain's ability -- as a massively parallel quantum processor -- to decode this information is addressed by Marcer and Schempp in "Model of the Neuron Working by Quantum Holography" (1997) and "The Brain as a Conscious System" (1998)."
http://www.stealthskater.com/Documents/ISSO_5.pdf

What is Consciousness? An essay on the relativistic quantum holographic model of the brain/mind, working by phase conjugate adaptive resonance
http://tinyurl.com/46hpqy5

Quantum Holography and Neurocomputer Architectures:
...
Snowflake fractals, i.e., self-similar planar von Koch curves admitting locally the symmetry groups [...] are called holographic fractals or holofractals, for short.
http://tinyurl.com/4kw4oww

Analog VLSI Network Models, Cortical Linking Neural Network Models, and Quantum Holographic Neural Technology:
http://tinyurl.com/4pedslr

How To Build Holographic Spacetime

"Is it possible to render holographic spacetime starting from simple discrete building blocks? I will demonstrate that not only is the answer yes, but also that to do so, surprisingly simple building blocks suffice. Holographic 2D toy models result that you can play with and investigate in terms of De-Bruijn-type sequences, dual descriptions and path sums."
http://fqxi.org/data/essay-contest-files/Koelman_Holomata03.pdf
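
The essay's toy models are analyzed "in terms of De-Bruijn-type sequences". For orientation, here is a standard generator for ordinary de Bruijn sequences (a generic sketch, not the essay's own construction): B(k, n) is a cyclic string over a k-letter alphabet in which every length-n word appears exactly once.

# Standard recursive construction of a de Bruijn sequence B(k, n)
# (concatenation of Lyndon words whose lengths divide n).

def de_bruijn(k: int, n: int) -> list:
    a = [0] * (k * n)
    seq = []

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

print(de_bruijn(2, 3))   # [0, 0, 0, 1, 0, 1, 1, 1] -> the cycle 00010111
                         # contains each of the 8 binary triples exactly once.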

"2.5 Remote Viewing

The case is somewhat different when the object of interest is not in the immediate vicinity of the percipient so that space/time information is unavailable for decoding non-local information. The phenomenon of remote viewing has been researched extensively by Puthoff and Targ (1976) of Stanford Research Institute and successfully utilized by intelligence agencies in the United States (Puthoff, 1996), and likely elsewhere, ever since. For the purpose of this paper, the questions of interest in this case are: "What is the reference signal used to decode the quantum holographic information in the absence of classical space/time signals; and how is pcar established by the percipient?" Experimental protocols for remote viewing normally provide clues to the location of the object such as a description, a picture, or location by latitude and longitude, that is to say, an icon representing the object. These clues seem to be sufficient for the percipient to establish a resonance with the object. Normal space/time information (visual, acoustic, tactile) about the object is not being directly perceived by the percipient, nor does the object usually appear at its physical location in space/time like a photograph or map in the mind. Rather, the information is perceived and presented as internal information and the percipient must associate the perceptions with his/her internal data base of experience in order to cognize and to describe the object's perceived attributes.

In the case of complex objects being remotely viewed, the perceived information is seldom so unambiguous as to be instantly recognizable as correct. Sketches, metaphors and analogies are usually employed to cognize and communicate the non-local information. A considerable amount of training, team work and experience are necessary to reliably and correctly extract complex non-local information from a distant location. The information appears to the percipient as sketchy, often dream-like, and wispy, subtle impressions of the remote reality. Very skilled individuals may report the internal information as frequently vivid, clear and unambiguous. The remote viewing information, being strictly non-local, and in this hypothesis, the information perceived by quantum holography, is missing the normal space/time components of information necessary to completely specify the object. It has been demonstrated that this intuitive mode of perception can be trained in most individuals. Perhaps additional training and greater acceptance of this capability will allow percipients to develop greater detail, accuracy and reliability in their skill. In principle, training will not only increase the skill and accuracy, but should cause the appropriate neural circuitry to become more robust as well.

In the absence of space/time (electromagnetic) signals to establish the pcar condition and to provide a basis for decoding the quantum hologram, an icon representing an object seems to be sufficient to allow the brain to focus on the object and to establish the pcar condition. However, a reference signal is also required to provide decoding of the encoded holographic phase dependent information. Marcer (1998) has established, using Huygens' principle of waves and secondary sources, that any waves reverberating through the universe remain coherent with the waves at the source, and are thus sufficient to serve as the reference to decode the holographic information of any quantum hologram emanating from remote locations.
...
The case for mind/mind and mind/matter interactions is impressively well documented over many decades as studies in phenomenology, with staggering probabilities against chance having produced the results. The discovery of the non-local quantum hologram, which is theoretically sound and experimentally validated in at least one application, the fMRI, is sufficient to postulate that the quantum hologram is a solution to the foregoing enigma. Further, recognition that the quantum hologram is a macro-scale, non-local, information structure described by the standard formalism of quantum mechanics extends quantum mechanics to all physical objects including DNA molecules, organic cells, organs, brains and bodies. The discovery of a solution which seems to resolve so many phenomena, and also that points to the fact that in many instances classical theory is incomplete without including the subtle non-local components involved, suggests a major paradigm change must be forthcoming.

The papers already published by Marcer and Schempp proposing a learning model both for DNA and prokaryote cells, which uses quantum holography, suggest that evolution in general is driven by a learning feedback loop with the environment, rather than by random mutations. This solution to biological evolution was proposed by Lamarck in 1809 but discarded for the mechanistic solution of random mutations by the colleagues of Darwin.

The fact that non-local correlations and non-local quantum information can now be seen as ubiquitous in nature leads to the conclusions that the quantum hologram can properly be labeled as "nature's mind" and that the intuitive function we label in humans as the "sixth sense" should properly be called the "first sense". The perception of non-local information certainly preceded and helped to shape, through learning feedback, the sensory systems that evolved in planetary environments, and which we currently label as the five normal senses.
We must conclude that evolved, complex organisms which can form an intent can produce and often do produce non-local causal effects associated with that intent. Further, that attention alone produces coherence in nature that in some measure reduces randomness.

Finally, I conclude that the cited experiments and current understanding of non-locality in nature is sufficient to postulate that non-locality is the antecedent attribute of energy and matter which permits perception and is the root of the consciousness which manifests in the evolved organisms existing in three dimensional reality."
http://www.quantrek.org/Articles_of_interest/Natures_mind.htm

"Courtney Brown, Ph.D., presenting "Evidence of Multiple Universes" at the University of Virginia during the annual meetings of the Society for Scientific Exploration in May 2009. Dr. Brown uses remote-viewing data collected with a strict scientific protocol to show that multiple realities actually do exist. It is possible to isolate a single reality for viewing using a particular experimental design."
http://www.youtube.com/watch?v=zO9UYsqPlTc

"Holistic awareness, or self-reference, emerges from an inward necessity which is satisfied as information is chosen from the context in which a system evolves. That is why experience/perception is fundamental in the development of all matter, but especially in sentient matter; because we need it in order to be able to choose. This is why Nature (self-organized matter) transformed into brains with eyes; to more efficiently carry out this self-reference function. How could matter get organized if it could not observe itself? Matter, in order to evolve, had to communicate in any way naturally possible (e.g., surface vibrations, air vibrations, EM radiation and non-local communication). Biological organisms evolved to use light to their benefit very slowly. As we already know, it took Nature billions of years (from the Precambrian to the Cambrian era) just to develop eyesight.

Human consciousness evolved from the same holistic awareness property all matter has shown to possess. The evidence suggests that the objective Universe was here before human observers and that wave function collapse is an old function of matter which, through a self-reference mechanism inherent in self-animated matter, evolved to what our consciousness is today."
http://doesgodexist.multiply.com/journal/item/639/Is_Consciousness_Fundamental

"In the context of language, self-reference is used to denote a statement that refers to itself or its own referent. The most famous example of a self-referential sentence is the liar sentence: "This sentence is not true." Self-reference is often used in a broader context as well. For instance, a picture could be considered self-referential if it contains a copy of itself (see the animated image above); and a piece of literature could be considered self-referential if it includes a reference to the work itself. In philosophy, self-reference is primarily studied in the context of language. Self-reference within language is not only a subject of philosophy, but also a field of individual interest in mathematics and computer science, in particular in relation to the foundations of these sciences.

The philosophical interest in self-reference is to a large extent centered around the paradoxes. A paradox is a seemingly sound piece of reasoning based on apparently true assumptions that leads to a contradiction. The liar sentence considered above leads to a contradiction when we try to determine whether it is true or not. If we assume the sentence to be true, then what it states must be the case, that is, it cannot be true. If, on the other hand, we assume it not to be true, then what it states is actually the case, and thus it must be true. In either case we are led to a contradiction. Since the contradiction was obtained by a seemingly sound piece of reasoning based on apparently true assumptions, it qualifies as a paradox. It is known as the liar paradox.

Most paradoxes of self-reference may be categorised as either semantic, set-theoretic or epistemic. The semantic paradoxes, like the liar paradox, are primarily relevant to theories of truth. The set-theoretic paradoxes are relevant to the foundations of mathematics, and the epistemic paradoxes are relevant to epistemology. Even though these paradoxes are different in the subject matter they relate to, they share the same underlying structure, and may often be tackled using the same mathematical means.

In the present entry, we will first introduce a number of the most well-known paradoxes of self-reference, and discuss their common underlying structure. Subsequently, we will discuss the profound consequences that these paradoxes have on a number of different areas: theories of truth, set theory, epistemology, foundations of mathematics, computability. Finally, we will present the most prominent approaches to solving the paradoxes."
http://plato.stanford.edu/entries/self-reference/
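
The informal argument can be compressed into two biconditionals (a standard rendering of the entry's own reasoning). Writing L for the liar sentence and True(.) for the truth predicate:

\[
L \leftrightarrow \neg\,\mathrm{True}(\ulcorner L \urcorner) \qquad \text{(what } L \text{ says of itself)}
\]
\[
\mathrm{True}(\ulcorner L \urcorner) \leftrightarrow L \qquad \text{(the T-schema instance for } L\text{)}
\]

Chaining them yields $\mathrm{True}(\ulcorner L \urcorner) \leftrightarrow \neg\,\mathrm{True}(\ulcorner L \urcorner)$, a contradiction; the semantic, set-theoretic and epistemic paradoxes mentioned above all trade on a diagonal construction of this shape.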

DIAGONAL ARGUMENTS AND CARTESIAN CLOSED CATEGORIES

"The original aim of this article was to demystify the incompleteness theorem of Gödel and the truth-definition theory of Tarski by showing that both are consequences of some very simple algebra in the cartesian-closed setting. It was always hard for many to comprehend how Cantor’s mathematical theorem could be re-christened as a “paradox” by Russell and how Gödel's theorem could be so often declared to be the most significant result of the 20th century. There was always the suspicion among scientists that such extra-mathematical publicity movements concealed an agenda for re-establishing belief as a substitute for science. Now, one hundred years after Gödel's birth, the organized attempts to harness his great mathematical work to such an agenda have become explicit.
...
The similarity between the famous arguments of Cantor, Russell, Gödel and Tarski is well-known, and suggests that these arguments should all be special cases of a single theorem about a suitable kind of abstract structure. We offer here a fixed point theorem in cartesian closed categories which seems to play this role. Cartesian closed categories seem also to serve as a common abstraction of type theory and propositional logic, but the author’s discussion at the Seattle conference of the development of that observation will be in part described elsewhere [“Adjointness in Foundations”, to appear in Dialectica, and “Equality in Hyperdoctrines and the Comprehension Schema as an Adjoint Functor”, to appear in the Proceedings of the AMS Symposium on Applications of Category theory]."
http://138.73.27.39/tac/reprints/articles/15/tr15.pdf

A UNIVERSAL APPROACH TO SELF-REFERENTIAL PARADOXES, INCOMPLETENESS AND FIXED POINTS

Following F. William Lawvere, we show that many self-referential paradoxes, incompleteness theorems and fixed point theorems fall out of the same simple scheme. We demonstrate these similarities by showing how this simple scheme encompasses the semantic paradoxes, and how they arise as diagonal arguments and fixed point theorems in logic, computability theory, complexity theory and formal language theory.
http://tinyurl.com/6hbh5r5
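
The "single simple scheme" is Lawvere's fixed-point theorem, which can be stated in one line (standard formulation, paraphrased here): if $\phi : A \times A \to Y$ is weakly point-surjective, i.e. every map $f : A \to Y$ arises as $f(x) = \phi(a, x)$ for some point $a$ of $A$, then every endomap $t : Y \to Y$ has a fixed point.

\[
q(x) := t(\phi(x, x)); \quad \text{if } q = \phi(a_0, -), \text{ then } y := \phi(a_0, a_0) \text{ satisfies } y = q(a_0) = t(\phi(a_0, a_0)) = t(y).
\]

Cantor's theorem is the contrapositive (negation on truth values has no fixed point, so no $\phi$ can represent every map $A \to 2$), and the Russell, Gödel and Tarski arguments correspond to other choices of $A$, $Y$ and $t$.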

Conceptual mathematics: a first introduction to categories

"The idea of a "category"--a sort of mathematical universe--has brought about a remarkable unification and simplification of mathematics. Written by two of the best-known names in categorical logic, Conceptual Mathematics is the first book to apply categories to the most elementary mathematics. It thus serves two purposes: first, to provide a key to mathematics for the general reader or beginning student; and second, to furnish an easy introduction to categories for computer scientists, logicians, physicists, and linguists who want to gain some familiarity with the categorical method without initially committing themselves to extended study."
http://books.google.com/books?id=o1tHw4W5MZQC&dq=lawvere+diagonalization&source=gbs_navlinks_s

From Lawvere to Brandenburger-Keisler: interactive forms of diagonalization and self-reference

"We analyze the Brandenburger-Keisler paradox in epistemic game theory, which is a `two-person version of Russell's paradox'. Our aim is to understand how it relates to standard one-person arguments, and why the `believes-assumes' modality used in the argument arises. We recast it as a fixpoint result, which can be carried out in any regular category, and show how it can be reduced to a relational form of the one-person diagonal argument due to Lawvere. We give a compositional account, which leads to simple multi-agent generalizations. We also outline a general approach to the construction of assumption complete models."
http://arxiv.org/abs/1006.0992

Diagonals, self-reference and the edge of consistency: classical and quantum

"Diagonal arguments lie at the root of many fundamental phenomena in the foundations of logic and mathematics - and maybe physics too! In a classic application of category theory to foundations from 1969, Lawvere extracted a simple general argument in an abstract setting which lies at the basis of a remarkable range of results. This clearly exposes the role of a diagonal, i.e. a *copying operation*, in all these results. We shall review some of this material, and give a generalized version of Lawvere's result in the setting of monoidal categories. The same ideas can also be used to give `no go' results, showing that various structural features cannot be combined on pain of a collapse of the structure. This is in the same spirit as the categorical version of No-Cloning and similar results recently obtained by the author. In Computer Science, these show the inconsistency of structures used to model recursion and other computational phenomena with some basic forms of logical structure.In the quantum realm, because of the absence of diagonals (cloning), the situation is less clear-cut. We shall show that various kind of reflexive or self-referential structures do arise in the setting of (infinite dimensional) Hilbert space."
...
The Conway-Kochen-Specker Theorems

Over the past decade, there has been renewed interest in the Kochen-Specker Theorem from 1967, stemming from attempts to prove the existence of free will from it (Conway-Kochen), "nullify" it (Pitowsky, Meyer, Clifton, Kent), generalize it to von Neumann algebras (Doering), or reformulate it in topos theory (Butterfield-Isham and Heunen-Landsman-Spitters). This talk is a survey of these developments and their interrelations.
http://www.perimeterinstitute.ca/Events/Categories,_Quanta,_Concepts/Abstracts/

THE CONWAY-KOCHEN-SPECKER THEOREMS AND THE NATURE OF REALITY

"Princeton mathematicians John Conway and Simon Kochen have recently proved a so-called "Free Will Theorem" (Notices AMS 56, pp. 226-232, 2009), whose mathematical content is largely contained in the beautiful Kochen-Specker Theorem from 1967. The conceptual background to both theorems is not to be found in the debate on Free Will, but in the Einstein-Bohr debate (1927-1949) on such matters as determinism and the existence of Reality. This debate became a philosophical muddle, and it is gratifying that mathematics has now brought it to an end. This talk provides an introduction to these matters, including an improved and unified presentation of the Kochen-Specker and Conway-Kochen Theorems. No background except linear algebra and elementary logic is required."
http://www.win.tue.nl/emacs/abstracts/landsman110525.pdf

The Conway-Kochen Free Will Theorem
http://philosophyfaculty.ucsd.edu/faculty/wuthrich/PhilPhys/MenonTarun2009Man_FreeWillThm.pdf

Conway's Proof Of The Free Will Theorem

We must believe in free will. We have no choice.
-- Isaac B. Singer
http://www.cs.auckland.ac.nz/~jas/one/freewill-theorem.html

THE CONWAY-KOCHEN ‘FREE WILL THEOREM’ AND UNSCIENTIFIC DETERMINISM
http://users.tpg.com.au/raeda/website/theorem.htm

The Kochen-Specker theorem is an important, but subtle, topic in the foundations of quantum mechanics (QM). The theorem provides a powerful argument against the possibility of interpreting QM in terms of hidden variables (HV).
http://plato.stanford.edu/entries/kochen-specker/
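
Stated without physics vocabulary (the standard formulation), the Kochen-Specker theorem says that in a Hilbert space of dimension $d \geq 3$ there is no assignment of definite yes/no answers to all quantum propositions that respects orthogonality:

\[
\text{there is no } v : \{\text{projections}\} \to \{0, 1\} \text{ with } \sum_{i=1}^{d} v(P_i) = 1 \text{ for every decomposition } \sum_{i=1}^{d} P_i = I \text{ into mutually orthogonal projections.}
\]

In other words, non-contextual hidden variables cannot reproduce even the algebraic structure of quantum observables, which is the lemma doing most of the mathematical work in the Conway-Kochen argument discussed above.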

"Heinz Von Forester characterizes the objects “known” by an autopoietic system as eigen-solutions, that is, as discrete, separable, stable and composable states of the interaction of the system with its environment. Previous articles have presented the FBST, Full Bayesian Significance Test, as a mathematical formalism specifically designed to access the support for sharp statistical hypotheses, and have shown that these hypotheses correspond, from a constructivist perspective, to systemic
eigen-solutions in the practice of science. In this article several issues related to the role played by language in the emergence of eigen-solutions are analyzed. The last sections also explore possible connections with the semiotic theory of Charles Sanders Peirce.

“If the string is too tight it will snap, but if it is too loose it will not play.”
-Siddhartha Gautama

“The most beautiful thing we can experience is the mysterious. It is the source of all true art and all science. He to whom this emotion is a stranger, who can no longer pause to wonder and stand rapt in awe, is as good as dead: His eyes are closed.” -Albert Einstein

In Stern (2007a) it is shown how the eigen-solutions found in the practice of science are naturally represented by statistical sharp hypotheses. Statistical sharp hypotheses are routinely stated as natural laws, conservation principles or invariant transforms, and most often take the form of functional equations, like h(x) = c.

The article also discusses why the eigen-solutions’ essential properties of discreteness (sharpness), stability, and composability, indicate that considering such hypotheses in the practice of science is natural and reasonable. Surprisingly, the two standard statistical theories for testing hypotheses, classical (frequentist p-values) and orthodox Bayesian (Bayes factors), have well known and documented problems for handling or interpreting sharp hypotheses. These problems are thoroughly reviewed, from statistical, methodological, systemic and epistemological perspectives.

The same article presents the FBST, or Full Bayesian Significance Test, an unorthodox Bayesian significance test specifically designed for this task. The mathematical and statistical properties of the FBST are carefully analyzed. In particular, it is shown how the FBST fully supports the test and identification of eigen-solutions in the practice of science, using procedures that take into account all the essential characteristics pointed out by von Foerster. In contrast to some alternative belief calculi or logical formalisms based on discrete algebraic structures, the FBST is based on continuous statistical models. This makes it easy to support concepts like sharp hypotheses, asymptotic convergence and stability, and these are essential concepts in the representation of eigen-solutions. The same article presents cognitive constructivism as a coherent epistemological framework that is compatible with the FBST formalism, and vice-versa. We will refer to this setting as the Cognitive Constructivism plus FBST formalism, or CogCon+FBST framework for short."
http://www.ime.usp.br/~jstern/papers/papersJS/jmschk2.pdf
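
The FBST's central quantity can be sketched numerically in a few lines (my own illustration of the Pereira-Stern e-value, not code from the paper). For a sharp hypothesis H: theta = theta0 with a Beta posterior, the tangential set T collects the parameter values whose posterior density exceeds the highest density attained on H, and the evidence in favour of H is ev(H) = 1 - Pr(T | data).

import numpy as np
from scipy import stats

def fbst_evalue(a: float, b: float, theta0: float, grid: int = 100_000) -> float:
    """e-value for the sharp hypothesis H: theta = theta0 under a Beta(a, b) posterior."""
    theta = np.linspace(0.0, 1.0, grid)
    post = stats.beta.pdf(theta, a, b)        # posterior density on a grid
    f_star = stats.beta.pdf(theta0, a, b)     # sup of the density over H (a single point)
    in_T = post > f_star                      # tangential set T
    pr_T = np.sum(post[in_T]) * (theta[1] - theta[0])   # Pr(T | data), Riemann sum
    return 1.0 - pr_T

# Example: 7 successes in 20 Bernoulli trials with a uniform prior gives a
# Beta(8, 14) posterior; evidence in favour of the sharp hypothesis theta = 0.5:
print(fbst_evalue(8, 14, 0.5))

A value near 1 supports the sharp hypothesis and a value near 0 counts against it, and no prior probability mass ever has to be placed on the zero-measure set H, which is one way the FBST sidesteps the Bayes-factor problems mentioned above.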

Biological evolution — a semiotically constrained growth of complexity

Any living system possesses internal embedded description and exists as a superposition of different potential realisations, which are reduced in interaction with the environment. This reduction cannot be recursively deduced from the state in time present, it includes unpredictable choice and needs to be modelled also from the state in time future. Such non-recursive establishment of emerging configuration, after its memorisation via formation of reflective loop (sign-creating activity), becomes the inherited recursive action. It leads to increase of complexity of the embedded description, which constitutes the rules of generative grammar defining possible directions of open evolutionary process. The states in time future can be estimated from the point of their perfection, which represents the final cause in the Aristotelian sense and may possess a selective advantage. The limits of unfolding of the reflective process, such as the golden ratio and the golden wurf are considered as the canons of perfection established in the evolutionary process.
...
For the description of observable world, which consists of the systems perceiving both outer objects and an inner self, an apparatus of the set theory was applied (Bounias, Bonaly 1997a). A special type of sets (closed sets) exists upon intersection of topological spaces owning different dimensions. This intersection will incorporate a contradiction (fixed point) in the description. Fixed points will generate internal choice accounting for the biological self. This description provides theoretical justification for the existence of memory. The closed sets in this approach are similar to the monads of Leibniz (1965), which constitute and observe the Universe. The empty set will correspond to a vacuum that is still not allotted by features (Bounias, Bonaly 1997b). The memory appears as a ‘sign-creating activity’ (Hegel 1971), linking sets with different dimensions.

A concept emphasising the fixed point as a central element of the contradictory structure, uniting parts and a whole, was applied to biological systems by Gunji et al. (1996, 1999). Following this approach, an uncertainty in interaction between biosystem and environment is reduced via formation of a self-reflective loop, which leads to establishment of emergent computation such as primitive recursive functions. Time in this approach separates contradictory statements allowing them to appear in a sequential order. In this model, all interactions encompass the notion of detection. The latter can be expressed as a process generating a contradiction. The process of internal choice in the course of adaptation includes inducing a fixed point and addressing a fixed point. It can be compared to indicating an element with indicating a set consisting of elements, that is, to Russell’s paradox. Evolution as a formation of reflective loops during measurement is generally relevant to resolving a paradox or a logical jump.
...
Dubois (1997) introduced a concept of the incursive computation, in the sense that an automaton is computed at the future time t+1 as a function of its neighbour automata at the present and/or past time steps but also at the future time t+1. The development of this concept for inclusion of multiple states led to the concept of hyperincursion, which is an incursion when several values can be generated at each time step. The series of incursive and hyperincursive actions will produce fractal patterns defined by functions of the past, the present as well as the future states. External incursive inputs cannot be transformed to a recursion. But they can be internalised and thus transformed to recursive inputs via self-reference (as being memorised in the system as signs). Interference of inputs in fractal generation gives rise to various fractal patterns with different scaling symmetries. These patterns have however some fundamental symmetrical rules at different scales, corresponding to potential existence of certain canons in incursive computation. Hyperincursion means superimposition of states similar to that in quantum computation (Dubois 1998). In incursive and hyperincursive fields (which are viewed as hypersets, i.e. sets including themselves), indecidabilities and contradictions occur (in Gödelian sense): the fractal machine operates in a non-algorithmic way and the formal system cannot explain all about itself (indecidability). The transformation of a non-local incursive system to a local recursive system leads to a folding of each automaton to the other ones from the future time to the present time. We will show later that the internal evolutionary process can be modelled as a function of the system’s state at time past, present and future with fundamental consequences for biological perfection.
...
Interaction between the whole and the parts can be viewed as an intersection of the sets with different dimensions forming a contradiction in the sense of Russell’s paradox (the fixed point) (Bounias, Bonaly 1997a). This intersection may represent a harmony or a disharmony, depending on how parts are observed within a whole observing it. A harmony appears as a threshold for establishing a connection between local and global periods of iteration in recursive embedding (Mignosi et al. 1998). When viewed as a recursion (reflected from incursion), the preceding motif unit is transferred into the subsequent one by a certain fixed similarity transformation g: S_{k+1} = g(S_k). The resulting domains (having certain quantitative values) are hierarchically embedded into one another and function at every level with different clock time periods (Petukhov 1989). The limit of actualisation fits optimality of the structure being actualised, thus it provides the existence of most optimal solutions for design."
http://www.mun.ca/biology/igamberdiev/SignSystemStudies2002.pdf
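
Dubois's contrast between recursion and incursion is easy to see in code. Below is a minimal sketch (an oscillator illustration in the spirit of Dubois's incursive algorithms; the specific example and names are mine, not necessarily those of the cited papers): the recursive update uses only present values, while the incursive update lets the new velocity depend on the already-computed future position x(t+1).

def recursive_step(x, v, dt=0.1, omega=1.0):
    """Plain recursion: both updates use only the state at time t."""
    return x + dt * v, v - dt * omega ** 2 * x

def incursive_step(x, v, dt=0.1, omega=1.0):
    """Incursion: the velocity update uses the future position x(t+1)."""
    x_next = x + dt * v
    v_next = v - dt * omega ** 2 * x_next   # depends on the state at t+1
    return x_next, v_next

# Iterating the incursive step keeps the oscillation bounded, while the purely
# recursive (explicit) step slowly spirals outward -- a small demonstration of
# how folding 'future' information back into the present changes the dynamics.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = incursive_step(x, v)
print(x, v)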

The Future of Life of Earth: Ecosystems as Topological Spaces
...
As a corollary of the Bolzano, Weierstrass, and Heine-Borel-Lebesgue theorems, it is inferred that a future for Life on Planet Earth rests on two major conditions: (i) maximization of complementarity in habitat occupation and resource utilization, and (ii) reciprocal contribution of subsystems in filling each other with components needed for improving their respective functions and structures, i.e. mutualism. Objective foundations for assessing features of a further "universal right" are derived from the need for all livings to fulfill the biological conditions necessary for life to be sustainably possible. In conclusion, should natural conditions hit some difficulties on the way to these accomplishments, it would be the honor of humankind to contribute to improving, rather than impairing, the trends of evolution towards the goals of reaching "optimum states" in which competition is minimized and mutualism is generalized.
http://books.google.com/books?id=V7zhUBg1qesC&lpg=PA206&ots=kgL0cozrCF&lr&pg=PA206#v=onepage&q&f=false

"Mutualism is the way two organisms biologically interact where each individual derives a fitness benefit (i.e. increased reproductive output). Similar interactions within a species are known as co-operation. It can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other. Mutualism and symbiosis are sometimes used as if they are synonymous, but this is strictly incorrect: symbiosis is a broad category, defined to include relationships which are mutualistic, parasitic or commensal. Mutualism is only one type."
http://en.wikipedia.org/wiki/Mutualism_(biology)

attractor - A quantum of information (e.g., an idea, thought, vision) capable of capturing a system's "attention" and temporarily drawing that system into an iterative pattern of behavior the complexity of which is determined by its type: A fixed point attractor produces repetitive behavior; a limit cycle attractor results in a somewhat more complex but still limited range of action; the behavior resulting from a torus attractor, when graphed, takes the form of a standard "bell curve"; a strange attractor produces highly complex but nonetheless patterned behavior; while the chaotic attractor leads to just that in the common sense of the term.

attractor basin - A behavioral trough into which a system "falls" and is held to the attractor over a period of time. While the basins of the fixed point, limit cycle, and torus attractors tend to be the deepest and most difficult to escape, the strange attractor basin is much shallower.

autopoiesis - From the Greek roots - auto (self-) and poiesis (creation or production), a term referring to a chaordic system's capacity for self-organization, self-reference, and self-iteration; an autopoietic system is an autonomous and self-maintaining unity the components of which, through their interaction, recursively generate dynamical processes self-similar to the ones that produced them with no apparent inputs and outputs.

bifurcation - The point at which a system's behavior changes abruptly from one mode to another. When water, for instance, crosses the boundary separating its liquid state from the solid state we call "ice," it is said to have bifurcated. In higher order systems such as business enterprises, bifurcation points are known colloquially as windows of opportunity.

Butterfly Effect - A metaphor suggested by meteorologist Edward Lorenz to explain a system's sensitive dependency on initial conditions, to wit, the ability of a butterfly flapping its wings in Hong Kong to produce a perturbation in its local environment which then amplifies over time and across space until it results in an unexpected change in the weather in London. Based on this construct, the claim that the butterfly "caused" the shift in weather is literally accurate.
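
The attractor, bifurcation and Butterfly Effect entries can all be illustrated with a single line of arithmetic, the logistic map x -> r*x*(1 - x) (a generic sketch, not drawn from any source quoted here): at r = 2.8 the orbit settles onto a fixed point attractor, at r = 3.2 onto a period-2 limit cycle, and at r = 3.9 two orbits started a billionth apart soon bear no resemblance to each other.

def logistic_orbit(r, x0, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

print(logistic_orbit(2.8, 0.2)[-3:])            # fixed point attractor (~0.643)
print(logistic_orbit(3.2, 0.2)[-4:])            # period-2 limit cycle (~0.513, ~0.799)
a = logistic_orbit(3.9, 0.200000000)[-1]        # chaotic regime: a difference of
b = logistic_orbit(3.9, 0.200000001)[-1]        # one part in a billion in x0 ...
print(abs(a - b))                               # ... is amplified by many orders of magnitude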

chaos theory - A mathematical formulation based on the 20th century discovery that chaos and order are not opposites from which to choose, but two interpenetrated and complementary aspects of a singular reality. The chaos theory has succeeded in describing and explaining the behavior of complex, dynamical, non-linear, far-from-equilibrium systems and proves them to be the rule and not the exception throughout the universe.

chaos - A condition of disarray, discord, confusion, upheaval, bedlam, and utter mess arising from the complete absence of order. Note: The common connotation of the term is provided here merely to distinguish it from "capital-C" Chaos, the emerging metaphysics that demonstrates for once and for all that the complete absence of order is both a logical and a factual impossibility.

Chaos - The metaphysics which continues to emerge from the triad of primary discoveries of 20th century science: quantum mechanics, relativity, and the chaos theory; shorthand for the science of complex, dynamical, non-linear, far-from-equilibrium systems.

chaordic system - A complex, dynamical, non-linear, far-from-equilibrium system the behavior of which is simultaneously chaotic (unpredictable) and orderly (deterministic).

complexity - A general property of a chaordic system that arises out of the complicated arrangement of a myriad of interacting components. Since each of these components derives its meaning from the context of the whole system, once "removed" from that context, it instantly loses that meaning. Note: Although you may have heard mention of complexity theory or complexity science, in fact there are no such things.

Connectivity - The Chaos principle that refers to the non-local relatedness of every component of a chaordic system with each and every other component at some positive value of entanglement; that is, although an absolute connection exists between any two particles, the relative strength of that connection vis-à-vis any other pair of components is dependent on the extent of their interactivity.

Consciousness - The Chaos principle that attests to the universal primacy of Mind in the non-ordinary sense of that term. Consciousness is deemed to be the groundstate, the fundamental source, and the essence of all that exists ranging from mental objects, e.g., thoughts, theories, beliefs, emotions, etc., to physical "things" such as rocks and refrigerators.

dialogue - From the Greek words dia (through or among) and logos (meaning), a mode of communication in which meaning flows through and among conversants considered as a whole. This form of interaction is best understood in contrast to the more commonplace discussion, a term derived from the Latin discutere (to break into pieces).

Dissipation - The Chaos principle referring to the general tendency of a chaordic system to "fall apart" structurally when approximating its Limits to Growth. Note: When dissipation is intentional (chosen when facing the window of opportunity), the chances are high albeit not guaranteed that the system will succeed in maintaining its informational content and organizational integrity.

Determinism - A doctrine underpinning the conventional model of reality which holds that the subsequent state of a system is derived linearly from its prior state in a predictable chain of cause and effect, thus perfect knowledge of previous states implies perfect knowledge of the system's future.

Dynamical - Possessing, exerting, or displaying energy; disposed to motion or action although not necessarily rhythmic motion displaying regularity.

edge of chaos - A term referring to the figurative "border" between equilibrium or near-to-equilibrium conditions (ranging from no change to linear change) and chaos where it is believed that chaordic systems are most flexible and evolvable; the far-from-equilibrium state to which systems "evolve to evolve."

Emergence - The Chaos principle that refers to the process through which a chaordic system rises to higher-orders of complexity revealing properties and behaviors that originate from the collective dynamics of the system, yet are neither found in nor are directly deducible from its components.

entanglement - The tendency of two or more particles, having once interacted, to behave in correlation with each other even though they may be separated spatially.
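
The textbook example behind this entry is the two-qubit Bell state (standard quantum mechanics, added here only for concreteness):

\[
|\Phi^{+}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
\]

for which a measurement on either particle immediately fixes the outcome on the other, however far apart the two are, even though neither outcome is determined beforehand.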

equilibrium (equilibrial) - The ultimate stasis predicted by the Second Law of Thermodynamics mandating that the energy of a system, when left to itself, will eventually spread out towards a state of structure-less uniformity known as maximum fatal chaos; scientific jargon for death.

far-from-equilibrium (FFE) - The inverse of equilibrium or near-to-equilibrium; a term describing the conditions approximating those of the strange attractor basin in which chaordic systems achieve optimal sustainability; also known as the "edge of chaos".

fractal - A geometric object or pattern whose spatial form is irregular (nowhere smooth), self-similar (its irregularity repeats itself across many scales), and iterative to ever-higher orders of complexity; for example, a chaordic system.
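
The degree of irregularity can be quantified by the similarity dimension (a standard formula, not part of the original glossary): an object made of N copies of itself, each scaled down by a factor s, has

\[
D \;=\; \frac{\log N}{\log(1/s)}, \qquad \text{e.g. the von Koch curve: } D = \frac{\log 4}{\log 3} \approx 1.26,
\]

a non-integer value strictly between that of a line (1) and of a plane (2), which is the usual signature of a fractal.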

Indeterminacy - The Chaos principle which posits that the future of a chaordic system is not determined or fixed in advance (although it is sensitively dependent on initial conditions).

holarchy - A term coined by philosopher Arthur Koestler denoting a hierarchy of holons comprising multiple levels of increasing wholeness, each of which includes all lower levels.

holon - A term derived from the Greek holos for "whole" but denoting a whole/part - an entity that is a whole in itself and simultaneously, a part of a larger whole: a whole atom is part of a whole molecule which is part of a whole cell which is part of a whole organism, and so on indefinitely.

Interventionism - A premise underlying conventional notions of reality which holds that a system, when left to itself, will tend toward disorder (chaos), and therefore requires the action of an external agent to maintain the system in a steady state of equilibrium via the imposition of order and control.

Law of Limits - A corollary to the Second Law of Thermodynamics which applies to chaordic systems (as opposed to closed ones) mandating that none can grow indefinitely given that over time, the conditions that once fostered growth change. Thus all such systems eventually reach their "limits" -- a bifurcation point also known as a window of opportunity.

Materialism - A philosophy underpinning the conventional model of reality which asserts that only that which is accessible to the human senses or their instrumental extensions can be regarded as "real." Everything else, e.g., thoughts, ideas, conscious experience, etc., is to be regarded as epiphenomenal: a secondary phenomenon that results from the interaction of matter.

matter - A physical body or substance that occupies space and can be perceived by one or more senses.

Mechanism - A philosophy which holds that the universe and all its phenomena are best understood as completely mechanical systems - that is, systems composed entirely of matter in motion and therefore governed by mechanical laws. Note: This axiom was captured in the 17th-century metaphor attributed to French philosopher René Descartes, who suggested that ours is a "clockwork universe."

non-linear (system) - A system whose behavior cannot be explained or predicted by linear or first-order equations, but is governed instead by a complexity of reciprocal relationships or feedback loops.

non-locality - A term referring to the ability of one particle (or particular form) to "communicate" instantaneously with another even though separated spatially, in apparent violation of Einstein's limitation on the speed of signal exchange (light); also known as action-at-a-distance.

orgmind - An amalgamation of the terms "organization" and "mind" coined by Laurie A. Fitzgerald to point to the integrated collective consciousness governing the behavior of an organization as a whole and of its individual members.

perturbation - A small change, variation, or disturbance in the regular motion, course, arrangement, or state of a chaordic system, the quantity of which increases as the system moves away from the equilibrium state, reaching a crescendo in far-from-equilibrium conditions.

quantum - A quantity or amount; the smallest discrete quantity that can exist as a unit; a "packet" of energy or matter subject to the principles of quantum physics.

Quantum physics - The science of the quantum (sub-atomic) world, which, along with Einstein's general relativity and chaos theory, serves as one of the pillars of the so-called "new science" underpinning the metaphysics known as Chaos.

Quantum Wave Function - The QWF (pronounced quiff) is the equation formulated by Erwin Schrödinger, one of the architects of quantum physics, describing the wavelike nature of an object prior to its manifestation in particular form; that is, as pure potential wholly lacking in substance, mass, or dimensions of any kind.

Reductionism - A premise underlying conventional notions of reality that asserts that complex phenomena are best understood by reducing them to their component parts and then grasping the nature of those parts which are regarded as independent units.

Relativity - The geometrical theory of gravitation authored by Einstein in 1915, based on the precept that there is no such thing as a one-and-only, correct and absolute viewing point, since every perspective is relative to every other; thus the laws of nature must be such that they have the same validity and form in all frames of reference; along with Quantum physics and chaos theory, one of the fundamental pillars of the "new science."

self-iteration - The inherent ability of a chaordic system to reproduce itself in a series of self-similar albeit increasingly complex fractal forms via the repetition of intrinsic algorithmic "instructions."

self-organization - The inherent ability of a chaordic system to form higher-order structures or functions without the need for the intervention of an external agent; another term for evolution with minimal input from the environment.

self-reference - The inherent ability of a chaordic system to refer to its own interiority for all the "instructions" it will ever need to do what it does and be what it is.

sensitive dependency (on initial conditions) - The tendency of a chaordic system to change suddenly and dramatically as a consequence of tiny perturbations to which it is highly sensitive; known whimsically as the butterfly effect, an allusion to the fact that a highly complex system is susceptible to the simplest and seemingly most insignificant disturbance in its starting conditions. Practically speaking, sensitive dependency renders the long-term future of a system both deterministic and unpredictable. [A short numerical illustration follows this glossary.]

sustainability - The measure of the capacity of a chaordic system to maintain the continuity of its core identity, internal organization, and structural integrity while at the same time undergoing dissipation .

synchronicity - A term coined by Carl Jung to describe "temporally coincident occurrences of acausal events" or, more informally, meaningful coincidences; non-random, synchronous incidents of meaningfully related events (such as similar thoughts in widely separated persons, or a mental image of an unexpected event before it happens) that cannot be explained by conventional mechanisms of causality. Note: non-locality is greater between entangled or coupled agents.

Uncertainty Principle - A fundamental precept of quantum physics established by Werner Heisenberg which holds that one cannot measure the values of certain conjugate quantities (e.g., position and momentum) with arbitrary precision; an increase in the accuracy of measuring one quantity of an elementary particle inevitably results in a decrease in the measurement accuracy of the other.

wave/particle duality - A term referring to the fact that light (and potentially other phenomena as well) manifests simultaneously in both wave-like (spread out all over space) and particle-like (localized in discrete packets of energy) forms; also known by the whimsical nickname wavicle.
http://www.orgmind.com/chaosspeak.php
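
As a quick aside on the sensitive dependency entry above, here is a minimal numerical sketch in Python (the map, the parameter value, and the starting points are invented purely for illustration): two trajectories of the logistic map begun a billionth apart diverge to order one within a few dozen steps.

# Sensitive dependence on initial conditions ("the butterfly effect")
# illustrated with the logistic map x -> r*x*(1-x) in its chaotic regime.
r = 4.0
x, y = 0.123456789, 0.123456790   # starting values differing by one part in a billion

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.9f}")

# The separation grows from about 1e-9 to order 1 within a few dozen iterations,
# even though each trajectory is fully deterministic.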

Reasons as Causes in Bayesian Epistemology
http://www.hss.cmu.edu/philosophy/glymour-pdfs/ReasonsasCauses.pdf

"Probability theory is increasingly important to philosophy. Bayesian probabilistic models offer us ways of getting to grips with fundamental problems about information, coherence, reliability, confirmation, and testimony, and thus show how we can justify beliefs and evaluate theories. Bovens and Hartmann provide a systematic guide to the use of probabilistic methods not just in epistemology, but also in philosophy of science, voting theory, jurisprudence, and cognitive psychology."
http://books.google.com/books?id=geg-rX_qICIC&dq=bayesian+epistemology&source=gbs_navlinks_s

Recursive Definitions, Fixed Points and the Y Combinator
http://www.cs.utexas.edu/~lavender/courses/CS395T/dsl/lectures/YCombinator.pdf

The Why of Y

"Did you ever wonder how Y works and how anyone could ever have thought of it? In this note I’ll try to explain to you not only how it works, but how someone could have invented it. I’ll use Scheme notation because it is easier to understand when functions passed as arguments are being applied. The point of Y is to provide a mechanism to write self-referential programs without a special built-in means of accomplishing it. In Scheme there are several mechanisms for writing such programs, including global function definitions and letrec."
http://www.dreamsongs.com/NewFiles/WhyOfY.pdf
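
As a small illustration of the fixed-point idea sketched above, here is the applicative-order variant of Y (often written Z) in Python rather than the Scheme used in the paper; the factorial example is my own, not the paper's.

# The applicative-order Y combinator ("Z"). The eta-expansion
# lambda v: x(x)(v) delays evaluation, which a strict language requires.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial written with no self-reference of its own; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120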

9.8 Paradoxical Combinator:
http://tinyurl.com/4w3t3hw

Indirect Self-Reference
http://en.wikipedia.org/wiki/Indirect_self-reference

"A quine is a computer program which produces a copy of its own source code as its only output. The standard terms for these programs in the computability theory and computer science literature are self-replicating programs, self-reproducing programs, and self-copying programs.

A quine is a fixed point of an execution environment, when the execution environment is viewed as a function. Quines are possible in any programming language that has the ability to output any computable string, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language.

The name "quine" was coined by Douglas Hofstadter in his popular science book Gödel, Escher, Bach: An Eternal Golden Braid in the honor of philosopher Willard Van Orman Quine (1908–2000), who made an extensive study of indirect self-reference, and in particular for the following paradox-producing expression, known as Quine's paradox:

"Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation.

The name has become popular among amateur enthusiasts and programming hobbyists, although it is not adopted by computer scientists.

A quine takes no input. Allowing input would permit the source code to be fed to the program via the keyboard, opening the source file of the program, and similar mechanisms.

In some languages, an empty source file is a fixed point of the language, producing no output. Such an empty program, submitted as "the world's smallest self reproducing program", once won the "worst abuse of the rules" prize in the Obfuscated C contest."
http://en.wikipedia.org/wiki/Quine_(computing)
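
For concreteness, one well-known style of Python quine (just one formulation among many):

# The two lines below print themselves exactly when run
# (these comment lines are not part of the quine proper).
s = 's = %r\nprint(s %% s)'
print(s % s)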

The Principle of Hology (Self-composition)

Hology, a logical analogue of holography characterizing the most general relationship between reality and its contents, is a form of self-similarity whereby the overall structure of the universe is everywhere distributed within it as accepting and transductive syntax, resulting in a homogeneous syntactic medium. That is, because reality requires a syntax consisting of general laws of structure and evolution, and there is nothing but reality itself to serve this purpose, reality comprises its own self-distributed syntax under MU (which characterizes the overall relationship between syntax and content).

The hology property itself is distributed over reality. That is, the informational boundary of a coherent object (syntactic operator) is hologically multiplexed with respect to state (attribute and value) in order to define the descriptive interior of the operator as it participates in global self-processing without input. This multiplexing of possibilities is just the replication of the structure of the boundary over the interior of the boundary as a function of time. Again, the operator ultimately has nothing else in terms of which to express its spatiotemporal capacity.

Hology is implied by MAP because reality is closed under composition and attribution; it is implied by M=R because reality is composed of syntactic operators with generalized mental or cognitive functionality; and it is implied by syndiffeonesis and MU because it is an expression of the relationship between the global spatiotemporal medium and its contents.
...
The Extended Superposition Principle

In quantum mechanics, the principle of superposition of dynamical states asserts that the possible dynamical states of a quantized system, like waves in general, can be linearly superposed, and that each dynamical state can thus be represented by a vector belonging to an abstract vector space. The superposition principle permits the definition of so-called “mixed states” consisting of many possible “pure states”, or definite sets of values of state-parameters. In such a superposition, state-parameters can simultaneously have many values.

The superposition principle highlights certain problems with quantum mechanics. One problem is that quantum mechanics lacks a cogent model in which to interpret things like “mixed states” (waves alone are not sufficient). Another problem is that according to the uncertainty principle, the last states of a pair of interacting particles are generally insufficient to fully determine their next states. This, of course, raises a question: how are their next states actually determined? What is the source of the extra tie-breaking measure of determinacy required to select their next events (“collapse their wave functions”)?

The answer is not, as some might suppose, “randomness”; randomness amounts to acausality, or alternatively, to informational incompressibility with respect to any distributed causal template or ingredient of causal syntax. Thus, it is either no explanation at all, or it implies the existence of a “cause” exceeding the representative capacity of distributed laws of causality. But the former is both absurd and unscientific, and the latter requires that some explicit allowance be made for higher orders of causation…more of an allowance than may readily be discerned in a simple, magical invocation of “randomness”.

The superposition principle, like other aspects of quantum mechanics, is based on the assumption of physical Markovianism. (A Markoff process is a stochastic process with no memory. That is, it is a process meeting two criteria: (1) state transitions are constrained or influenced by the present state, but not by the particular sequence of steps leading to the present state; (2) state transition contains an element of chance. Physical processes are generally assumed to meet these criteria; the laws of physics are defined in accordance with 1, and because they ultimately function on the quantum level but do not fully determine quantum state transitions, an element of chance is superficially present. It is in this sense that the distributed laws of physics may be referred to as “Markovian”. However, criterion 2 opens the possibility that hidden influences may be active.) It refers to mixed states between adjacent events, ignoring the possibility of nonrandom temporally-extensive relationships not wholly attributable to distributed laws. By putting temporally remote events in extended descriptive contact with each other, the Extended Superposition Principle enables coherent cross-temporal telic feedback and thus plays a necessary role in cosmic self-configuration. Among the higher-order determinant relationships in which events and objects can thus be implicated are utile state-syntax relationships called telons, telic attractors capable of guiding cosmic and biological evolution.
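
(An aside, not part of the quoted text: the two Markov criteria above are easy to exhibit mechanically. A toy Python sketch with an invented two-state transition table follows.)

import random

# A toy Markov process: the next state depends only on the current state
# (criterion 1) and involves an element of chance (criterion 2).
# States and transition probabilities are invented for illustration.
STATES = ["A", "B"]
P = {"A": [0.7, 0.3],   # P(next state | current state = "A")
     "B": [0.4, 0.6]}   # P(next state | current state = "B")

state, history = "A", []
for _ in range(12):
    state = random.choices(STATES, weights=P[state])[0]
    history.append(state)

print("".join(history))   # e.g. "AABABBBAABBA"; earlier history is never consulted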

Given that quantum theory does not seem irrevocably attached to Markovianism, why has the possibility of higher-order causal relationships not been seriously entertained? One reason is spacetime geometry, which appears to confine objects to one-dimensional “worldlines” in which their state-transition events are separated by intervening segments that prevent them from “mixing” in any globally meaningful way. It is for this reason that superposition is usually applied only to individual state transitions, at least by those subscribing to conservative interpretations of quantum mechanics.

Conspansive duality, which incorporates TD and CF components, removes this restriction by placing state transition events in direct descriptive contact. Because the geometric intervals between events are generated and selected by descriptive processing, they no longer have separative force. Yet, since worldlines accurately reflect the distributed laws in terms of which state transitions are expressed, they are not reduced to the status of interpolated artifacts with no dynamical reality; their separative qualities are merely overridden by the state-syntax dynamic of their conspansive dual representation.

In extending the superposition concept to include nontrivial higher-order relationships, the Extended Superposition Principle opens the door to meaning and design. Because it also supports distribution relationships among states, events and syntactic strata, it makes cosmogony a distributed, coherent, ongoing event rather than a spent and discarded moment from the ancient history of the cosmos. Indeed, the usual justification for observer participation – that an observer in the present can perceptually collapse the wave functions of ancient (photon-emission) events – can be regarded as a consequence of this logical relationship.

Supertautology

Truth, a predicate representing inclusion in a domain, is the logical property by virtue of which one thing may be identified and distinguished from another at any level of resolution. All theories aim at truth, and reality theory is no exception. With respect to science, there is a problem with truth: beyond the level of direct observation, it cannot be certified by empirical means. To blame are various forms of uncertainty, model-theoretic ambiguity, and the problem of induction: scientific generalizations are circular insofar as they are necessarily based on the assumption that nature is uniform. The problem of induction effectively limits certitude to mathematical reasoning.

This is hardly surprising, for truth is ultimately a mathematical concept. In logic, truth is defined by means of always-true expressions called tautologies. A logical tautology possesses three distinctive properties: it is descriptively universal, it is closed under recursive self-composition, and it is internally and externally consistent on the syntactic and semantic levels of reference. Since logic is the theory of truth, the way to construct a fully verifiable theory is to start with logic and develop the theory by means of rules or principles under which truth is heritable. Because truth is synonymous with logical tautology, this means developing the theory by adjoining rules which themselves have a tautological structure - i.e., which are universal, closed and consistent - and logically extracting the implications. A theory of reality constructed in this way is called a supertautology.

In a supertautological theory of reality, it is unnecessary to assume the uniformity of nature with respect to certain kinds of generalization. Instead, such generalizations can be mathematically deduced…e.g. nomological covariance, the invariance of the rate of global self-processing (c-invariance), and the internally-apparent accelerating expansion of the system."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

Varieties of self-reference

"The significance of any system of explicit representation depends not only on the immediate properties of its representational structures, but also on two aspects of the attendant circumstances: implicit relations among, and processes defined over, those individual representations, and larger circumstances in the world in which the whole representational system is embedded. This relativity of representation to circumstance facilitates local inference, and enables representation to connect with action, but it also limits expressive power, blocks generalisation, and inhibits communication. Thus there seems to be an inherent tension between the effectiveness of located action and the detachment of general-purpose reasoning.

It is argued that various mechanisms of causally-connected self-reference enable a system to transcend the apparent tension, and partially escape the confines of circumstantial relativity. As well as examining self-reference in general, the paper shows how a variety of particular self-referential mechanisms --- autonymy, introspection, and reflection --- provide the means to overcome specific kinds of implicit relativity. These mechanisms are based on distinct notions of self: self as unity, self as complex system, self as independent agent. Their power derives from their ability to render explicit what would otherwise be implicit, and implicit what would otherwise be explicit, all the while maintaining causal connection between the two. Without this causal connection, a system would either be inexorably parochial, or else remain entirely disconnected from its subject matter. When appropriately connected, however, a self-referential system can move plastically back and forth between local effectiveness and detached generality."
http://portal.acm.org/citation.cfm?id=1029789

Does the Mind have a Place in Mathematics?
...
"R. Penrose writes: It is in mathematics that our thinking processes have their purest form. If thinking is just carryingout a computation of some kind, then it might seem that we ought to be able to see this most clearly in our mathematical thinking. Yet, remarkably, the very reverse turns out to be the case. It is within mathematics that we find the clearest evidence that there must actually be something in our conscious thought processes that eludes computation. This may seem to be a paradox - but it will be of prime importance in the arguments which follow, that we come to terms with it."
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.871&rep=rep1&type=pdf

"Throughout his book, and especially in chapter I.11, Mac Lane informally discusses how mathematics is grounded in more ordinary concrete and abstract human activities. This section sets out a summary of his views on the human grounding of mathematics.

Mac Lane is noted for co-founding the field of category theory, which enables a far-reaching, unified treatment of mathematical structures and relationships between them at the cost of breaking away from their cognitive grounding. Nonetheless, his views—however informal—are a valuable contribution to the philosophy and anthropology of mathematics[1] which anticipates, in some respects, the much richer and more detailed account of the cognitive basis of mathematics given by George Lakoff and Rafael E. Núñez in Where Mathematics Comes From. Lakoff and Núñez (2000) argue that mathematics emerges via conceptual metaphors grounded in the human body, its motion through space and time, and in human sense perceptions."
http://en.wikipedia.org/wiki/From_Action_to_Mathematics_per_Mac_Lane

Vicious circles: On the mathematics of non-wellfounded phenomena (Book Review)
http://www.ams.org/journals/bull/1998-35-01/S0273-0979-98-00735-6/S0273-0979-98-00735-6.pdf

Chaotic Logic Language, Thought and Reality From the Perspective of Complex Systems Science
http://www.goertzel.org/books/logic/contents.html

Hyperset Models of Self, Will and Reflective Consciousness

A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and any brainlike AI system. They appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well.
http://goertzel.org/consciousness/consciousness_paper.pdf

Autonomy and Hypersets
...
"According to Rosen (1991, 2000), a central feature of living things is complexity. Rosen, it should be noted, has an idiosyncratic understanding of complexity. First, according to Rosen, whether a system is complex depends only secondarily on the system itself; it depends primarily on the system’s models. A physical system is simple if and only if its behavior can be captured by a model that is Turing computable; otherwise, the system is complex. So, for Rosen, a system is complex just in case its models are non-computable. This connects in vital ways with the above discussion of impredicative definitions and Russell’s theory of types. Put most simply, Russell introduced the theory of types and Poincaré outlawed impredicative definitions specifically to keep logic, set theory and mathematics well-founded or, put anachronistically, Turing-computable. Thus systems whose models disobey the theory of types or contain impredicativities—models, that is, that are not well-founded—are complex in Rosen’s sense. If this is so, we can use hyperset diagrams as a means of modeling systems that are complex. Indeed, we can use hyperset diagrams of models as tests to determine whether those models are complex."
http://www.ehu.es/ias-research/autonomy/doc/chemero_draft.pdf

"The 20th century saw many theories concerned with the “real” nature of living systems. They include, more or less in chronological order, the Osmotic Forest of Stephane Leduc, the General System Theory of Von Bertalanffy, Cybernetics by Norbert Wiener, the Self-reproducing automata of Von Neuman, the notions of Hypercyles by Eigen and Schuster, and Autocatalytic sets by Kauffman, and the metaphor of information which identifies “life” with DNA. To this list we should add the Artificial Life movement, an active, eclectic and interesting field lacking a central conceptual core [2]. In this crowded field, two theories stand out for their focus on the circular organization of cellular metabolism. These two theories are: (M,R) systems created by Robert Rosen (1958) [12] and the notion of Autopoietic systems set forth by Maturana and Varela in the early 1970s [4]. theories reject the computer metaphor as valid to understand cellular organization or brain function
and do not claim that reproduction or evolvability are defining traits of living systems; instead they consider central the undeniable fact that metabolism produces metabolism, which produces metabolism (metabolic closure).

In this manuscript we first introduce the theory of (M,R) systems. Second, we show how, from this theory, it is possible to find self-referential objects defined by the puzzling property of f(f) = f. Third, we introduce Autopoiesis, another theory centered on closure and we explain the relationship between these two theoretical views of living systems. Fourth, we explain why metabolic closure is an idea that must be incorporated in current theoretical models, especially in the new field of Systems Biology. Finally we speculate on the directions that mathematical viewpoints must follow to apply the idea of closure to cellular dynamics."
http://pleiad.dcc.uchile.cl/_media/bic2007/papers/algeibraicbiology2005.pdf
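
(A throwaway sketch, not from the paper: the puzzling property f(f) = f can at least be exhibited for simple functions in Python, though this of course says nothing about metabolic closure itself.)

# Two toy objects with the self-referential flavour of f(f) = f.
f = lambda x: x              # the identity map
print(f(f) is f)             # True: f applied to itself returns f

g = lambda x: g              # a function whose value is always itself
print(g(g) is g, g(0) is g)  # True True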

Some Reflections on Rosen's Conceptions of Semantics and Finality

"Rosen's most impressive achievement is his description of the living organization as one in which all relations of effficient causation are produced internally. Unlike a machine, there is no way to decompose an organism into disjoint physical parts that match with the system's functional units. Rather, in a living system there is an entanglement of processes and processes that regulate other processes, such that a single physical process may function at various logical levels simultaneously.

Such entanglement, when described in formal terms, boils down to an impredicativity: there is eventually no way to keep a distinct track of logical levels. That is: any hierarchical relation between these levels is merely local. More globally seen, these logical levels are not arranged hierarchically but circularly. Thus, an arrangement is envisaged between component parts, yielding a closed loop of mutual specifications«1»."(The idea of merely local hierarchy has been visualized by M.C.
Escher's drawing of infinite staircases, entitled: 'Ascending, descending'.
The idea of mutual specification can be found in Escher's image 'Drawing
hands', in which two drawn hands can be seen each to hold a pencil with
which the other hand is being drawn concurrently.)
http://www.personeel.unimaas.nl/arno.goudsmit/papers/rosenspecial.pdf

Modeling Uncertain Self-Referential Semantics with Infinite-Order Probabilities
http://www.goertzel.org/papers/InfiniteOrderProbabilities.pdf

Reference to abstract objects in discourse

"Reference to Abstract Objects in Discourse presents a novel framework and analysis of the ways we refer to abstract objects in natural language discourse. The book begins with a typology of abstract objects and related entities like eventualities. After an introduction to bottom up, compositional' discourse representation theory (DRT) and to previous work on abstract objects in DRT (notably work on the semantics of the attitudes), the book turns to a semantic analysis of eventuality and abstract object denoting nominals in English. The book then substantially revises and extends the dynamic semantic framework of DRT to develop an analysis of anaphoric reference to abstract objects and eventualities that exploits discourse structure and the discourse relations that obtain between elements of the structure. A dynamic, semantically based theory of discourse structure (SDRT) is proposed, along with many illustrative examples. Two further chapters then provide the analysis of anaphoric reference to propositions VP ellipsis. The abstract entity anaphoric antecedents are elements of the discourse structures that SDRT develops. The final chapter discusses some logical and philosophical difficulties for a semantic analysis of reference to abstract objects. For semanticists, philosophers of language, computer scientists interested in natural language applications and discourse, philosophical logicians, graduate students in linguistics, philosophy, cognitive science and artificial intelligence."
http://books.google.com/books?id=Z1HhRie5K_EC&dq=intensional+semantics+self-reference&source=gbs_navlinks_s

"In recent years, the much larger extent of the problems to do with self-reference has, in this way, become increasingly apparent.

Asher and Kamp sum up (Asher and Kamp 1989, p87):

"Thomason argues that the results of Montague (1963) apply not only to theories in which attitudinal concepts, such as knowledge and belief, are treated as predicates of sentences, but also to “representational” theories of the attitudes, which analyse these concepts as relations to, or operations on (mental) representations. Such representational treatments of the attitudes have found many advocates; and it is probably true that some of their proponents have not been sufficiently alert to the pitfalls of self-reference even after those had been so clearly exposed in Montague (1963)… To such happy-go-lucky representationalists, Thomason (1980) is a stern warning of the obstacles that a precise elaboration of their proposals would encounter."

Thomason mentions specifically Fodor’s “Language of Thought” in his work; Asher and Kamp themselves show that modes of argument similar to Thomason’s can be used even to show that Montague’s Intensional Semantics has the same problems. Asher and Kamp go on to explain the general method which achieves these results (Asher and Kamp 1989, p87):

"Thomason’s argument is, at least on the face of it, straightforward. He reasons as follows: Suppose that a certain attitude, say belief, is treated as a property of “proposition-like” objects – let us call them “representations” – which are built up from atomic constituents in much the way that sentences are. Then, with enough arithmetic at our disposal, we can associate a Gödel number with each such object and we can mimic the relevant structural properties of and relations between such objects by explicitly defined arithmetical predicates of their Gödel numbers. This Gödelisation of representations can then be exploited to derive a contradiction in ways familiar from the work of Gödel, Tarski and Montague."

The only ray of hope Asher and Kamp can offer is (Asher and Kamp 1989, p94): “Only the familiar systems of epistemic and doxastic logic, in which knowledge and belief are treated as sentential operators, and which do not treat propositions as objects of reference and quantification, seem solidly protected from this difficulty.” But see on these, for instance, Mackie 1973, p276f, although also Slater 1986.

Gödel’s famous theorems in this area are, of course, concerned with the notion of provability, and they show that if this notion is taken as a predicate of certain formulas, then in any standard formal system which has enough arithmetic to handle the Gödel numbers used to identify the formulas in the system, certain statements can be constructed which are true, but are not provable in the system, if it is consistent. What is also true, and even provable in such a system, is that, if it is consistent, then (a) a certain specific self-referential formula is not provable in the system, and (b) the consistency of the system is not provable in the system. This means the consistency of the system cannot be proved in the system unless it is inconsistent, and it is commonly believed that the appropriate systems are consistent. But if they are consistent then this result shows they are incomplete, that is, there are truths which they cannot prove."
http://www.iep.utm.edu/par-log/
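
The "Gödelisation" move that Asher and Kamp describe can be mimicked mechanically. A toy Python sketch (the symbol table, the prime-power coding, and the example formula are arbitrary choices for illustration, not Gödel's or Montague's actual coding):

# Toy Gödel numbering: give each symbol a code, then pack a formula
# into a single integer as a product of prime powers.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4}

def first_primes(n):
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(formula):
    codes = [SYMBOLS[ch] for ch in formula]
    g = 1
    for p, c in zip(first_primes(len(codes)), codes):
        g *= p ** c
    return g

print(godel_number("S0+S0=SS0"))   # one integer encoding the whole formula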

"Bayesian probability is one of the different interpretations of the concept of probability and belongs to the category of evidential probabilities. The Bayesian interpretation of probability can be seen as an extension of logic that enables reasoning with uncertain statements. To evaluate the probability of a hypothesis, the Bayesian probabilist specifies some prior probability, which is then updated in the light of new relevant data. The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation. Bayesian probability interprets the concept of probability as "a measure of a state of knowledge", in contrast to interpreting it as a frequency or a "propensity" of some phenomenon.

"Bayesian" refers to the 18th century mathematician and theologian Thomas Bayes (1702–1761), who provided the first mathematical treatment of a non-trivial problem of Bayesian inference. Nevertheless, it was the French mathematician Pierre-Simon Laplace (1749–1827) who pioneered and popularized what is now called Bayesian probability.

Broadly speaking, there are two views on Bayesian probability that interpret the state of knowledge concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic. According to the subjectivist view, the state of knowledge measures a "personal belief". Many modern machine learning methods are based on objectivist Bayesian principles. In the Bayesian view, a probability is assigned to a hypothesis, whereas under the frequentist view, a hypothesis is typically tested without being assigned a probability."
http://www.digparty.com/wiki/Bayesian_probability
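
A minimal numerical illustration of updating a prior "in the light of new relevant data", in Python, with invented numbers (a 1% prior and a test with 90% sensitivity and 95% specificity):

# Bayes' theorem: P(H | D) = P(D | H) * P(H) / P(D).
prior = 0.01                  # P(H)
p_data_given_h = 0.90         # P(D | H)
p_data_given_not_h = 0.05     # P(D | not H)

evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)   # P(D)
posterior = p_data_given_h * prior / evidence
print(f"P(H | D) = {posterior:.3f}")   # about 0.154: the data raise, but do not settle, H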

"It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than frequentist interpretation for his probabilities. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem has argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
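
(An aside on the marginalisation step mentioned here, not part of the quoted post: integrating the posterior over a nuisance parameter can be done on a grid in a few lines. The grids, prior, and likelihood below are invented for illustration.)

import numpy as np

# Marginalise a nuisance parameter b out of a joint posterior p(a, b | D).
a = np.linspace(0.0, 1.0, 101)          # parameter of interest
b = np.linspace(0.0, 1.0, 101)          # nuisance parameter
A, B = np.meshgrid(a, b, indexing="ij")

prior = np.ones_like(A)                                          # flat prior
likelihood = np.exp(-((A - 0.6) ** 2 + (B - 0.3) ** 2) / 0.02)   # invented likelihood

joint = prior * likelihood
joint /= joint.sum()                    # normalised joint posterior on the grid

marginal_a = joint.sum(axis=1)          # sum b away, leaving p(a | D)
print("posterior mode of a:", a[marginal_a.argmax()])   # about 0.6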

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probabilities – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed to, on the one hand, accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn is undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules…."
https://telescoper.wordpress.com/tag/rudolf-carnap/

Bayesian Networks in Epistemology and Philosophy of Science:
http://stephanhartmann.org/Aberdeen_3.pdf

Bayesian Epistemology
http://www.stephanhartmann.org/HajekHartmann_BayesEpist.pdf

Conventional and Objective Invariance: Debs and Redhead on Symmetry

"Much work in the philosophy of science falls into one of two camps. On the one hand, we have the system builders. Often with the help of formal methods, these philosophers aim at a general account of theories and models, explanation and confirmation, scientific theory change, and so on. Here, examples from the sciences are at best illustrations of the philosophical claims in question. On the other hand, we find those philosophers who, dissatisfied by general accounts and often prompted by a close study of the history of science, present detailed accounts of very specific parts of science, without aiming at the development of a general account. If a general account of science is referred to at all, it is refuted (by a case study) or only used to describe a certain episode of scientific research.

Although both of these approaches have their merits and limitations, very few research projects span the whole range of philosophy of science, such that results from general accounts are non-trivially applied to case studies of specific sciences. It is a virtue of Debs and Redhead’s book that they have set out to do exactly that. The authors give a general model of scientific representation, analyse the relations of symmetry, objectivity, and convention in this model, and connect their results with three elaborate examples from physics.

This is a formidable task, especially given the relevance and difficulty of the topic. In our contribution, we try to identify some of the inevitable weaknesses in the first part of the book. This foundational section makes three main philosophical claims: (1) Scientific representation has both a formal and an intertwined social aspect; (2) symmetries introduce the need for conventional choice; (3) perspectival symmetry is a necessary and sufficient condition for objectivity, while symmetry simpliciter fails to be necessary.
...
"the ambiguity existing between inertial reference frames within special relativity. […] [O]ne may think of special relativity as proposing an idealised model O, of actual events in space and time. This is represented by a mathematical model, M, […] in the form of a manifold. It is central to special relativity that certain relations on this manifold are invariant under the Poincare´ group."
http://philpapers.org/s/S.%20Hartmann

The Theory of Theories
...
"The scientific and axiomatic methods are like mirror images of each other, but located in opposite domains. Just replace “observe” with “conceptualize” and “part of the world” with “class of mathematical objects”, and the analogy practically completes itself. Little wonder, then, that scientists and mathematicians often profess mutual respect. However, this conceals an imbalance. For while the activity of the mathematician is integral to the scientific method, that of the scientist is irrelevant to mathematics (except for the kind of scientist called a “computer scientist”, who plays the role of ambassador between the two realms). At least in principle, the mathematician is more necessary to science than the scientist is to mathematics.

As a philosopher might put it, the scientist and the mathematician work on opposite sides of the Cartesian divider between mental and physical reality. If the scientist stays on his own side of the divider and merely accepts what the mathematician chooses to throw across, the mathematician does just fine. On the other hand, if the mathematician does not throw across what the scientist needs, then the scientist is in trouble. Without the mathematician’s functions and equations from which to build scientific theories, the scientist would be confined to little more than taxonomy. As far as making quantitative predictions were concerned, he or she might as well be guessing the number of jellybeans in a candy jar.

From this, one might be tempted to theorize that the axiomatic method does not suffer from the same kind of inadequacy as does the scientific method…that it, and it alone, is sufficient to discover all of the abstract truths rightfully claimed as “mathematical”. But alas, that would be too convenient. In 1931, an Austrian mathematical logician named Kurt Gödel proved that there are true mathematical statements that cannot be proven by means of the axiomatic method. Such statements are called “undecidable”. Gödel’s finding rocked the intellectual world to such an extent that even today, mathematicians, scientists and philosophers alike are struggling to figure out how best to weave the loose thread of undecidability into the seamless fabric of reality.

To demonstrate the existence of undecidability, Gödel used a simple trick called self-reference. Consider the statement “this sentence is false.” It is easy to dress this statement up as a logical formula. Aside from being true or false, what else could such a formula say about itself? Could it pronounce itself, say, unprovable? Let’s try it: "This formula is unprovable". If the given formula is in fact unprovable, then it is true and therefore a theorem. Unfortunately, the axiomatic method cannot recognize it as such without a proof. On the other hand, suppose it is provable. Then it is self-apparently false (because its provability belies what it says of itself) and yet true (because provable without respect to content)! It seems that we still have the makings of a paradox…a statement that is "unprovably provable" and therefore absurd.

But what if we now introduce a distinction between levels of proof? For example, what if we define a metalanguage as a language used to talk about, analyze or prove things regarding statements in a lower-level object language, and call the base level of Gödel’s formula the "object" level and the higher (proof) level the "metalanguage" level? Now we have one of two things: a statement that can be metalinguistically proven to be linguistically unprovable, and thus recognized as a theorem conveying valuable information about the limitations of the object language, or a statement that cannot be metalinguistically proven to be linguistically unprovable, which, though uninformative, is at least no paradox. Voilà: self-reference without paradox! It turns out that "this formula is unprovable" can be translated into a generic example of an undecidable mathematical truth. Because the associated reasoning involves a metalanguage of mathematics, it is called “metamathematical”.

It would be bad enough if undecidability were the only thing inaccessible to the scientific and axiomatic methods together. But the problem does not end there. As we noted above, mathematical truth is only one of the things that the scientific method cannot touch. The others include not only rare and unpredictable phenomena that cannot be easily captured by microscopes, telescopes and other scientific instruments, but things that are too large or too small to be captured, like the whole universe and the tiniest of subatomic particles; things that are “too universal” and therefore indiscernible, like the homogeneous medium of which reality consists; and things that are “too subjective”, like human consciousness, human emotions, and so-called “pure qualities” or qualia. Because mathematics has thus far offered no means of compensating for these scientific blind spots, they continue to mark holes in our picture of scientific and mathematical reality.

But mathematics has its own problems. Whereas science suffers from the problems just described – those of indiscernibility and induction, nonreplicability and subjectivity – mathematics suffers from undecidability. It therefore seems natural to ask whether there might be any other inherent weaknesses in the combined methodology of math and science. There are indeed. Known as the Lowenheim-Skolem theorem and the Duhem-Quine thesis, they are the respective stock-in-trade of disciplines called model theory and the philosophy of science (like any parent, philosophy always gets the last word). These weaknesses have to do with ambiguity…with the difficulty of telling whether a given theory applies to one thing or another, or whether one theory is “truer” than another with respect to what both theories purport to describe.

But before giving an account of Lowenheim-Skolem and Duhem-Quine, we need a brief introduction to model theory. Model theory is part of the logic of “formalized theories”, a branch of mathematics dealing rather self-referentially with the structure and interpretation of theories that have been couched in the symbolic notation of mathematical logic…that is, in the kind of mind-numbing chicken-scratches that everyone but a mathematician loves to hate. Since any worthwhile theory can be formalized, model theory is a sine qua non of meaningful theorization.

Let’s make this short and punchy. We start with propositional logic, which consists of nothing but tautological, always-true relationships among sentences represented by single variables. Then we move to predicate logic, which considers the content of these sentential variables…what the sentences actually say. In general, these sentences use symbols called quantifiers to assign attributes to variables semantically representing mathematical or real-world objects. Such assignments are called “predicates”. Next, we consider theories, which are complex predicates that break down into systems of related predicates; the universes of theories, which are the mathematical or real-world systems described by the theories; and the descriptive correspondences themselves, which are called interpretations. A model of a theory is any interpretation under which all of the theory’s statements are true. If we refer to a theory as an object language and to its referent as an object universe, the intervening model can only be described and validated in a metalanguage of the language-universe complex.

Though formulated in the mathematical and scientific realms respectively, Lowenheim-Skolem and Duhem-Quine can be thought of as opposite sides of the same model-theoretic coin. Lowenheim-Skolem says that a theory cannot in general distinguish between two different models; for example, any true theory about the numeric relationship of points on a continuous line segment can also be interpreted as a theory of the integers (counting numbers). On the other hand, Duhem-Quine says that two theories cannot in general be distinguished on the basis of any observation statement regarding the universe."
http://www.megafoundation.org/CTMU/Articles/Theory.html
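
As a rough computational echo of the model-theory passage above (a toy finite example, not Lowenheim-Skolem itself, which concerns infinite models): a tiny "theory" consisting of commutativity and associativity for one binary operation, and two quite different interpretations that both satisfy every axiom, so the theory alone cannot tell them apart.

from itertools import product

# A toy "theory": commutativity and associativity of a single binary operation.
def is_model(domain, op):
    comm = all(op(x, y) == op(y, x) for x, y in product(domain, repeat=2))
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x, y, z in product(domain, repeat=3))
    return comm and assoc

# Two different interpretations (models) of the same theory.
Z2 = [0, 1]
xor = lambda x, y: x ^ y     # the integers mod 2 under addition
B = [False, True]
disj = lambda x, y: x or y   # the Booleans under disjunction

print(is_model(Z2, xor), is_model(B, disj))   # True True: both satisfy the theory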

"I have lately received correspondence from another member which convinces me that confusion exists concerning both the structure and meaning of Newcomb's problem, AKA Newcomb's paradox. Newcomb's problem calls for one to infer, from given a set of well-defined conditions, which of two alternatives should be selected in order to maximize a certain monetary expectation. It is apparently the impression of some members that the correct solution is "obvious" unless a certain condition ("omniscience") is suspended, at which point all possible solutions are trivial conversions of unknowns into other unknowns. This, however, is where Newcomb's paradox enters the picture. The paradox evolves from higher level (meta-linguistic) consideration of mechanisms implied by the "obvious" solution, whatever that may be to a given solver; it is the upper floor of a split-level maze. The controversy exists solely among those who wander its lower corridors without being able to reach the ledges above, More's the pity, for there resides all meaning."
http://www.megasociety.com/noesis/44/intro.html

The Resolution of Newcomb's Paradox
...
"Let's sum it up. You can be modeled as a deterministic or nondeterministic transducer with an accepting syntax that can be diagonalized, or complemented by logical self-negation. ND can be modeled as a metalogical, metamechanistic programmatic agency, some insensible part of whom surrounds you in a computational space Including physical reality, but not limited thereto. This space is the mechanistic equivalent of the computative regression around which Newcomb's problem is essentially formulated. The existence of this space cannot be precluded by you on the grounds that you cannot directly observe it, nor can it be said by you to deny ND a mechanism of control and prediction of your thought and behavior. Additionally, you have an open-ended run of data which lowers to 1/ 8 the probability that NO is "just lucky". This implies that mechanism does indeed exist, and warrants the adjunction to the axioms of physics an independent, empirical physical axiom affirming that mechanism. This then implies that ND can predict or control human thought and behavior (a somewhat weaker implication, you will notice, than "omniscience"). ND possesses means, motive, opportunity...and you. You are "possessed" by Newcomb's Demon, and whatever self-interest remains to you will make you take the black box only. (Q.E.D.)

Do not let incredulity cheat you of understanding. It may be hard to accept the idea of a mathematical proof of the possibility of demons; if so, you may take solace from the fact that this does not of itself imply their actual existence. Remember that Newcomb's problem includes a hatful of hypothetical empirical data that may have no correspondents in your sector of reality. The proof just given merely prevents you from generally precluding the paranormal experiences reported by others on logical grounds. That is, it restrains you from any blanket distribution of too narrow a brand of "rationalism" over reality. Because I have long been exploring the ramifications of the logic this proof employs, I already know how easy it would be to deal with whoever might dispute it on "scientific" or other grounds, regardless of their prestige or reputed intelligence. There is certainly a strong correlation between rationalistic dogma and the persecution of its foes. Fortunately, it is not too late for "rationalism" to catch up with the advances of twentieth-century logic.

Regarding free will, consider that demons are generally reputed to offer a material reward for the surrender of one's soul. Where we define "soul" as the autoprogrammatic G-extension M' of one's characteristic transductive representation M, this means only that one is rewarded for voluntarily moving aside and letting the demon take over as captain of one's fate. He does this by adjoining to his subject any functional extensions he requires; e.g., d(dn), or m(mn). Or, if restricted determinism prevails but he is not in an especially predictive mood, he might create a parametric extension FND' (analogous to F') enabling a hyperdeterministic override m ? m of one's deterministic output function. Actually, volition may be irrelevant; one might be shoved aside rather than nudged. Where freedom is defined negatively on restricted determinacy, it is duly stratified.

It has by now occurred to most readers that "demon" is a darkly connoted term that could just as well be replaced with "guardian angel" or something equally comforting. After all, ND does not ask his subjects to commit evil, but only to let him do them a rather large favor with no apparent strings attached. Insisting on "free will" here is reminiscent of the child who stares defiantly at its mother while touching the hot stove she has just finished telling it to avoid. If the child's act has any meaning at all, it is just that poorly timed independence can lead to more painful dependency down the line (surprisingly, Isaac Asimov advocates just this kind of defiance in the Newcomb context, and on what he seems to feel are ethical grounds; but we will deal with ethics later). One cannot be willingly "enslaved" by ND before being enslaved by the money he offers. Who among us always behaves in willful defiance of economic reality? My heart goes out to him, for he is either "enslaved" by an institution, or he is at the absolute mercy of the spontaneous, unsolicited charity of those whose own demons generally predispose them to pass him by.

Having quenched the fire, we will now "mop up". Professor Nozick himself has remarked on the possible nonrecursivity of scientific laws involved in the determination of behavior. This, he surmises, prevents the inference of predictability from determinism. But this conclusion entails the confinement of these predicates within a restricted syntax, whereas the extensionality of G renders such a constraint powerless. In other words, restricted recursivity is beside the point for agencies defined to possess sufficient access to the channels through which such laws project as restricted reality, and particularly to their source. All ND has to do is run a G-presimulation of interstratum transmission. Note that the computative requirements for this, which correspond to assumptions on the nature of G, are satisfied given ND's efficacy.

Nozick also remarks on the regression of determinism to self-validating premises. Self-validating languages comprise their own inferential schemata. Validation, being inferential to the extent of quantification, follows the G-stratification of inference. So self-validation is also stratified, and such languages correspond to appropriate levels of computative reality. For sufficiently expressive languages, self-validation becomes diagonalistic self-vitiation. Determinism is thus stratified, and so is its counterpart, freedom. The seemingly infinite regression can be brought to closure at an ultimate form of undecidability, which in turn can be locally resolved with a relativized degree of confirmation."
http://www.megasociety.net/noesis/44/newcomb.html

"12) The Resolution of Newcomb's Paradox allows for any arrangement of chooser and predictor. These designations are meaningful only relative to each other. That particular arrangement in which the predictor dominates the chooser happens to be set by the canonical formulation. In Noesis 45, the theory of metagames was incorporated into the CTMU. This theory, by merging the best interests of the players, allows them to cooperate in nondominative situations where "choosers" and "predictors" are indistinguishable. This idea was further developed in Noesis 48. The bottom line is, if neither player is "predictive" relative to the other, then there exists a jointly accessible algorithm which lets them cooperate to achieve mutual benefits (the theory of metagames). This prevents what George calls a "chaotic exchange of move and counter-move," provided both players know of it. But if one is dominant, as the formulation stipulates, then the situation is identical to that of The Resolution. These possibilities are exhaustive and mutually exclusive, so no further explanation is necessary. George's attention to specific arguments is commendable, but unnecessary resolve the original version of Newcomb's paradox. It can't be used to justify a claim of superiority or priority, particularly when it completely ignores the theory of metagames."
http://www.megasociety.com/noesis/58/05.htm

A DAY IN THE LIFE OF JOJO EINSTEIN, STREET CLOWN
continued

By C. M. Langan

"Refilling his maw, he thanked providence for small favors. Why, the kid might even have subjected him to some kind of Ph.D. thesis on the mysteries of Bayesian paradox! Such paradoxes apply mainly to sets of differently-relativized probabilities, not to localized calculations like those involving the die and the box of marbles. A given subject either has information or not; if so, he either has evidence of its relevance or not. If not, it's useless. Beyond observation, he has only CTMU probability theory and telekinesis. At least the kid had seemed to understand that probabilities are by definition relative: none of them remain valid in the event of a loss or extension of relevant data. The only invariants are certainties, and even these finally obey the same logic as probabilities. You can take the objectivity out of the subject, but you can't take the subjectivity out of objective reality.

Hey, maybe the kid's astronomical IQ had meant a little something after all. At least the boy wonder had known better than to fabricate the initial probability distribution. That, mused Jojo as his tongue batted a lump of Haagen-Dazs around his mouth, would have been like speculating on how many angels could do the lambada on the head of a pin. Where every initial probability distribution has an equal claim to validity, there is no choice but to settle for their average. Probability distributions are represented by curves plotted against the x and y axes, each curve bounding an equal area of total probability. The average of the curves is a flat line, and the area beneath it is evenly distributed along the horizontal axis of independent possibilities, discrete or continuous. The curve isn't bell-shaped, ball-shaped, or banana-shaped, and Jojo was grateful to have been spared the full treatment.
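One hedged way to see the "average of all candidate curves is a flat line" claim at work: draw candidate priors uniformly from the set of all distributions over a few outcomes and average them. The Dirichlet(1, ..., 1) sampling used below is an assumption introduced purely for this sketch, not something taken from the quoted text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample many candidate priors over 6 outcomes, each drawn uniformly from the
# simplex of all probability distributions (Dirichlet with every alpha = 1),
# then average them across samples.
candidates = rng.dirichlet(np.ones(6), size=100_000)
print(candidates.mean(axis=0))   # approximately [0.1667, 0.1667, ..., 0.1667]
```

Up to sampling noise, the averaged curve is the flat distribution the passage describes: probability spread evenly along the axis of possibilities.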

Then again, even though the term "average" reflects the symmetry obtaining among exclusive possibilities in the total absence of information, it sometimes needs qualification. Say you have a Markovian system of co-dependent events. You may want to seek its "attractors" by interpolation and/or extrapolation (which can also be viewed as a kind of "averaging"). There might be a number of different ways to do this; which one is best can be known only by virtue of -- you guessed it -- informative data pertaining to the particular system you're observing. Assuming that this system is deterministic, the attractor guiding it towards its successive equilibria might be "strange", chaotic, and fractally regressive. That can complicate things. But in any case, the amount and nature of your information determines not whether you should symmetrize, but merely how. No exceptions allowed; by definition, every probabilistic algorithm involves some kind of symmetry, even at the deterministic limit where the "probability" goes to unity! The kid had also seemed to understand the importance of information, and the necessity to symmetrize when you run out of it. That was impressive, if only because it isn't clear from the perspective of standard information theory. Info is more than bits of transmitted signal; it's a dependency relation recognized by an acceptor, and its "transmission" is just lower-order recognition on the part of any acceptor defined to include the source, channel and receiver. Information thus describes every possible kind of knowledge. Since objective reality can be known only as knowledge, it must be treated as information.

Over the centuries, different thinkers have come up with some pretty funny ideas concerning what reality "really is", and have been ridiculed by their children and grandchildren for what they thought. But defining information on cognition, and cognition as an informative function of information, ensures that whatever else reality turns out to be -- ether, phlogiston, or tapioca pudding -- it can always be consistently treated as information for cognitive purposes. Since cognition is what science is all about, what's good enough for one is good enough for the other. So if the equation of reality and information is a "joke", reasoned Jojo, our chuckle-happy descendants will be the butts of it no less than we. And anyway, they'll probably be too busy crying about the barren, overpopulated, tapped-out ball of toxic sludge they've inherited from us "comedians" to laugh about much of anything.

The cognitive universality of information implies cognitive symmetry or stasis in its absence. The counterpart of symmetry, asymmetry ("not of like measure"), obtains among variables which differ; to be used in any computation, this difference must be expressed over the qualitative range of a set of distinct differentiative predicates, or the quantitative range of at least one differentiative predicate. If you can specify no such differential scale, then no such scale exists for the purpose of your computation. And with no such scale to parametrize the function assigning weights to the various possibilities, the function can't assign different weights. That leaves you with non-asymmetry or symmetry. As information arrives, this symmetry can be "broken". But information alone can break it, and you're stuck with it until then.

Sameness can sometimes be deduced, in which case it has informational value. But as an effect of symmetrization, sameness means that no differentiative information is available. Whenever there's no info on hypothetical arguments, the corresponding variables must remain open for computative purposes. Symmetrizing the object variables themselves -- for instance, by assigning the same content to each of them -- entails the risk of creating false information; these identical values might be wrong. But symmetrizing on a higher level of induction -- e.g., by equalizing the distributions of possible values of algorithmic parameters -- leaves the variables open. They remain objectively neutral, unbiased, and free of potentially misleading false information.

This unbiased neutrality is a rock-hard inductive criterion. If it orders crepes suzette, it had better not get lumpy oatmeal. And what it craves is the kind of higher-order symmetrization just described. The distinction between higher-order and lower-order symmetrization is subtle and easy to miss, which explains why many probability theorists don't understand how or why they should symmetrize. But then again, your average ankle-biter doesn't understand why he shouldn't rape the cookie jar before every meal, either. And that, concluded the clown, is pretty much the reason why some folks find the "principle of equivalence (or insufficient reason)" hard to swallow, and why they become mired in paradoxes that can't be resolved without it.

Yeah, Jojo knew about Bayesian regression, a paradox generated by a pair of inconsistent assumptions. One is that each one-place predicate occurring in an n-fold observation of r objects is equally likely; the other is that each n- or r-place predicate involving these one-place predicates is equally likely. In the first case, you're likely to observe each possible color the same number of times. In the second, you're as likely to observe any subset of colors more or less often than other subsets. The question is, which assumption should you make in the absence of definite information regarding either one? Jojo, who had been kicked out of MIT for pie-facing the chancellor[1], didn't need a degree to unkink that one. All he had to know was higher-order predicate logic, courtesy of that sidesplitting vaudeville-era comedy team, Russell and Whitehead. Their act, Principia Mathematica, had played in London, Paris, and Reno, and Jojo had it on the original 78.
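A small enumeration, assuming two colors and two observations (numbers picked here only for illustration), shows how the two "equally likely" assumptions in that paragraph come apart:

```python
from itertools import product
from fractions import Fraction

colors = ["red", "blue"]
n = 2  # two observations

# Assumption A: each color is equally likely on every draw (uniformity over the
# one-place predicates).  Every ordered outcome then has probability (1/2)^n.
ordered = list(product(colors, repeat=n))
p_same_A = Fraction(sum(1 for o in ordered if len(set(o)) == 1), len(ordered))

# Assumption B: each composition (how many of each color, order ignored) is
# equally likely (uniformity over the higher-order predicate).
compositions = [(2, 0), (1, 1), (0, 2)]        # (reds, blues)
p_same_B = Fraction(sum(1 for r, b in compositions if r == 0 or b == 0),
                    len(compositions))

print(p_same_A)  # 1/2
print(p_same_B)  # 2/3
```

Uniformity over single draws gives 1/2 for a monochrome pair, uniformity over compositions gives 2/3, and that disagreement is exactly what the Bayesian regression paradox trades on.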

See, the gist of it was, predicate logic is stratified. Each successive order includes all those below it, but none of those above it. What that means to probability theorists is as clear as benzene-laced mineral water: it's impossible to express dependency relationships which violate this rule, known to aficionados as the theory of types. It was designed to avoid the negative circularities called paradoxes or antinomies. It manages to accomplish this by "breaking the loop" of language by which such paradoxes are formed. But here's the sweet part: this same mechanism also prevents artificial tautologies, circular inferences which justify themselves like so many philosophers at a teach-off. Jojo chuckled inwardly: cogito ergo sum quod cogito quod sum quod cogito quod... I think, therefore I am what I think what I am what I think what... Somewhere along the line, you've got to admit a little "objective reality" into the deal! Or, like a snake who lost his coke bottles, you swallow your own metaphysical tail and roll away into the never-never land of fantasy and solipsism, a scaly hula-hoop of empty assumptions powered by a perpetual motion machine.

On the other hand, it's just as stupid to think you're the Solomon who can meat cleaver a Haitian divorce for the subjective and objective parts of reality. It takes two to tango, and this pair dances in a crazy-glue lockstep. Neither one can even be defined without the other. Kant knew it, Russell knew it, and even that hilarious Austrian Godel knew it. But quite a few other people didn't seem to know it. Take, for example, the vast majority of practicing logicians, mathematicians, and empirical scientists. Sure, a lot of them paid it the "correct" amount of lip service... apparently to apologize for not being able to make any sense out of it. But it's real clear and real unequivocal, and nobody has anybody but himself to blame if he talks the talk before he can walk the walk. The whiz kid, who'd started out like some kind of solipsist, had probably been as thoroughly sucked in by modern scientific pseudo-objectivism as all those Joe Average "dummies" he habitually surveyed from on high!

Even type theory itself was in some ways a bust. It was meant to rid logic and mathematics of paradox, but did so only by shifting paradox into the higher realm of undecidability theory. When Godel delivered that little punch line, Russell felt like the joke was on him. That, lamented the clown as he smeared mustard on his next kosher pork delight, was a shame. Had Jojo been around, he'd have reminded his good pal Bert of a cardinal rule of comedy: you have to learn to tune out the hecklers. Hecklers frequently knock comedians for telling jokes over their heads, and science is full of critics ready to pounce on any theory they're too dumb, lazy, or preoccupied to understand. Anybody who comes up with anything really brilliant in this world learns that from Jump Street.

Type Theory can't eliminate paradox, because paradox is a condition of human mentality. Every problem on every IQ test ever invented can be formulated as a paradox, and every solution is the resolution of a paradox. Man is a problem-creating, problem-solving animal. Eliminate paradox, and you eliminate human mentality. It was amazing, the lengths that some of these goofs would go to in order to avoid a little common sense! Russell could have yanked victory from the jaws of defeat merely by reformulating the purpose of his theory. Instead of claiming that it eliminated all logical paradox, he could simply have claimed that it placed certain logical restrictions on paradox-resolution...and therefore on solving the logico-mathematical and scientific problems that can be expressed in paradoxical terms. And guess what, whiz-kids of the world, thought Jojo as he horked down a gigantic bolus of garlic and gluten: that just so happens to cover problems like my little box of marbles. Patting his side, he was answered by the reassuring rattle of ten glass globules. He silently blessed each and every one of their stony, money-making little hearts.

Because paradox is a condition of subjective mentality, being the self-referential basis of temporal consciousness, it also applies to whatever is observed by the subject -- in other words, to so-called "objective reality". And to ice the cupcake, it also applies to the subject's conception of his relationship to objective reality! That was Godel's insight. Unfortunately, old Kurt, in his zeal to yank the banana peel out from under old Bert, went light on the scientific ramifications of his undecidability gag. This left the audience with a bunch of cracked and leaking coconuts where their wise and learned brains had been, and science was able to skate right along like nothing had happened. But see, something had gone down after all, and Jojo couldn't decide whether to laugh or die about it. But so what? he thought as he wiped his de-gloved paws on the window curtains next to his table. At least he knew he'd always be able to make a buck off it.

But hey, money wasn't everything, even in this city. There were other important things to remember. Like the purposive connection between the theories of Types and Relativity. See, Jojo wasn't the only famous Einstein who shared Russell's concern with logic. Herr Albert had something very similar in mind when he adopted the principle of locality in order to avoid causal paradoxes in which super-luminal agencies make observations and then change the way they happen. The basic idea was the same: effects cannot negate their causes. Causes can negate certain effects by way of promoting others, but not vice versa. And more to the point at hand, the output of a function or algorithm cannot post hoc tend to negate the function or its parameters...e.g., the initial distribution on which observations depend. Where the output has already been observed, this forces the symmetrization of all the "causes" (e.g., chromatic likelihoods) from which it may have resulted. Deduction eliminates possibilities; but whenever an inductive context can be arbitrarily extended relative to a given hypothesis, the inductive process must respect all possibilities by obedience to the theory of types.

In other words, what type theory said about Bayesian regression was this. Bayes' rule is an algorithm, or complex logical function ascribing one of a set of predicates (probabilities) to a variable causal argument. It was designed not only to derive a conditional probability, but to do so without contaminating relevant data with irrelevant assumptions. The latter amounts to the avoidance or prior resolution of any paradoxical inconsistency between such assumptions and the reality which is being measured: if you assume something that favors any particular outcome representing a fraction 1/n of possible outcomes, you will only be right 1/n of the time. That amounts to a paradox (n-1)/n of the time, a situation that has to be avoided wherever possible.

Type theory avoids it by directionalizing the inclusory or attributive dependencies in the formulation. A variable standing for a color is of higher type than one representing an object. A variable standing for a chromatic distribution is of higher type than one representing a color. And a variable standing for a probability distribution (of chromatic distributions of colors over objects) is of higher type than one representing a chromatic distribution. Each one of these variables gets to determine those beneath it, but not with or above it. So you can't take an assumption about a chromatic distribution -- say, that each color is or is not equally likely to be observed in sampling -- and try to determine the probability distribution from it. Since Bayes' rule requires initial information on the probability distribution, you have to forget about the likelihoods of specific colors and start from the probability distribution itself. That means that where you have no prior information on the probability distribution, you have to symmetrize or flatten it. Each chromatic distribution is thus equally likely, and the corresponding range of individual chromatic likelihoods is open and unbiased. See, Ma? No loops.
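One conventional way to cash out "symmetrize the probability distribution itself, not the individual color likelihoods" is the Beta-Binomial rule of succession; the sketch below leans on that standard machinery as an illustrative stand-in, not as the quoted author's own formalism. A flat prior over the unknown proportion of red leaves the per-color likelihood open to revision by data, whereas fixing "each color is equally likely per draw" hard-codes a value that no observation can move.

```python
from fractions import Fraction

def flat_prior_predictive(reds_seen, draws_seen):
    """Flat (symmetrized) prior over the unknown proportion of red marbles:
    Bayes' rule then yields the rule-of-succession estimate (r + 1) / (n + 2)."""
    return Fraction(reds_seen + 1, draws_seen + 2)

# Symmetrizing at the lower level instead ("each color equally likely per draw")
# pins the predictive probability at 1/2 regardless of what is observed.
fixed = Fraction(1, 2)

for reds, draws in [(0, 0), (3, 4), (9, 10)]:
    print(draws, "draws,", reds, "red:",
          flat_prior_predictive(reds, draws), "vs fixed", fixed)
```

With no data the two agree at 1/2, but after nine reds in ten draws the higher-order symmetrization has moved to 5/6 while the lower-order assumption is stuck where it started.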

Jojo recalled a related question involving the states of systems in probabilistic contexts. Prior to extended observation, how do you know how to partition the set of possibilities you intend to symmetrize? Take an urn containing two marbles. What's the probability that the marbles have different colors? Are the possible states enumerated combinatorially: xx, xy, yy? Or permutatively: xx, xy, yx, yy? Jojo chuckled, almost choking on a pickled egg. Probabilities, states, and difference relations compute solely as information. In CTMU probability theory, information about states is subject to the same constraints as that expressed in terms of states. There are 3 states in a combinatorial urn. There are 4 states in a permutative urn. Since 4 > 3, the permutative state-set has more information. But just how has this extra permutative info been acquired? The down and dirty truth of it is, it hasn't been "acquired". It's been fabricated.

Most probabilistic algorithms need at least combinatorial info to work. If you're going to go without observation and fabricate information about states, it has to be at least combinatorial. But you don't dare help yourself to extra unconfirmed info! So the urn's states are initially combinatorial, and p = 1/3. In other words, states defined on individual objects and ordered relations among objects don't get to determine higher-order combinatorial predicates of whole sets. Only observation is "high enough" to do that. And that, concluded Jojo as he shoveled ice cream in a last-ditch attempt to quench the egg's fiery -- no, radioactive -- aftertaste, was that. Glory Be to the CTMU unification of probability theory and higher-order predicate logic! Trying to get over on the CTMU is sort of like trying to outsmart yourself: you lose if you win, and win if you lose. The clown wheezed pitifully. Where the heck could a mere egg-pickler be getting plutonium-239?
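The urn arithmetic in the last two paragraphs can be checked by brute enumeration; the sketch below simply restates the quoted state counts, with the permutative answer included for contrast.

```python
from fractions import Fraction

# Two marbles, two possible colors x and y, and no observations yet.

# Combinatorial states ignore order: {xx, xy, yy}.
combinatorial = ["xx", "xy", "yy"]
p_diff_comb = Fraction(sum(1 for s in combinatorial if s[0] != s[1]),
                       len(combinatorial))

# Permutative states track order: {xx, xy, yx, yy}.
permutative = ["xx", "xy", "yx", "yy"]
p_diff_perm = Fraction(sum(1 for s in permutative if s[0] != s[1]),
                       len(permutative))

print(p_diff_comb)  # 1/3, the passage's answer from combinatorial information only
print(p_diff_perm)  # 1/2, what the extra, unobserved ordering information would give
```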

The necessity for combinatorial information in purely subjective computations is a demand of our cognitive syntax...the way we see the world and organize it mentally. Such demands determine all the mathematical models we construct to represent and simulate reality. Because they inhere in every such model, they are a basic ingredient in all calculations based on such models, "objective probabilities" included. The fact is, since all probabilities have subjective components, there are no grounds for refusing to recognize subjective probabilities as "real". Is the above probability highly subjective? Yeah. Is it only relatively valid? You know it. But while some bananas are strictly by Goodyear, a chimp has to eat. And the only way to chew this particular bunch is to recognize that all probabilities regress, on data reduction, to statements about the semantical relationship of subjective cognitive syntaxes to that of "objective reality". Such statements, if extremely general, nonetheless express a real relationship between two realities, you and your environment. The only way they aren't real is if you aren't real. And if you're reading this, you're as real as Cheetah, Bonzo and all their offspring combined.

The Bayes algorithm is formulated in a certain metalanguage, and its variables occupy specific levels within it. Any universe to which the rule is applicable must conform semantically to its internal stratification. By virtue of type, the dependency relationships among Bayesian variables are asymmetric and directional. This also applies to more specialized rules like the Schrodinger equation, which are merely empirical-slash-subjective (a posteriori/a priori, mathematical) evolutions of Bayes' rule under given kinds of empirical input. In this sense, Bayes' rule is the universal first step in scientific inference, and associated at a very deep level with the human mental syntax itself. That's why Bayesian inference, Bayesian paradox, and Bayesian regression are central issues in reality research, and why anybody who complains that they're bored or irritated with these topics belongs on Monkey Island with the rest of the big time mental heavyweights. And that, mused Jojo, goes double for anybody unwilling to grasp the topic at even the general level on which he was thinking about it. He was glad that he could keep his thoughts to himself instead of having to run a hard-sell on some bunch of self-styled "experts".
...
See, everybody's an expert, and everybody has his or her own ideas about what is or is not "rational", "imaginable" (see Noesis 58, p. 17), "possible" (see Noesis 46), "probable", and so on for every subjective context. This can apply even to hard, extensional contexts like truth and certainty. But Jojo had noticed that whenever people bother to justify such opinions, they always fail to come up with anything remotely resembling a formal derivation. That's because no categorical derivation is possible with respect to anything but the most general features of the human cognitive syntax. Instead, they offer "plausible arguments" on which there "seems to be a consensus" among like-minded cognoscenti. Translation: mildly denatured crapola, with 99.9 percent certainty.

Jojo, not content to settle for any narrow field of expertise, was an expert on experts, and knew that no expert could determine diddley except relative to his own axioms and rules of inference. That naturally precludes any value judgment on anything involving additional axioms, rules, or data. Some things are easily recognizable as nonsense because they oppose known facts at identical levels of specificity: if somebody were to claim that the War of 1812 was fought in 1922, he'd plainly be a bag of gas. But as the situation gets more complex, it rapidly gets very difficult to make this kind of determination. If a critic isn't extremely careful, he risks becoming just another old curmudgeon with antiquated notions about what rules the universe is "required to obey".

Take Newcomb's paradox, a generic anomaly designed to cut off the ring on such lamebrain arguments. Newcomb's paradox is just an arbitrary version of a situation that has arisen, and will no doubt continue to arise, countless times in the always-surprising annals of science: data is received which appears to contradict a currently accepted set of axioms and rules of inference. That much is obvious, since otherwise there would be no point to it. But the clever way in which the formulation evokes the issues of free will, rationality, and megabucks seems to have short-circuited the faculties of many of those who tried to solve it. They treated it as nothing but a real or phony get-rich-quick scheme instead of as a paradigm for empirical and theoretical induction! Even when this big bad paradox was finally resolved by Jojo's personal amigo and sometimes-biographer Chris Langan, everybody played ostrich, 'possum, and downright deaf and dumb rather than admit they might have "overlooked" something. Given the fact that some of them claimed ultrahigh IQ's and had a lot to lose by imitating the furniture, Jojo found this...well, unimaginable, improbable, and incredible. But true.

Langan had belonged to a society of whiz-kids when he resolved Newcomb's paradox and certain other notorious inconsistencies. The poor guy had been led to believe that he would thus be assured of a fair and insightful hearing for his ideas. But he'd been in for a minor disappointment. Jojo recalled a few notable examples: one member had implied that Langan's ideas were less than original because one partial aspect of them had been "proposed" by somebody else who apparently had a limited grasp of the logico-mathematical complexities they entailed...many of which nobody but Langan seems to have even considered. Another had adopted the computative terminology of Langan's Resolution to undermine the work itself, an exercise in one-man foot-shooting. And yet another had denigrated Langan's contributions because "others" had expressed disapproval, telling him that his request for an apology amounted to "killing the messenger"!

Instead of presenting their arguments in ways that invited a response, these critics all seemed to dismiss out of hand the idea that any suitable response would be possible. The fact that none of them offered so much as one unqualified sentence in support of his concise and incisive analyses had sort of bugged Langan. But it really shouldn't have. Because if he'd known whiz-kids like Jojo knew whiz-kids, he'd have expected nothing but the kind of meaningless and circular bickering he got. And he'd have made a laff riot out of it, just like Jojo with his whiz-kid.

Then again, Langan's critics didn't have to look so bad. Some of their remarks were relatively insightful. The problem was, they offered their criticisms after Langan had already answered them! For example, it was remarked -- after Langan had made it clear that the standard type-theoretic resolution of the Epimenides paradox could be transformed into a resolution based on CTMU many-valued logic upon real time violation -- that this paradox "would really lead to a (CTMU-invalidating) contradiction" if one were forced to assign one of only two truth values to every statement. One of the main ideas of the CTMU, if Jojo wasn't badly mistaken, was to define a paradox-resolvent stratification of truth functions generating a potential infinity of truth values (top paragraph, page 9, Noesis 44)! Good thing Langan had a sense of humor.

Part of the beauty of the CTMU was the way many-valued (modal) logic was so naturally applied to reality in the stratified computational model. The two-valuedness of the human accepting syntax delineates only that part of global reality constrained by human conceptual and observational limitations. For each level γn of γ, there is an nth truth value. Truth values are related according to the inductive and deductive relationships holding within and among the levels of γ. Langan had made this as clear as possible, given limitations in space and reader attention.

Someone else had made the observation that using the CTMU to resolve Newcomb's paradox was "like using a B-2 for crop dusting"! Newcomb's paradox, a conundrum relating physics, decision theory, and the philosophy of free will within a metaphysical matrix, had defied resolution for over twenty years by the time Langan wrapped it up. The reason? Physics, decision theory, and the philosophy of free will is to crop dusting what botanical genetic engineering is to spreading fertilizer. Along with the other matters to which it was closely related -- e.g., quantum reality -- Newcomb's paradox had all but smothered to death under a two-decade accumulation of "fertilizer" before Chris Langan rescued it with his CTMU. It was even denied that quantum non-locality confirmed the CTMU, despite the equation of non-locality and metrical violation. Langan had already explained that metrics, being parts of the computative syntaxes of dynamical transducers, could be computationally relativized to nested transducers in such a way as to resolve these violations while affording an elegant explanation of quantum wave function collapse. Man oh man, the trials that Chris Langan had been made to endure! Genius could be a long row to hoe for someone with no fat cat academic "mentors" willing to sponsor his radically superior ideas for a prestigious and potentially lucrative slice of the credit (funny, thought Jojo, how the sacred responsibility of such mentors to "protect the integrity of their disciplines" so often seemed to line up with the protection of their personal funding and reputations). The clown suffered silently on behalf of his sadly misunderstood buddy, who had foolishly dared to weigh in on the scales of truth with no more than the certainty of being right. Talk about your Don Quixotes!"
http://www.megasociety.net/noesis/63.htm

Good and real: demystifying paradoxes from physics to ethics

"Stripped of any intellectual pretension, philosophy just grapples with the ancient, eartfelt questions What the fuck is going on here? and What the fuck am I doing here? -- what is the nature of reality, and what is the nature and purpose (if any) of our place in it? We all strive to answer such questions somehow or other. Here goes one more try."
http://tinyurl.com/demystifying-paradoxes-from-ph

Book review: Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher.

"This book tries to derive ought from is. The more important steps explain why we should choose the one-box answer to Newcomb’s problem, then argue that the same reasoning should provide better support for Hofstadter’s idea of superrationality than has previously been demonstrated, and that superrationality can be generalized to provide morality. He comes close to the right approach to these problems, and I agree with the conclusions he reaches, but I don’t find his reasoning convincing.

He uses a concept which he calls a subjunctive relation, which is intermediate between a causal relation and a correlation, to explain why a choice that seems to happen after its goal has been achieved can be rational. That is the part of his argument that I find unconvincing. The subjunctive relation behaves a lot like a causal relation, and I can’t figure out why it should be treated as more than a correlation unless it’s equivalent to a causal relation.

I say that the one-box choice in Newcomb’s problem causes money to be placed in the box, and that superrationality and morality should be followed for similar reasons involving counterintuitive types of causality. It looks like Drescher is reluctant to accept this type of causality because he doesn’t think clearly enough about the concept of choice. It often appears that he is using something like a folk-psychology notion of choice that appears incompatible with the assumptions of Newcomb’s problem. I expect that with a sufficiently sophisticated concept of choice, Newcomb’s problem and similar situations cease to seem paradoxical. That concept should reflect a counterintuitive difference between the time at which a choice is made and the time at which it is introspectively observed as being irrevocable. When describing Kavka’s toxin problem, he talks more clearly about the concept of choice, and almost finds a better answer than subjunctive relations, but backs off without adequate analysis.
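For readers who want the arithmetic behind the reviewer's one-box stance, here is the standard expected-value comparison under the usual Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one). The accuracy values are illustrative assumptions, and the calculation is the textbook one rather than anything specific to Drescher's subjunctive relations.

```python
from fractions import Fraction

def expected_payoffs(p_correct):
    """Expected dollars for each strategy when the predictor is right with
    probability p_correct: the opaque box holds $1,000,000 iff one-boxing was
    foreseen; the transparent box always holds $1,000."""
    one_box = p_correct * 1_000_000
    two_box = p_correct * 1_000 + (1 - p_correct) * 1_001_000
    return one_box, two_box

for p in [Fraction(1, 2), Fraction(9, 10), Fraction(99, 100)]:
    print(p, expected_payoffs(p))

# One-boxing pulls ahead in expectation once the predictor is right more than
# about 50.05% of the time.
```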

The book also has a long section explaining why the Everett interpretation of quantum mechanics is better than the Copenhagen interpretation. The beginning and end of this section are good, but there’s a rather dense section in the middle that takes much effort to follow without adding much."
http://www.bayesianinvestor.com/blog/index.php/tag/superrationality/