Saturday, January 08, 2011

Isoperimetric Inequalities, General Covariance, Autologic Selection Parameters, Corecursive Substitutability, Equifinal Symmetry



Image: http://news.nationalgeographic.com/news/2010/02/photogalleries/100201-fluid-motion-winners-pictures/

"The theory of groups is a branch of mathematics in which one does something to something and then compares the results with the result of doing the same thing to something else, or something else to the same thing."
http://www.wordiq.com/definition/Group_theory
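To make the "doing something to something and comparing" flavor of that definition concrete, here is a minimal Haskell sketch of my own (not from the quoted page; the names r, s and sameAs are illustrative): two symmetries of a square, treated as functions on corner labels, composed and compared for equality.

-- Symmetries of a square as permutations of its corner labels 0..3.
type Perm = Int -> Int

r, s :: Perm
r k = (k + 1) `mod` 4           -- rotate by a quarter turn
s k = (4 - k) `mod` 4           -- reflect across the 0-2 diagonal

-- "Doing the same thing to something else" and comparing: two symmetries
-- count as equal when they agree on every corner.
sameAs :: Perm -> Perm -> Bool
sameAs f g = map f [0 .. 3] == map g [0 .. 3]

main :: IO ()
main = do
  print (sameAs (r . r . r . r) id)   -- True: four quarter turns do nothing
  print (sameAs (r . s) (s . r))      -- False: the order of operations matters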

"I can think of no more striking example of isotely, the attainment of the same end by different methods in different groups, than these manifold methods of producing a single color." http://tinyurl.com/4u6p6kd

Parity is a concept of equality of status or functional equivalence. It has several different specific definitions.
http://www.thefullwiki.org/parity

Isotelic (adj.): Referring to factors that produce, or tend to produce, the same effect:
http://tinyurl.com/4s42e2a

Equifinal: That have the same outcome, end or result
http://en.wiktionary.org/wiki/equifinal

Corecursion: The dual to recursion, that acts on the computed result, rather than the input.
http://en.wiktionary.org/wiki/corecursion

"This equation states that iterating a function and then mapping it gives the same result as applying the
function and then iterating it, namely an infinite list of the form:

f x : f (f x) : f (f (f x)) : f (f (f (f x))) : ···

But how can the map-iterate property be proved? Note that the standard method of structural induction on lists is not applicable, because there is no list argument over which induction can be performed. In the remainder of the article we review and compare the four main methods that can be used to prove properties of corecursive functions, using the map-iterate property as our running example."
http://www.cs.nott.ac.uk/~gmh/corecursion.pdf
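As a small check of the map-iterate property quoted above, here is a Haskell sketch of my own (mapIterateAgree is an illustrative name, not from the paper). Since both lists are infinite, the code can only compare finite prefixes, which is exactly why the paper turns to coinduction and related proof methods rather than testing.

-- map f (iterate f x) and iterate f (f x) both denote the infinite list
-- f x : f (f x) : f (f (f x)) : ...
mapIterateAgree :: Eq a => (a -> a) -> a -> Int -> Bool
mapIterateAgree f x n =
  take n (map f (iterate f x)) == take n (iterate f (f x))

main :: IO ()
main = print (mapIterateAgree (* 2) (1 :: Integer) 20)   -- True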

"This paper gives a treatment of substitution for "parametric" objects in final coalgebras, and also presents principles of definition by corecursion for such objects. The substitution results are coalgebraic versions of well-known consequences of initiality, and the work on corecursion is a general formulation which allows one to specify elements of final coalgebras using systems of equations. One source of our results is the theory of hypersets, and at the end of this paper we sketch a development of that theory which calls upon the general work of this paper to a very large extent and particular facts of elementary set theory to a much smaller extent."
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.7061

"The theory of motives was introduced by Grothendieck as a way to explain cohomological arguments in algebraic geometry (and especially such theorems as independence of the results from the specific cohomology theory used) geometrically.

Integration, at first at least, really meant integration in the Lebesgue sense. Motivic integration interpolates p-adic integration. In the more recent developments, the integrals have a more formal character along the lines of the integrals in de Rham cohomology rather than those defined measure theoretically."
...
Most of us here are aware that motivic integration is a deep subject and owes much of its efficacy to model theory, but even after nearly a decade and a half of conference presentations and seminars its aims and inner workings have not been internalized by model theorists."
http://math.berkeley.edu/~scanlon/papers/scanlon_durham_motivic_integration_outsiders_tutorial.pdf

Purposefulness: "correspondence of a phenomenon or a process to a specific (relatively complete) condition whose concrete or abstract model assumes the form of a goal, or purpose. Purposefulness is regarded on the one hand as the immanent, or internal, relationship of an object to itself, and on the other hand as a certain relationship in the realm of the interaction of object and subject. As a relationship that is characteristic of human activity, purposefulness may also be defined as a scientific principle for the study of the structure and functions of self-regulating and equifinal systems (that is, systems that can yield the same end results regardless of initial conditions)."
http://encyclopedia2.thefreedictionary.com/sense+of+purpose

"The virtues of humanity are many but science is the most noble of them all. The distinction which man enjoys above and beyond the station of the animal is due to this paramount virtue. It is a bestowal of God; it is not material, it is divine. Science is an effulgence of the Sun of Reality, the power of investigating and discovering the verities of the universe, the means by which man finds a pathway to God. All the powers and attributes of man are human and hereditary in origin, outcomes of nature's processes, except the intellect, which is supernatural. Through intellectual and intelligent inquiry science is the discoverer of all things. It unites present and past, reveals the history of bygone nations and events, and confers upon man today the essence of all human knowledge and attainment throughout the ages. By intellectual processes and logical deductions of reason, this super-power in man can penetrate the mysteries of the future and anticipate its happenings." `Abdu'l-Bahá on Science and Religion
http://info.bahai.org/article-1-5-3-1.html

"The process of reducing distinctions to the homogeneous syntactic media that support them is called syndiffeonic regression. This process involves unisection, whereby the rules of structure and dynamics that respectively govern a set of distinct objects are reduced to a “syntactic join” in an infocognitive lattice of syntactic media. Unisection is a general form of reduction which implies that all properties realized within a medium are properties of the medium itself.

Where emergent properties are merely latent properties of the teleo-syntactic medium of emergence, the mysteries of emergent phenomena are reduced to just two: how are emergent properties anticipated in the syntactic structure of their medium of emergence, and why are they not expressed except under specific conditions involving (e.g.) degree of systemic complexity?"
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

"I was thinking about the recent discussion of anticipatory systems and the way it touches upon aspects of such systems which seem to have teleological overtones.

It brought to mind a reference in Essays by Rosen:

"...it was early recognized that the variational principles of physics, which we reviewed above, seems already to violate this maxim; we need both a present and a future configuration to determine a path. Thus, the Principle of Least Action, say, which is at the very heart of theoretical mechanics, looks more telic than mechanics itself allows. This has always bothered people, and many have taken the trouble to try and rationalize it away on various grounds, which we need not pause to review here. But these facts point to a perhaps deep relationship between the nature of optimality principles in general and the things we do not understand about organic phenomena." [EL 216]

This led me to review Ernst Mach in his book "The Science of Mechanics", on the topic of the Principle of Least Action. Mach remarks:

"Euler's view is, that the purposes of the phenomena of nature afford as good a basis of explanation as their causes. If this position be taken, it will be presumed a priori that all natural phenomena present a maximum or minimum. Of what character this maximium or minimum is, can hardly be ascertained by metaphysical speculations. But in the solution of mechanical problems by the ordinary methods, it is possible, if the requisite attention be bestowed on the matter, to find the _expression_ which in all cases is made a minimum or maximum. Euler is thus not led astray by metaphysical propensities, and proceeds much more scientifically than Maupertuis. He seeks an _expression_ whose variation put=0 gives the ordinary equations of mechanics." [ch. III, sec VIII.5]

This view of purpose as a basis of explanation seems striking to me. This brings up an intriguing notion, stated 3 different ways below:

- Is there some expression which can be ascertained and which will be a minimum (or maximum) for biological organisms?
- Is there some variational principle at work (no pun intended) for organisms?
- Is there an overarching "purpose" (in the sense in which Mach and Euler use the term) which organisms abide by, and can thus be discerned "if the requisite attention be bestowed on the matter"?"
http://www.panmere.com/rosen/mhout/msg00334.html
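As a rough numerical gloss on Mach's "expression whose variation put = 0" and on the questions above, here is a toy Haskell sketch of my own (the free-particle setup, the step size and the wiggle are all illustrative assumptions): the discretized action of a free particle, evaluated on the straight path between two fixed endpoints and on a perturbed path with essentially the same endpoints. The straight path comes out smaller, as the minimum principle demands.

-- Discretized action of a free particle: S = sum of (m/2) * (dx/dt)^2 * dt.
action :: Double -> Double -> [Double] -> Double
action m dt xs =
  sum [ 0.5 * m * ((x1 - x0) / dt) ^ 2 * dt | (x0, x1) <- zip xs (tail xs) ]

main :: IO ()
main = do
  let dt       = 0.1
      times    = [0, dt .. 1.0]
      straight = times                                        -- x(t) = t
      wiggled  = [ t + 0.05 * sin (2 * pi * t) | t <- times ] -- same endpoints
  print (action 1 dt straight)   -- approximately 0.5
  print (action 1 dt wiggled)    -- strictly larger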

"The law of least action is oftentimes described as an economics principle for nature. Whatever happens in nature “economizes” – uses the least amount of – “action.” Maupertuis saw in his principle physical evidence for the existence of God. He wrote, “What satisfaction for the human spirit that, in contemplating these laws which contain the principle of motion and of rest for all bodies in the universe, he finds the proof of existence of Him who governs the world.” In writing Critique of the Teleological Power of Judgment, Kant set out to debunk this and other similarly specious applications of teleology in science.

When one thus introduces into the context of natural science the idea of God in order to make expedience in nature explicable, and subsequently uses this expedience in turn to prove that there is a God, then there is no intrinsic understanding of both sciences, and a deceptive diallelus brings each into uncertainty by way of letting their boundaries overlap [KANT5c: 253 (5: 381)].

Present day physics, of course, agrees with this statement and does so to such a complete degree that the subject is largely regarded as not even worth mentioning. The modern day heir to Maupertuis’ principle is Hamilton’s principle in its various formulations (among which is one form still known as the principle of least action). As one graduate level physics textbook notes,

The principle of least action is commonly associated with the name of Maupertuis, and some authors call the integral involved the Maupertuis action. However, the original statement of the principle by Maupertuis was vaguely teleological and could hardly pass muster today. The objective statement of the principle we owe to Euler and Lagrange.

Hamilton’s principle is one of two principles regarded by modern day physics as being at or very near to the most fundamental of all physical explanations. Feynman remarked:

. . . [In] Einstein's generalization of gravitation Newton's method of describing physics is hopelessly inadequate and enormously complicated, whereas the field method is neat and simple, and so is the minimum principle. We have not decided between the last two yet.

In fact it turns out that in quantum mechanics neither is right in exactly the way I have stated them, but the fact that a minimum principle exists turns out to be a consequence of the fact that on a small scale particles obey quantum mechanics. The best law, as presently understood, is really a combination of the two in which we use the minimum principle plus local laws. At present we believe the laws of physics have to have the local character and also the minimum principle, but we do not really know . . .

One of the amazing characteristics of nature is the variety of interpretational schemes which is possible. It turns out that it is only possible because the laws are just so, special and delicate. For instance, that the law is the inverse square is what permits it to be local; if it were the inverse cube it could not be done that way. At the other end of the equation, the fact that the force is related to the rate of change of velocity is what permits the minimum principle way of writing the laws. If, for instance, the force were proportional to the rate of change of position instead of velocity, then you could not write it in that way. If you modify the laws much you find that you can only write them in fewer ways. I always find that mysterious, and I do not understand the reason why it is that the correct laws of physics seem to be expressible in such a tremendous variety of ways. They seem to get through several wickets at the same time [FEYN2: 54-55]."
http://www.mrc.uidaho.edu/~rwells/Critical%20Philosophy%20and%20Mind/Chapter%2016.pdf

"S. Majid: On the Relationship between Mathematics and Physics:

In ref. [7], S. Majid presents the following 'thesis': "(roughly speaking) physics polarises down the middle into two parts, one which represents the other, but that the latter equally represents the former, i.e. the two should be treated on an equal footing. The starting point is that Nature after all does not know or care what mathematics is already in textbooks. Therefore the quest for the ultimate theory may well entail, probably does entail, inventing entirely new mathematics in the process. In other words, at least at some intuitive level, a theoretical physicist also has to be a pure mathematician. Then one can phrase the question 'what is the ultimate theory of physics?' in the form 'in the tableau of all mathematical concepts past present and future, is there some constrained surface or subset which is called physics?' Is there an equation for physics itself as a subset of mathematics? I believe there is and if it were to be found it would be called the ultimate theory of physics. Moreover, I believe that it can be found and that it has a lot to do with what is different about the way a physicist looks at the world compared to a mathematician...We can then try to elevate the idea to a more general principle of representation-theoretic self-duality, that a fundamental theory of physics is incomplete unless such a role-reversal is possible. We can go further and hope to fully determine the (supposed) structure of fundamental laws of nature among all mathematical structures by this self-duality condition. Such duality considerations are certainly evident in some form in the context of quantum theory and gravity. The situation is summarised to the left in the following diagram. For example, Lie groups provide the simplest examples of Riemannian geometry, while the representations of similar Lie groups provide the quantum numbers of elementary particles in quantum theory. Thus, both quantum theory and non-Euclidean geometry are needed for a self-dual picture. Hopf algebras (quantum groups) precisely serve to unify these mutually dual structures."
http://planetmath.org/encyclopedia/IHESOnTheFusionOfMathematicsAndTheoreticalPhysics2.html
http://philsci-archive.pitt.edu/3345/1/qg1.pdf

"An even more striking form of duality is encountered in graph theory, where the dual graph of a planar graph transforms faces to vertices and vertices to faces without disrupting its overall pattern of adjacencies. The boundary of each face is replaced by transverse edges converging on its dual vertex (and vice versa), and the adjacency relation is redefined accordingly. Where edges are given a temporal interpretation, interesting transformations can occur; e.g., circulations along facial boundaries become “vertex spins”, and motion along an edge can be characterized as an operation between the dual faces of its endpoints.

Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O, or S O) duality; the second, time-space (or T-S/O, or T S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents.

M=R, MU and hology are all at least partially based on duality.
...
The Telic Principle differs from anthropic principles in several important ways. First, it is accompanied by supporting principles and models which show that the universe possesses the necessary degree of circularity, particularly with respect to time. In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback. Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility.
...
The primary transducers of the overall language of science are scientists, and their transductive syntax consists of the syntax of generalized scientific observation and theorization, i.e. perception and cognition. We may therefore partition or stratify this syntax according to the nature of the logical and nonlogical elements incorporated in syntactic rules. For example, we might develop four classes: a class corresponding to the fundamental trio space, time and object; a class containing the rules of logic and mathematics; a class consisting of the perceptual qualia in terms of which we define and extract experience, meaning and utility from perceptual and cognitive reality; and a class accounting for more nebulous feelings and emotions integral to the determination of utility for qualic relationships. For now, we might as well call these classes STOS, LMS, QPS and ETS, respectively standing for space-time-object syntax, logico-mathematical syntax, qualio-perceptual syntax, and emo-telic syntax, along with a high-level interrelationship of these components to the structure of which all or some of them ultimately contribute. Together, these ingredients comprise the Human Cognitive-Perceptual Syntax or HCS."
http://www.ctmu.net/
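The face/vertex transposition described in the first paragraph of that passage can be tried directly. A hypothetical Haskell sketch of my own (Face, edgesOf and dualAdjacency are illustrative names): the faces of a planar graph become the vertices of its dual, two of them adjacent whenever the faces share an edge. For the tetrahedron the dual adjacency comes out as the complete graph K4, i.e. the tetrahedron is self-dual.

type Face = [Int]   -- a face given as a cycle of vertex labels

-- Undirected edges of a face, each stored with its endpoints in sorted order.
edgesOf :: Face -> [(Int, Int)]
edgesOf f = [ ordered a b | (a, b) <- zip f (tail f ++ [head f]) ]
  where ordered a b = (min a b, max a b)

shareEdge :: Face -> Face -> Bool
shareEdge f g = not (null [ e | e <- edgesOf f, e `elem` edgesOf g ])

-- Dual adjacency: one dual vertex per face, an edge wherever two faces meet.
dualAdjacency :: [Face] -> [(Int, Int)]
dualAdjacency fs =
  [ (i, j) | (i, f) <- zip [0 ..] fs
           , (j, g) <- zip [0 ..] fs
           , i < j
           , shareEdge f g ]

main :: IO ()
main = do
  let tetraFaces = [[0,1,2], [0,1,3], [0,2,3], [1,2,3]]
  print (dualAdjacency tetraFaces)   -- all six pairs: the dual is again K4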

"What if Physical Reality is no different, created by the rules of looking at the world as a Scientist? In other words, just maybe, as we search for the ultimate theory of physics we are in fact rediscovering our own assumptions in being Scientists, the Scientific Method?

To explain why I think so, we need to think about the nature of representation. Imagine a bunch of artists gathered around a scene X, each drawing their own representation of it from their angle, style and ethos. Any one bit x of the scene is represented by the artist f of the collection as maybe a fleck of paint on their canvas. Now, the amazing thing — and this is possibly the deepest thing I know in all of physics and mathematics — is that one could equally well view the above another way in which the ‘real thing’ is not the scene X, which might after all be just a useless bowl of fruit, but the collection, X* of artists. So it is not bits x of X being represented but rather it is the artists f in X*. Each point x of the fruit bowl can be viewed as a representation of X* in which the artist f is represented by the same fleck of paint as we had before. By looking at how different artists treat a single point x of the fruit bowl we can ‘map out’ the structure of the collection X*.

What this is is a deep duality between observer and observed which is built into the nature of representation. Whenever any mathematical object f represents some structure X we can equally well view an element x of X as representing f as an element of some other structure X*. The same numbers f(x) are written now as x(f). In mathematics we say that X and X* are dually paired. Importantly one is not more ‘real’ than the other.

So within mathematics we have this deep observer-observed or measurer-measured duality symmetry. We can reverse roles. But is the reversed theory equivalent to the original? In general no; the bowl of fruit scene is not equivalent to the collection of artists. But in physics perhaps yes, and perhaps such a requirement, which I see as coming out of the scientific method, is a key missing ingredient in our theoretical understanding.

To put some flesh on this, the kind of duality I am talking about in this post is typified in physics by position-momentum or wave-particle duality. Basically, the structure of addition in flat space X is represented by waves f. Here f is expressed numerically as the momentum of the wave. But the allowed f themselves form a space X*, called ‘momentum space’. The key to the revolution of quantum mechanics was to think of X and X* as equally real, allowing Heisenberg to write down his famous Heisenberg commutation relations between position and momentum. The key was to stop thinking of waves as mere representations of a geometrical reality X but as elements in their own right of an equally real X*. The idea that physics should be position-momentum symmetric was proposed by the philosopher Max Born around the birth of quantum mechanics and is called Born Reciprocity. This in turn goes back (probably) to ideas of Ernst Mach."
http://www.cambridgeblog.org/2008/12/reflections-on-a-self-representing-universe/
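The role reversal Majid describes ("The same numbers f(x) are written now as x(f)") is just the usual evaluation map into a double dual, and it fits in a few lines. A minimal Haskell sketch of my own, with Observable and eval as illustrative names rather than terminology from the post:

-- An "artist"/observable assigns a number to each point of the scene.
type Observable a = a -> Double

-- A point, read the other way around, acts on observables and returns
-- the very same numbers: eval x f = f x.
eval :: a -> (Observable a -> Double)
eval x f = f x

main :: IO ()
main = do
  let x = 3.0 :: Double
      f = \p -> p * p          -- one particular observable
  print (f x)        -- 9.0: the observable applied to the point
  print (eval x f)   -- 9.0: the point applied to the observable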

"Teleology requires a "bracket out the object" methodology, as opposed to the "bracket out the subject" methodology of hard science. Information theory is ultimately a branch of epistemology, which brackets out neither the object or the subject, but instead studies the interface between them. The same subject-matter (e.g., biological systems) can be studied from the radically diverse points of view of teleology, science, and information theory. Nevertheless these radically diverse viewpoints can be related to one another by means of analogs.

For example, the following are analogs: a "decision between two possible choices" in teleology, a "nonlinear bifurcation" in science, and a "bit" in information theory.

The following are also analogs: in teleology, a "conscious, decision-making being", in science, a "nonlinear, hierarchical, complex physical system characterized at all levels by both external and internal conditional equifinality"; in information theory, a "creator and knower of information".

Because neo-Darwinism is essentially a linear, stochastic/deterministic theory which locates all biological chance at the microscopic level (e.g., "random mutations") and all biological determinism at the macroscopic level (i.e., "natural selection", resulting in differential rates of reproduction and mortality), its teleological analog is empty and featureless (i.e., "meaningless"), moreover it can give no coherent non-tautological account of how biological information originates.

By contrast, Robert DeHaan's non-linear theory of evolution, called macrodevelopment, is rich in hierarchical teleological analogs and, via self-organization theory, is capable of immanently accounting for both the storage and creation of information by the biosphere. (The transcendent creation of this same information by a God having no complete analog within the physical universe is a valid, complementary teleological point-of-view.)"
http://www.fellowshipofstbenedict.org/Fosb_site/Teleology_and_Information_in_Biology.pdf

"In either case, human beings are integral parts of reality that contribute to its structure, and must either be using the inherent freedom of reality to do so, or be freely used as tools by some higher level of realty to the same end. So while the using-versus-used question remains up in the air, one fact has nevertheless been rationally established: whether it belongs exclusively to the universe or to man as well, free will exists. (Q.E.D.)" http://www.scribd.com/Christopher-Michael-Langan-The-Art-of-Knowing-Expositions-on-Free-Will-and-Selected-Essays/d/31783507

"Equifinality is the principle that in open systems a given end state can be reached by many potential means. The term is due to Ludwig von Bertalanffy, the founder of General Systems Theory. He prefers this term, in contrast to "goal", in describing complex systems' similar or convergent behavior. It emphasizes that the same end state may be achieved via many different paths or trajectories. In closed systems, a direct cause-and-effect relationship exists between the initial condition and the final state of the system: When a computer's 'on' switch is pushed, the system powers up. Open systems (such as biological and social systems), however, operate quite differently. The idea of equifinality suggests that similar results may be achieved with different initial conditions and in many different ways. This phenomenon has also been referred to as isotelesis (Greek: ἴσος /isos/ "equal", τέλεσις /telesis/ "the intelligent direction of effort toward the achievement of an end.") when in games involving superrationality.

In business, equifinality implies that firms may establish similar competitive advantages based on substantially different competencies.

In psychology, equifinality refers to how different early experiences in life (e.g., parental divorce, physical abuse, parental substance abuse) can lead to similar outcomes (e.g., childhood depression). In other words, there are many different early experiences that can lead to the same psychological disorder.

In archaeology, equifinality refers to how different historical processes may lead to a similar outcome or social formation. For example, the development of agriculture or the bow and arrow occurred independently in many different areas of the world, yet for different reasons and through different historical trajectories. This highlights that generalizations based on cross-cultural comparisons cannot be made uncritically.

In geomorphology, the term equifinality indicates that similar landforms might arise as a result of quite different sets of processes.

In environmental modeling studies, and especially in hydrological modeling, two models are equifinal if they lead to an equally acceptable or behavioral representation of the observed natural processes. It is a key concept to assess how uncertain hydrological predictions are."
http://en.wikipedia.org/wiki/Equifinality
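A toy numerical illustration of the open-system reading of equifinality (my own sketch, not from the article; relax, endState and the relaxation rate are illustrative): a damped relaxation process reaches essentially the same end state from wildly different initial conditions, much like Barrow's stirred barrel of oil quoted further below.

-- Each step moves the state halfway toward the value 1.
relax :: Double -> Double
relax x = x + 0.5 * (1.0 - x)

-- Run the process long enough for the initial condition to be forgotten.
endState :: Double -> Double
endState x0 = iterate relax x0 !! 100

main :: IO ()
main = mapM_ (print . endState) [-100, 0.3, 42, 7e5]   -- each prints 1.0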

"Will computers become able to generate consciousness? Some people thinks so, but I am not among them. Neither do I believe that the imitation of some brain functions means that the living brain functions just like computers. In nature it is commonly possible to achieve a general result by several different means. Remember the old saying, "There is more than one way to skin a cat." In science this principle has the name "equifinality."" --Dwight J. Ingle, Is It Really So?, pg 25 http://advocacyagainstcensorship.com/quotes/qtscients3.html

"Another indicator of differences between the qualitative and quantitative traditions is the importance or lack thereof attributed to the concept of ‘‘equifinality’’ (George and Bennett 2005). Also referred to as ‘‘multiple, conjunctural causation’’ or just ‘‘multiple causation,’’ the concept of equifinality is strongly associated with the qualitative comparative analysis approach developed by Ragin (1987), and it plays a key role in how many qualitative scholars think about causal relationships. In contrast, discussions of equifinality are absent in quantitative work. If one were to read only large-N quantitative work, the word ‘‘equifinality’’ (or its synonyms) would not be part of one’s methodological vocabulary." http://www.jamesmahoney.org/articles/A%20Tale%20of%20Two%20Cultures%20Proofs.pdf

"Equifinality means that any given phenomenon may be explained in two or more ways...Ludwig von Bertalanffy seems responsible for naming this principle: "the same final state may be reached from different initial conditions and in different ways. This is what is called equifinality"...The physicist John Barrow's image is clear. "If you stir a barrel of thick oil it will rapidly settle down to the same placid state, no matter how it was first stirred"...And Richard Feynman, inventor of a way to "calculate the probability of [a quantum] event that can happen in alternative ways", says, "it is possible to start from many apparently different starting points, and yet come to the same thing" In the social sciences, Weber is a leading exponent of equifinality...Emile Durkheim has the same idea: "to arrive at the same goal, many different routes can be, and in reality are, followed."

The term equi-initiality is used here to designate the converse of equifinality; it means that more than one consequence may be produced from any given ("initial") phenomenon. We speak of coming to a "turning point," a "cross-roads," or a "crisis," from which alternative consequences can be expected to flow -- depending, usually, on some ultimate infinitesimal factor ("the Butterfly Effect," "For want of a nail"). Equi-initiality is what Barrow has in mind when he argues for the "non-uniqueness of the future states of a system following the prescription of a definite starting state"."
The Mark of the Social: Discovery or Invention?

"In the most general sense, equifinality is the case where different conditions lead to similar effects. ... In response to the problems associated with equifinality, there have been many calls for simpler models (e.g. Beven, 1996a,b; Kirkby, 1996; Young et al., 1996). Parsimony in science has been a virtue for some time (e.g. William of Occam advocated it in the fourteenth century). However, some systems are not simple (e.g. linear) enough to justify simple models for all applications. A model cannot adequately capture processes and process interactions that are not included in its formulation. For example, black (or any colour) box models cannot provide process-based diagnostics within the box. Our perception of the best use of physics-based hydrologic-response simulation was lucidly characterized by Kirkby (1996):

Models are thought experiments which help refine our understanding of the dominant processes acting. . .While most simulation models may be used in a forecasting mode, the most important role of models is as a qualitative thought experiment, testing whether we have a sufficient and consistent theoretical explanation of physical processes. The best model can only provide a possible explanation which is more consistent with known data than its current rivals. Every field observation, and especially the more qualitative or anecdotal ones, provides an opportunity to refute, or in some cases overturn, existing models and the theories which lie behind them." http://pangea.stanford.edu/~keith/119.pdf

A Spatial View of Information

"Spatial representation has two contrasting but interacting aspects (i) representation of spaces’ and (ii) representation by spaces. In this paper we will examine two aspects that are common to both interpretations of the theme of spatial representation, namely nerve-type constructions and refinement. We consider the induced structures, which some of the attributes of the informational context are sampled.

Key words: Chu spaces, sampling, formal context, nerves, poset models,
biextensional collapse."
http://www.maths.bangor.ac.uk/research/ftp/cathom/06_09.pdf

"Cognitive-perceptual syntax consists of (1) sets, posets or tosets of attributes (telons), (2) perceptual rules of external attribution for mapping external relationships into telons, (3) cognitive rules of internal attribution for cognitive (internal, non-perceptual) state-transition, and (4) laws of dependency and conjugacy according to which perceptual or cognitive rules of external or internal attribution may or may not act in a particular order or in simultaneity."
http://koinotely.blogspot.com/2010/09/teleologic-evolution-and-intelligent.html

"The bulk of theoretical and empirical work in the neurobiology of emotion indicates that isotelesis—the principle that any one function is served by several structures and processes—applies to emotion as it applies to thermoregulation, for example (Satinoff, 1982)...In light of the preceding discussion, it is quite clear that the processes that emerge in emotion are governed not only by isotelesis, but by the principle of polytelesis as well. The first principle holds that many functions, especially the important ones, are served by a number of redundant systems, whereas the second holds that many systems serve more than one function. There are very few organic functions that are served uniquely by one and only one process, structure, or organ. Similarly, there are very few processes, structures, or organs that serve one and only one purpose. Language, too, is characterized by the isotelic and polytelic principles; there are many words for each meaning and most words have more than one meaning. The two principles apply equally to a variety of other biological, behavioral, and social phenomena. Thus, there is no contradiction between the vascular and the communicative functions of facial efference; the systems that serve these functions are both isotelic and polytelic."
http://psychology.stanford.edu/~lera/273/zajonc-psychreview-1989.pdf

"Maintained as a variable, time can be conventionally represented along a line in a diagram, simultaneously with variables that denote distances. But other variables, such as temperature, pressure, statistical aggregates, all of them as completely distinct from space-variables as is time, can be (and are) conventionally represneted also along lines in diagrams. However convenient such diagrammatic uses may be for mathematical investigation, they do not constitute either time, temperature, or potential, or pressure, or any statistical aggregate, as a dimension isotelic with the ordered dimensions of space.

Again, to cite the purely mathematical use of the diagrammatic practice of substituting the term dimension for the term variable, it is a commonplace that the aggregate of all spheres in Euclidean triple space requires four independent variables for its mathematical expression,(...)"
http://tinyurl.com/geometryoffourdimensions

"In other words, (1) where informational distinctions regarding a system X are regarded as instantiations of law, they can also be regarded as expressions conforming to syntax; and (2) the expression of differences requires a unified expressive syntax (or set of “laws”), and this syntax must distribute over the entire set of differential expressions (or “instantiations of law”). E.g., where X is a “perceptual intersect” consisting of generally recognizable objects, attributes and events, the laws of perception must ultimately be constant and distributed. Where a putative nomological difference exists for some pair of loci (A,B), reductive syntactic covariance applies due to the need for an expressive medium, and where no such difference exists for any pair of loci (A,B), syntactic covariance applies a fortiori with no need for reduction.

Syndiffeonic relations can be regarded as elements of more complex infocognitive lattices with spatial and temporal (ordinal, stratificative) dimensions. Interpreted according to CTMU duality principles, infocognitive lattices comprise logical relationships of state and syntax. Regressing up one of these lattices by unisection ultimately leads to a syntactic medium of perfect generality and homogeneity…a universal, reflexive “syntactic operator”.

In effect, syndiffeonesis is a metalogical tautology amounting to self-resolving paradox. The paradox resides in the coincidence of sameness and difference, while a type-theoretic resolution inheres in the logical and mathematical distinction between them, i.e. the stratificative dimension of an infocognitive lattice. Thus, reducing reality to syndiffeonesis amounts to “paradoxiforming” it. This has an advantage: a theory and/or reality built of self-resolving paradox is immunized to paradox.

So far, we know that reality is a self-contained syndiffeonic relation. We also have access to an instructive sort of diagram that we can use to illustrate some of the principles which follow. So let us see if we can learn more about the kind of self-contained syndiffeonic relation that reality is."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

General covariance and the foundations of general relativity: eight decades of dispute

Einstein offered the principle of general covariance as the fundamental physical principle of his theory of general relativity, and as responsible for extending the principle of relativity to accelerated motion. This view was disputed almost immediately with the counter-claim that the principle was no relativity principle and was physically vacuous. The disagreement persists today.
http://www.pitt.edu/~jdnorton/papers/decades_re-set.pdf

"In his general theory of relativity (GR) Einstein sought to generalize the special-relativistic equivalence of inertial frames to a principle according to which all frames of reference are equivalent. He claimed to have achieved this aim through the generalcovariance of the equations of GR. There is broad consensus among philosophersof relativity that Einstein was mistaken in this. That equations can be made tolook the same in different frames certainly does not imply in general that suchframes are physically equivalent. We shall argue, however, that Einstein’s positionis tenable. The equivalence of arbitrary frames in GR should not be equated withrelativity of arbitrary motion, though. There certainly are observable differencesbetween reference frames in GR (differences in the way particles move and fieldsevolve). The core of our defense of Einstein’s position will be to argue that suchdifferences should be seen as fact-like rather than law-like in GR. By contrast, inclassical mechanics and in special relativity (SR) the differences between inertialsystems and accelerated systems have a law-like status. The fact-like character of the differences between frames in GR justifies regarding them as equivalent in the same sense as inertial frames in SR."
http://www.phys.uu.nl/igg/dieks/equivalencecopyedited.pdf

"Lecture 4: General Covariance and gauge theories

4.1 What’s special about GTR?

Since the theory’s inception, the general covariance of GTR has often been taken to be one of its substantial, defining characteristics. But equally, since Kretschmann’s (1917) response to Einstein’s original claims for general covariance, many see the requirement that a theory should be formulated generally covariantly as physically empty: (virtually?) any theory can be so formulated. Must one of these two points of view be rejected, or can they be reconciled?

'General relativity is distinguished from other dynamical field theories by its invariance under active diffeomorphisms. Any theory can be made invariant under passive diffeomorphisms. Passive diffeomorphism invariance is a property of the formulation of a dynamical theory, while active diffeomorphism invariance is a property of the dynamical theory itself.' (Gaul & Rovelli 2000)

One task for this lecture is to understand and assess this claim."
http://users.ox.ac.uk/~ball0402/teaching/handout4.pdf

""I've read that the principle of general covariance in general relativity is best understood as a gauge symmetry with respect to the diffeomorphism group".

The problem with this statement is that "gauge transformation" in a gauge theory means "a transformation of the mathematical model that does not have any measurable/observable effect". In this sense the existence of gauge transformations means that the mathematical model is redundant, there are degrees of freedom that are not observable. In general relativity a diffeomorphism represents a change of the reference frame. This is of course "observable" in the sense that observers living in different reference frames report different observations of the same event. So I'd say that this analogy is at least as misleading as it is helpful.

You can find more about this on the webpage of Ray Streater here: Diff M as a gauge group.

"the link between this and the notion of manifest covariance is not obvious to me."

"Manifest" simply means that the covariance of an equation is easy to see (for an educated human), it does not have any deeper meaning.

"Physicists have a "principle of general covariance" which basically states that physical laws (and in particular, physical quantities) can be stated in a form which is somehow coordinate-independent. The paradox is, such "coordinate-independent" quantities and equations are frequently stated in terms of (admittedly, arbitrary) coordinates!"

I'm not sure I understand what the paradox is. Let's say you sit in a train and pour yourself a cup of coffee. You report: "both the cup and the coffeepot don't move, therefore the coffee ends up in the cup". Let's say I observe this standing at a railroad crossing, I'll report "both the cup and the coffeepot move with 30 km/h to the east, therefore the coffee ends up in the cup". The principle of general covariance says that the event "the coffee ends up in the cup" needs to be a scalar, since all observers will agree on the fact. And the velocity of the cup and the coffeepot in the west-east direction needs to be a vector because all observers will disagree about it according to the relative velocities of their frames of reference.

So, in general relativity, we say that

a) every specific choice of coordinates of (a patch of) spacetime corresponds to an observer, who can observe events and tell us about his observations,

b) given these observations we can predict what every other observer will report by applying the diffeomorphism that takes one set of coordinates to the other set of coordinates to the mathematical gadgets that represent observable entities/effects.

The principle of general covariance says that every physical quantity has to transform in a way that we don't get an inconsistency between what we predict another observer will report and what he actually reports. If you report that the coffee ends up in the cup, and I report that the coffee ends up on you because in my reference frame the cup moves with a different velocity than the coffeepot, the theory is in trouble.

More about general relativity, the principle of general covariance and the "hole argument" of Einstein can be found on the page spacetime on the nLab."
http://mathoverflow.net/questions/50633/formalising-the-principle-of-general-covariance-in-differential-geometry
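The train/coffee example lends itself to a toy calculation. A Haskell sketch of my own (a Galilean rather than relativistic change of frame, with Frame and observe as illustrative names): individual velocities are frame-dependent, vector-like quantities, while the relative velocity of cup and coffeepot, which is what decides where the coffee ends up, is the same for every observer, i.e. scalar-like.

-- A reference frame is specified by its own velocity (in km/h, say).
newtype Frame = Frame Double

-- The velocity of an object as reported by an observer in the given frame.
observe :: Frame -> Double -> Double
observe (Frame u) v = v - u

main :: IO ()
main = do
  let cup       = 30.0          -- ground-frame velocities
      coffeepot = 30.0
      ground    = Frame 0.0
      train     = Frame 30.0
  -- Observers disagree about the velocity of the cup ...
  print (observe ground cup, observe train cup)               -- (30.0, 0.0)
  -- ... but agree about the cup's velocity relative to the coffeepot.
  print ( observe ground cup - observe ground coffeepot
        , observe train  cup - observe train  coffeepot )     -- (0.0, 0.0)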

"In general relativity, the hole argument is a "paradox" which much troubled Albert Einstein while developing his famous field equation.
It is interpreted by some philosophers as an argument against manifold substantialism, a doctrine that the manifold of events in spacetime is a "substance" which exists independently of the matter within it. Physicists disagree with this interpretation, and view the argument as a confusion about gauge invariance and gauge fixing instead.

Einstein's hole argument

In a usual field equation, knowing the source of the field determines the field everywhere. For example, if we are given the current and charge density and appropriate boundary conditions, Maxwell's equations determine the electric and magnetic fields. They do not determine the vector potential though, because the vector potential depends on an arbitrary choice of gauge.

Einstein noticed that if the equations of gravity are generally covariant, then the metric cannot be determined uniquely by its sources as a function of the coordinates of spacetime. The argument is obvious: consider a gravitational source, such as the sun. Then there is some gravitational field described by a metric g(r). Now perform a coordinate transformation r → r' where r' is the same as r for points which are inside the sun but r' is different from r outside the sun. The coordinate description of the interior of the sun is unaffected by the transformation, but the functional form of the metric for coordinate values outside the sun is changed.

This means that one source, the sun, can be the source of many seemingly different metrics. The resolution is immediate: any two fields which only differ by a coordinate transformation are physically equivalent, just as two different vector potentials which differ by a gauge transformation are equivalent. Then all these different fields are not different at all.

There are many variations on this apparent paradox. In one version, you consider an initial value surface with some data and find the metric as a function of time. Then you perform a coordinate transformation which moves points around in the future of the initial value surface, but which doesn't affect the initial surface or any points at infinity. Then you can conclude that the generally covariant field equations don't determine the future uniquely, since this new coordinate transformed metric is an equally valid solution. So the initial value problem is unsolvable in general relativity. This is also true in electrodynamics--- since you can do a gauge transformation which will only affect the vector potential tomorrow. The resolution in both cases is to use extra conditions to fix a gauge.

Meaning of coordinate invariance

For the philosophically inclined, there is still some subtlety. If the metric components are considered the dynamical variables of General Relativity, the condition that the equations are coordinate invariant doesn't have any content by itself. All physical theories are invariant under coordinate transformations if formulated properly. It is possible to write down Maxwell's equations in any coordinate system, and predict the future in the same way.

But in order to formulate electromagnetism in an arbitrary coordinate system, one must introduce a description of the space-time geometry which is not tied down to a special coordinate system. This description is a metric tensor at every point, or a connection which defines which nearby vectors are parallel. The mathematical object introduced, the Minkowski metric, changes form from one coordinate system to another, but it isn't part of the dynamics, it doesn't obey equations of motion. No matter what happens to the electromagnetic field, it is always the same. It acts without being acted upon.

In General Relativity, every separate local quantity which is used to describe the geometry is itself a local dynamical field, with its own equation of motion. This produces severe restrictions, because the equation of motion has to be a sensible one. It must determine the future from initial conditions, it must not have runaway instabilities for small perturbations, it must define a positive definite energy for small deviations. If one takes the point of view that coordinate invariance is trivially true, the principle of coordinate invariance simply states that the metric itself is dynamical and its equation of motion does not involve a fixed background geometry.

Einstein's resolution

In 1915, Einstein realized that the hole argument makes an assumption about the nature of spacetime, it presumes that the gravitational field as a function of the coordinate labels is physically meaningful by itself. By dropping this assumption general covariance became compatible with determinism, but now the gravitational field is only physically meaningful to the extent that it alters the trajectories of material particles. While two fields that differ by a coordinate transformation look different mathematically, after the trajectories of all the particles are relabeled in the new coordinates, their interactions are manifestly unchanged. This was the first clear statement of the principle of gauge invariance in physical law.

Einstein believed that the hole argument implies that the only meaningful definition of location and time is through matter. A point in spacetime is meaningless in itself, because the label which one gives to such a point is undetermined. Spacetime points only acquire their physical significance because matter is moving through them. In his words:

"All our spacetime verifications invariably amount to a determination of spacetime coincidences. If, for example, events consisted merely in the motion of material points, then ultimately nothing would be observable but the meeting of two or more of these points." (Einstein, 1916, p.117)

He considered this the deepest insight of general relativity. When asked by reporters to summarize his theory, he said:
"People before me believed that if all the matter in the universe were removed, only space and time would exist. My theory proves that space and time would disappear along with matter."

Implications of background independence for some theories of quantum gravity

Loop quantum gravity is an approach to quantum gravity which attempts to marry the fundamental principles of classical GR with the minimal essential features of quantum mechanics and without demanding any new hypotheses. Loop quantum gravity people regard background independence as a central tenet in their approach to quantizing gravity - a classical symmetry that ought to be preserved by the quantum theory if we are to be truly quantizing geometry (= gravity). One immediate consequence is that LQG is UV-finite because small and large distances are gauge equivalent. However, it has been suggested that loop quantum gravity violates background independence by introducing a preferred frame of reference ('spin foams').

Perturbative string theory (as well as a number of non-perturbative developments) is not 'obviously' background independent, because it depends on boundary conditions at infinity, similarly to how perturbative general relativity is not 'obviously' background independent. However, it is generally believed that string theory is background independent.

Expanded explanation of consequences of the hole argument to classical and quantum general relativity can be found at [1] "General Relativity and Loop Quantum Gravity" by Ian Baynham."
http://en.wikipedia.org/wiki/Hole_argument
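The electrodynamics analogy used in that passage can be checked numerically. A Haskell sketch of my own (the two-dimensional setup, the gauge function chi and the step size h are all illustrative assumptions): the observable field B_z = dA_y/dx - dA_x/dy is unchanged, up to finite-difference rounding, when the potential A is shifted by the gradient of an arbitrary function, just as metrics differing only by a coordinate transformation describe the same physical situation.

-- Scalar fields on the plane, differentiated by central differences.
type Field = (Double, Double) -> Double

h :: Double
h = 1e-3

ddx, ddy :: Field -> Field
ddx f (x, y) = (f (x + h, y) - f (x - h, y)) / (2 * h)
ddy f (x, y) = (f (x, y + h) - f (x, y - h)) / (2 * h)

-- The gauge-invariant observable: B_z = dA_y/dx - dA_x/dy.
bz :: Field -> Field -> Field
bz ax ay p = ddx ay p - ddy ax p

main :: IO ()
main = do
  let ax (_, y) = -0.5 * y            -- a vector potential giving B_z = 1
      ay (x, _) =  0.5 * x
      chi (x, y) = sin (x * y)        -- an arbitrary gauge function
      ax' p = ax p + ddx chi p        -- gauge-transformed potential A + grad chi
      ay' p = ay p + ddy chi p
      point = (0.7, -1.3)
  print (bz ax  ay  point)            -- approximately 1.0
  print (bz ax' ay' point)            -- approximately 1.0: same B_z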

"Philosophy of space and time is the branch of philosophy concerned with the issues surrounding the ontology, epistemology, and character of space and time. While such ideas have been central to philosophy from its inception, the philosophy of space and time was both an inspiration for and a central aspect of early analytic philosophy. The subject focuses on a number of basic issues, including—but not limited to—whether or not time and space exist independently of the mind, whether they exist independently of one another, what accounts for time's apparently unidirectional flow, whether times other than the present moment exist, and questions about the nature of identity (particularly the nature of identity over time)."
http://en.wikipedia.org/wiki/Philosophy_of_Space_and_Time

"The conclusion of Einstein and therefore of general relativity was however that the statement “at a certain region in time and space that contains no matter, there is (or isn’t) gravitation” is not independent of the observer. This means that the physical notions of space and time do not have the same objective meaning as in Newton’s physics.

On the other hand, the conclusion that one draws from the nPOV is a simpler and much more general one: it is evil to try to identify objects in a category in a way that does not respect their isomorphisms."
...
The noun “spacetime” is used in both special relativity and general relativity, but is best motivated from the viewpoint of general relativity. Special relativity deals with the Minkowski spacetime only. The Minkowski spacetime allows a canonical choice of global coordinates such that the metric tensor has in every point the form diag(-1, 1, 1, 1), which identifies the first coordinate as representing the time coordinate and the others as representing space coordinates.

Given a general spacetime, there is not necessarily a globally defined coordinate system, and therefore not necessarily a globally defined canonical time coordinate. More specifically, there are spacetimes that admit coordinates defined on subsets where the physical interpretation of the coordinates as modelling time and space coordinates changes over the domain of definition."
http://ncatlab.org/nlab/show/spacetime

Space-Time and Isomorphism
http://experiment.iitalia.com/librarysplit2/JSTOR%20Mundy,%20B.%20-%20Space-Time%20and%20Isomorphism.pdf

"The direct observational status of the familiar global spacetime symmetries leads us to an epistemological aspect of symmetries. According to Wigner, the spatiotemporal invariance principles play the role of a prerequisite for the very possibility of discovering the laws of nature: “if the correlations between events changed from day to day, and would be different for different points of space, it would be impossible to discover them” (Wigner, 1967, p. 29). For Wigner, this conception of symmetry principles is essentially related to our ignorance (if we could directly know all the laws of nature, we would not need to use symmetry principles in our search for them). Others, on the contrary, have arrived at a view according to which symmetry principles function as “transcendental principles” in the Kantian sense (see for instance Mainzer, 1996). It should be noted in this regard that Wigner's starting point, as quoted above, does not imply exact symmetries — all that is needed epistemologically is that the global symmetries hold approximately, for suitable spatiotemporal regions, such that there is sufficient stability and regularity in the events for the laws of nature to be discovered.

There is another reason why symmetries might be seen as being primarily epistemological. As we have mentioned, there is a close connection between the notions of symmetry and equivalence, and this leads also to a notion of irrelevance: the equivalence of space points (translational symmetry) is, for example, understood in the sense of the irrelevance of an absolute position to the physical description. There are two ways that one might interpret the epistemological significance of this: on the one hand, we might say that symmetries are associated with unavoidable redundancy in our descriptions of the world, while on the other hand we might maintain that symmetries indicate a limitation of our epistemic access — there are certain properties of objects, such as their absolute positions, that are not observable.

Finally, we would like to mention an aspect of symmetry that might very naturally be used to support either an ontological or an epistemological account. It is widely agreed that there is a close connection between symmetry and objectivity, the starting point once again being provided by spacetime symmetries: the laws by means of which we describe the evolution of physical systems have an objective validity because they are the same for all observers. The old and natural idea that what is objective should not depend upon the particular perspective under which it is taken into consideration is thus reformulated in the following group-theoretical terms: what is objective is what is invariant with respect to the transformation group of reference frames, or, quoting Hermann Weyl (1952, p. 132), “objectivity means invariance with respect to the group of automorphisms [of space-time]”.[22] Debs and Redhead (2007) label as “invariantism” the view that “invariance under a specified group of automorphisms is both a necessary and sufficient condition for objectivity” (p. 60). They point out (p. 73, and see also p. 66) that there is a natural connection between “invariantism” and structural realism."
http://plato.stanford.edu/entries/symmetry-breaking/#5

Laws, Symmetry, and Symmetry Breaking; Invariance, Conservation Principles, and Objectivity

"Given its importance in modern physics, philosophers of science have paid
surprisingly little attention to the subject of symmetries and invariances, and they
have largely neglected the subtopic of symmetry breaking. I illustrate how the topic
of laws and symmetries brings into fruitful interaction technical issues in physics and mathematics with both methodological issues in philosophy of science, such as the status of laws of physics, and metaphysical issues, such as the nature of objectivity."
http://philpapers.org/rec/EARLSA-2

"In theoretical physics, general covariance (also known as diffeomorphism covariance or general invariance) is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. The essential idea is that coordinates do not exist a priori in nature, but are only artifices used in describing nature, and hence should play no role in the formulation of fundamental physical laws.
...
The relationship between general covariance and general relativity may be summarized by quoting a standard textbook:

Mathematics was not sufficiently refined in 1917 to cleave apart the demands for "no prior geometry" and for a geometric, coordinate-independent formulation of physics. Einstein described both demands by a single phrase, "general covariance." The "no prior geometry" demand actually fathered general relativity, but by doing so anonymously, disguised as "general covariance", it also fathered half a century of confusion.

A more modern interpretation of the physical content of the original principle of general covariance is that the Lie group GL4(R) is a fundamental "external" symmetry of the world. Other symmetries, including "internal" symmetries based on compact groups, now play a major role in fundamental physical theories."
http://en.wikipedia.org/wiki/General_covariance
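As a small aside, the form-invariance under coordinate changes described above can be checked numerically in the simplest possible setting. The sketch below (my own Python illustration, not taken from the article) writes the flat two-dimensional metric in Cartesian and in polar coordinates; because the metric components transform with the Jacobian of the coordinate map, the same displacement is assigned the same squared length either way.

import numpy as np

# Hypothetical illustration: the flat 2-D metric in Cartesian coordinates (the
# identity matrix) and the same metric pulled back to polar coordinates via the
# Jacobian of the coordinate map.  The squared length of a displacement is the
# same number in either description -- the geometry does not care which
# coordinates we use.

def jacobian_polar_to_cartesian(r, theta):
    """d(x, y)/d(r, theta) for x = r cos(theta), y = r sin(theta)."""
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7
J = jacobian_polar_to_cartesian(r, theta)

g_cart  = np.eye(2)          # metric components in Cartesian coordinates
g_polar = J.T @ g_cart @ J   # the same metric expressed in polar coordinates

v_polar = np.array([0.01, -0.02])   # a small displacement (dr, dtheta)
v_cart  = J @ v_polar               # the same displacement written as (dx, dy)

ds2_polar = v_polar @ g_polar @ v_polar
ds2_cart  = v_cart @ g_cart @ v_cart
assert np.isclose(ds2_polar, ds2_cart)
print(ds2_polar, ds2_cart)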

"Since general relativity was discovered by Albert Einstein in 1915 it has been demonstrated by observation and experiment to be an accurate theory of gravitation up to cosmic scales. On small scales the laws of quantum mechanics have likewise been found to describe nature in a way consistent with every experiment performed, so far. To describe the laws of the universe fully a synthesis of general relativity and quantum mechanics must be found. Only then can physicists hope to understand the realms where both gravity and quantum come together. The big bang is one such place.

The task to find such a theory of quantum gravity is one of the major scientific endeavours of our time. Many physicists believe that string theory is the leading candidate, but string theory has so far failed to provide an adequate description of the big bang, and its success is just as incomplete in other ways. That could be because physicists do not really know what the correct underlying principles of string theory are, so they do not have the right formulation that would allow them to answer the important questions. In particular, string theory treats spacetime in quite an old fashioned way even though it indicates that spacetime must be very different at small scales from what we are familiar with.

General relativity, by contrast, is a model theory based on a geometric symmetry principle from which its dynamics can be elegantly derived. The symmetry is called general covariance or diffeomorphism invariance. It says that the dynamical equations of the gravitational field and any matter must be unchanged in form under any smooth transformation of spacetime coordinates. To understand what that means you have to think of a region of spacetime as a set of events, each one labelled by unique values of four coordinate values x, y, z, and t. The first three tell us where in space the event happened, while the fourth is time and tells us when it happened. But the choice of coordinates that are used is arbitrary, so the laws of physics should not depend on what the choice is. It follows that if any smooth mathematical function is used to map one coordinate system to any other, the equations of dynamics must transform in such a way that they look the same as they did before. This symmetry principle is a strong constraint on the possible range of equations and can be used to derive the laws of gravity almost uniquely.

The principle of general covariance works on the assumption that spacetime is smooth and continuous. Although this fits in with our normal experience, there are reasons to suspect that it may not be a suitable assumption for quantum gravity. In quantum field theory, continuous fields are replaced with a more complex structure that has a dual particle-wave nature as if they can be both continuous and discrete depending on how you measure them. Research in string theory and several other approaches to quantum gravity suggest that spacetime must also have a dual continuous and discrete nature, but without the power to probe spacetime at sufficient energies it is difficult to measure its properties directly to find out how such a quantised spacetime should work.

This is where event symmetry comes in. In a discrete spacetime treated as a disordered set of events it is natural to extend the symmetry of general covariance to a discrete event symmetry in which any function mapping the set of events to itself replaces the smooth functions used in general relativity. Such a function is also called a permutation, so the principle of event symmetry states that the equations governing the laws of physics must be unchanged when transformed by any permutation of spacetime events."
http://en.wikipedia.org/wiki/Event_symmetry
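The permutation invariance proposed here is easy to make concrete with a toy computation (mine, not the article's): any functional of a set of events that is written without reference to their labels, such as a sum over unordered pairs, returns the same number after the events are arbitrarily relabelled.

import numpy as np

# Toy illustration: a "law" written purely in terms of an unordered set of
# events cannot depend on how the events are labelled, i.e. it is invariant
# under the symmetric group acting on the event labels.

rng = np.random.default_rng(0)
events = rng.normal(size=(20, 4))      # 20 events with coordinates (t, x, y, z)

def toy_action(ev):
    """A label-independent (symmetric) functional: sum of a pairwise term."""
    diffs = ev[:, None, :] - ev[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return np.exp(-dists).sum() / 2.0

perm = rng.permutation(len(events))    # an arbitrary relabelling of the events
assert np.isclose(toy_action(events), toy_action(events[perm]))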

"This is called The Principle of Event Symmetric Space-Time which states that The universal symmetry of the laws of physics includes a mapping onto the symmetric group acting on space-time events.

Hidden Symmetry

There are a number of reasons why this principle may seem unreasonable. For one thing it suggests that we must treat space-time as a discrete set of events. In fact there are plenty of reasons to believe in discrete space-time. Theorists working on quantum gravity in various forms agree that the Planck scale defines a minimum length beyond which the Heisenberg uncertainty principle makes measurement impossible. In addition, arguments based on black hole thermodynamics suggest that there must be a finite number of physical degrees of freedom in a region of space.

A more direct reason to doubt the principle would be that there is no visible or experimental evidence of such a symmetry. The principle suggests that the world should look the same after permutations of space-time events. It should even be possible to swap events from the past with those of the future without consequence. This does not seem to accord with experience.

To counter this it is only necessary to point out that symmetries in physics can be spontaneously broken. The most famous example of this is the Higgs Mechanism which is believed to be responsible for breaking the symmetry between the electromagnetic and weak interactions. Could space-time symmetry be hidden through an analogous mechanism of spontaneous symmetry breaking?

There are, in fact, other ways in which symmetries can be hidden without being broken. Since the symmetric group acting on space-time can be regarded as a discrete extension of the diffeomorphism group in general relativity, it is worth noting that the diffeomorphism invariance is not all that evident either. If it were then we would expect to be able to distort space-time in ways reminiscent of the most bizarre hall of mirrors without consequence. Everything around us would behave like it is made of liquid rubber. Instead we find that only a small part of the symmetry which includes rigid translations and rotations is directly observed on human scales. The rubbery nature of space-time is more noticeable on cosmological scales where space-time can be distorted in quite counterintuitive ways.

If space-time is event-symmetric then we must account for space-time topology as it is observed. Topology is becoming more and more important in fundamental physics. Theories of magnetic monopoles, for example, are heavily dependent on the topological structure of space-time. To solve this problem is the greatest challenge for the event-symmetric theory.

The Soap Film Analogy

To get a more intuitive idea of how the event-symmetry of space-time can be hidden we use an analogy. Anyone who has read popular articles on the Big Bang and the expanding universe will be familiar with the analogy in which space-time is compared to the surface of an expanding balloon. The analogy is not perfect since it suggests that curved space-time is embedded in some higher dimensional flat space, when in fact, the mathematical formulation of curvature avoids the need for such a thing. Nevertheless, the analogy is useful so long as you are aware of its limitations.

We can extend the balloon analogy by imagining that space-time events are like a discrete set of particles populating some higher dimensional space. The particles might float around like a gas of molecules interacting through some kind of forces. In any gas model with just one type of molecule the forces between any two molecules will take the same form dependent on the distance between them and their relative orientations. Such a system is therefore invariant under permutations of molecules. In other words, it has the same symmetric group invariance as that postulated in the principle of event-symmetric space-time.

Given this analogy we can use what we know about the behaviour of gases and liquids to gain a heuristic understanding of event symmetric space-time. For one thing we know that gases can condense into liquids and liquids can freeze into solids. Once frozen, the molecules stay fixed relative to their neighbours and form rigid objects. In a solid the symmetry among the forces still exists but because the molecules are held within a small place the symmetry is hidden.

Another less common form of matter gives an even better picture. If the forces between molecules are just right then a liquid can form thin films or bubbles. This is familiar to us whenever we see soap suds. A soap film takes a form very similar to the balloon which served as our analogy of space-time for the expanding universe.

More Symmetry

So far we have seen how the principle of event-symmetric space-time allows us to retain space-time symmetry in the face of topology change. Beyond that we would like to find a way to incorporate internal gauge symmetry into the picture too. It turns out that there is an easy way to embed the symmetric group into matrix groups. This is interesting because, as it happens, matrix models are already studied as simple models of string theory. String theorists do not normally interpret them as models on event-symmetric space-time but it would be reasonable to do so in the light of what has been said here.

In these models the total symmetry of the system is a group of rotation matrices in some high dimensional space. The number of dimensions corresponds to the total number of space-time events in the universe, which may be infinite. Permutations of events now correspond to rotations in this space which swap over the axes.

So does this mean that the universal symmetry of physics is an infinite dimensional orthogonal matrix? The answer is probably no since an orthogonal matrix is too simple to account for the structure of the laws of physics. For example, orthogonal groups do not include supersymmetry which is important in superstring theories. The true universal symmetry may well be some much more elaborate structure which is not yet known to mathematicians.
...
On its own, the principle of event symmetric space-time is not very fruitful. What is needed is a mathematical model which incorporates the principle and which gives body to some of the speculative ideas outlined above.

It turns out that such a model can be constructed using Clifford algebras. These algebras are very simple in principle but have a remarkable number of applications in theoretical physics. They first appeared to physicists in Dirac's relativistic equation of the electron. They also turn out to be a useful way to represent the algebra of fermionic annihilation and creation operators.

If we regard a Clifford algebra as an algebra which can create and annihilate fermions at space-time events then we find we have defined a system which is event-symmetric. It can be regarded as an algebraic description of a quantum gas of fermions.

This is too simple to provide a good model of space-time but there is more. Clifford algebras also turn out to be important in the construction of supersymmetries and if we take advantage of this observation we might be able to find a more interesting supersymmetric model."
http://www.weburbia.com/pg/esst.htm
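Two of the claims above are standard linear-algebra facts and can be checked directly. The Python sketch below (my own, not from the article) verifies that permutation matrices sit inside the orthogonal group, so that permuting events is literally an orthogonal transformation that swaps coordinate axes, and that the Pauli matrices realise Clifford-algebra anticommutation relations of the kind used to build fermionic operators.

import numpy as np

# (1) The symmetric group embeds in a matrix group: the permutation matrix of
#     any permutation is orthogonal, and acting with it just swaps the axes.
rng = np.random.default_rng(0)
n = 6
perm = rng.permutation(n)
P = np.eye(n)[perm]                      # permutation matrix: row i is e_{perm[i]}
assert np.allclose(P @ P.T, np.eye(n))   # P is orthogonal
x = rng.normal(size=n)
assert np.allclose(P @ x, x[perm])       # acting with P permutes the components

# (2) The Pauli matrices generate a Clifford algebra: they anticommute and
#     square to the identity, sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
for i, a in enumerate(paulis):
    for j, b in enumerate(paulis):
        anticomm = a @ b + b @ a
        expected = 2 * np.eye(2) * (1 if i == j else 0)
        assert np.allclose(anticomm, expected)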

The Principle of Event Symmetry:

To accommodate topology change, the symmetry of space-time must be extended from the diffeomorphism group of a manifold, to the symmetric group acting on the discrete set of space-time events. This is the principle of event-symmetric space-time. I investigate a number of physical toy models with this symmetry to gain some insight into the likely nature of event-symmetric space-time. In the more advanced models the symmetric group is embedded into larger structures such as matrix groups which provide scope to unify space-time symmetry with the internal gauge symmetries of particle physics. I also suggest that the symmetric group of space-time could be related to the symmetric group acting to exchange identical particles, implying a unification of space-time and matter. I end with a definition of a new type of loop symmetry which is important in event-symmetric superstring theory.
...
If the balance between the two were to favour structures of low dimension then we would have a similar mechanism of space-time formation and event-symmetry hiding as we did in the soap-film model but in this case there would be no artificial extrinsic space in which it was embedded. In fact it seems to be quite difficult to construct random graph models which dynamically generate space-time in a fashion similar to the soap-film model."
http://rxiv.org/pdf/0907.0032v1.pdf

"We prove that any 3x3 unitary matrix can be transformed to a magic matrix by multiplying its rows and columns by phase factors. A magic matrix is defined as one for which the sum of the elements in any row or column add to the same value. This result is relevant to recent observations on particle mixing matrices."
http://www.prespacetime.com/index.php/pst/article/view/58
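The "magic" property itself is easy to state in code. The sketch below illustrates the definition only, not the paper's phase-factor construction, using a simple real example: the orthogonal (hence unitary) matrix (2/3)J − I, whose rows and columns all sum to 1.

import numpy as np

def is_magic(M, tol=1e-12):
    """True if every row sum and every column sum of M equals the same value."""
    sums = np.concatenate([M.sum(axis=0), M.sum(axis=1)])
    return np.allclose(sums, sums[0], atol=tol)

# (2/3)J - I, with J the all-ones matrix, is orthogonal (so also unitary,
# being real) and magic, with every row and column summing to 1.
J = np.ones((3, 3))
M = (2.0 / 3.0) * J - np.eye(3)
assert np.allclose(M @ M.T, np.eye(3))   # orthogonality / unitarity
assert is_magic(M)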

"It is worth investigating simple characterisations of ternary toposes in analogy to the axioms of a topos as motivated by the Boolean category Set. For instance, the basic characterising pullback square, defined by the two target arrows, would be replaced by the three target faces of a cube. Unlike duality, which is described by a single pair of functors, triality requires a triple of dual processes, and hence three parity cubes. M. Rios has pointed out that these cubes are labeled by pairs of three logic symbols, say 0, 1 and 2. There are 27 elementary functions in ternary logic, corresponding to the 27 dimensions of the 3×3 octonion Jordan algebra."
http://www.scribd.com/doc/2918313/FullProp08
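The count of 27 quoted above can be read as the number of single-argument maps on a three-valued logic: each of the three inputs may be sent to any of three outputs, giving 3^3 = 27. A quick enumeration (assuming that reading) confirms the count.

from itertools import product

# Each unary ternary function is a map {0,1,2} -> {0,1,2}; enumerate them all.
values = (0, 1, 2)
unary_ternary_functions = [dict(zip(values, outputs))
                           for outputs in product(values, repeat=3)]
assert len(unary_ternary_functions) == 27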

M Theory Lesson 77

"Posts here often discuss operads, for which compositions are represented by a directed rooted tree and the root is the single output. Dually one can consider cooperads with multiple outputs.

"These give rise naturally to coalgebras rather than algebras. And yes, people have considered vertex operator coalgebras and so on. Recall that coalgebras are essential to the concept of Hopf algebra. An example of a Hopf algebra is a universal enveloping algebra of a semisimple Lie algebra. Deformations of these, also Hopf algebras, are the notorious quantum groups.

Now Ross Street has published a book, Quantum Groups: a Path to Current Algebra, which I would love to get my hands on. The book blurb says, "A key to understanding these new developments is categorical duality." That simply means the duality given by turning a tree upside down, as drawn, so we need operad structures that combine upward branches and downward branches. Leaping ahead to triality, one expects branchings in three directions and no chosen root. Such diagrams are associated to cyclic operads, and we actually consider these with polygonal tilings, since the dual to a polygon is a branching tree with no real root."
http://kea-monad.blogspot.com/2007_07_01_archive.html

Higher Operads, Higher Categories:
http://arxiv.org/pdf/math/

Alfred Toth: Triality, Teridentity, Tetradicity (Summary)

"The present paper starts with the proof, that the field of mathematical semiotics is isomorphic to each of the four division algebras, i.e. the real and complex numbers, the quaternions and the octonions. The physico-mathemati cal concept of triality and the semiotical concept of teridentity are introduced. Is is shown, that teridentity pre supposes a tetradic (and not a triadic) sign model, that can be re-interpreted by aid of Günthers theory of reflec tive categories and its associated founding relations. As a result, both Dynkin and Feynman diagrams representing the triality of Spin(8), that gives the octonions, can be rewritten in the form of a combined Peirce-Güntherian graph model, that is not only capable of dealing with matter and ener­gy/force, but with information as well. The resulting physico-mathematical-semi otical graph model is a polycontextural one that includes both quantitative and qualitative description of the structure of the universe."
http://www.lingviko.net/grkg/grkg0802.htm

Planck Scale Physics, Pregeometry and the Notion of Time:

"The above analysis shows that it is possible to construct a pregeometric framework so as to describe the physics around Planck scale. At the deepest level i.e. beyond Planck level, we proposed purely mathematical concepts like cate-gory of generalized time. As soon as we impose certain restriction or condition,we get physical time around or above Planck scale. This generalized time category is called "noumenon" of infinite time and imposing restriction we get "phenomenon" time or physical time. This conditioning is intimately related to defining measures and also to perception. By using different ways of conditioning we can have the notion of time at different scales or level of the physical world namely in classical mechanics, theory of relativity or in quantum domain. So the physical time seems to be emergent property due to conditioning or limitations. There is a great deal of debates specially at philosophical level about the concept of emergence. Butterfield and Isham[6] made an extensive review on this topic. In everyday language, emergence is considered to be a process in temporal sense.But in our framework at the level beyond Planck scale. In generalized time category there is no concept of "before" or "after". So "emergence" should be considered in non-temporal sense. Here, "emergence" can be thought of associated with "conditioning" or "imposing limitation in measurement". The reality beyond Planck scale should be considered as "Veiled reality" of d'Espagnat[15]. He suggested that it is necessary to speak of an independent reality which can't be described in the sense of traditional realism. One can get statistical knowledge of it. In Perennial Indian Philosophy [16] this kind of "veiling" is described by theterm "Maya" or "Illusion". In Sanskrit language the generic meaning of the word "Maya" is related to "Measure" and hence the limitation. Again the word "emergence" has also been described in a completely different way rather than western sense. It is said that "emergence" is related to the concept "expansion from within without". This is not an expansion from a small center or focus but without reference to size or limitation or area, means the development of "limitless subjectivity into as limitless objectivity". Accordingly, this expansionnot being an increase in size -for infinite extension (generalized time) admits no enlargement - is just a change of condition i.e. from pre geometry to discrete Planck scale and then to continuum space-time. Here, the expansion is traced back to its origin to a kind of primordial (and periodic) vibration which is responsible for the manifestation of the physical objects and physical world .It needs further elaboration to find the linkage between the fluctuations associated with the creation or annihilation of nodes or creation or annihilation of black holes as discussed in our above cellular network model. This fluctuation might be responsible for the fuzzy character of the lumps or foamy space-time as speculated by Wheeler [9]."
http://www.chronos.msu.ru/EREPORTS/roy_planck.pdf

Multiboundary Algebra as Pregeometry

"It is well known that the Clifford Algebras, and their quaternionic and octonionic subalgebras, are structures of fundamental importance in modern physics. Geoffrey Dixon has even used them as the centerpiece of a novel approach to Grand Unification. In the spirit of Wheeler’s notion of ”pregeometry” and more recent work on quantum set theory, the goal of the present investigation is to explore how
these algebras may be seen to emerge from a simpler and more primitive order. In order to observe this emergence in the most natural way, a pregeometric domain is proposed that consists of two different kinds of boundaries, each imposing different properties on the combinatory operations occurring between elements they contain. It is shown that a very simple variant of this kind of ”multiboundary algebra” gives
rise to Clifford Algebra, in much the same way as Spencer-Brown’s simpler single-boundary algebra gives rise to Boolean algebra."
http://www.ejtp.com/articles/ejtpv4i16IIIp1.pdf

"Lee Smolin's "The Future of Spin Networks" is a testament to Roger Penrose's mathematical genius in the modern theoretical physics of quantum gravity, topological field theory, and conformal field theory. The spin-off from his twistor theory and his early attempts at quantum geometry is significant. Penrose's original spin networks for SU(2) have been extended to any Lie group G even to categories
and most importantly to the Hopf algebras of the deformed "quantum groups". If Penrose's mathematical intuition is so good, how can we doubt his physical intuition that consciousness and quantum gravity are really 2 aspects of the same problem?"
http://www.stealthskater.com/Documents/Consciousness_11.pdf

"Also changed in the last two decades, as mentioned above, is much greater familiarity among theoretical physicists with algebraic and categorical methods. That is to say, there has also been a giant leap in the available mathematical technology. I recall being told at a 1988 reception in MIT (by a very famous physicist) that my just finished PhD thesis must be nonsense because without a continuum spacetime and a Lagrangian it would be impossible to maintain conservation of energy between an atom and, say, a baseball bat. By now it’s more accepted that most likely spacetime after all is not a continuum and there is no actual Lagrangian, that things are more algebraic. To be fair many physicists over the decades have questioned that assumption but in the 1980s the received wisdom was ‘we do not have a coherent alternative’. We now have at least one fairly well-developed alternative technology in the form of noncommutative geometry, the quantum groups approach to which we will cover in [23]. Quantum groups also led directly into braided categories, 3-manifold invariants and ultimately into combinatorial methods including spin-networks on the one hand and linear logic on the other. Both noncommutative geometry and generalised logic tie up with other preexisting aspects of quantum gravity such as causal sets and quantum topology. So we start to have a kind of convergence to a new set of available tools.

What is relevant to the present article is that we therefore have just at the moment two of the three key ingredients for a revolution in physics: rich streams of relevant data coming on line and new mathematical tools. If one asks what made Einstein’s general relativity possible, it was perhaps not only cutting-edge experimental possibilities and the new mathematics of Riemann geometry whether properly acknowledged or not, but also their conjunction with deep philosophical insight which he attributed in its origins to Mach, but which was probably also tied up with wider developments of the era. As should be clear from the above, if relative realism provides some of the correct philosophy then it may ultimately be part of a perhaps more general cultural shift or ‘new renaissance’ involving pure mathematics, philosophy, art, humanity, but as for the Italian Renaissance, driven in fact by advances in technology and scientific fact. I do not know if this larger vision will be realised, our main concern in this article will be how to get actual equations and ideas for quantum gravity out of this philosophy.

2. First steps: Models with observer-observed duality

Rather than justifying the above in general terms, I think it more informative to illustrate my ideas by outlining my concrete alternative to the realist-reductionist assumption. Even if one does not agree with my conclusions, the following is a nice example of how philosophy[12] can lead to concrete scientific output[13, 14]. Specifically, the search for cosmological models exhibiting what I called observer-observed duality or ‘quantum Mach principle’ led to what remains today one of the two main classes of quantum groups.
...
What comes as the next even more general self-dual category of objects? Undoubtedly something, but note that the last one considered is already speaking about categories of categories. One cannot get too much more general than this without running out of mathematics itself. In that sense physics is getting so abstract, our understanding of it so general, that we are nearing the ‘end of physics’ as we currently know it. This explains from our point of view why quantum gravity already forces us to the edge of metaphysics and forces us to face the really ‘big’ questions. Of course, if we ever found quantum gravity I would not expect physics to actually end, rather that questions currently in metaphysics would become part of a new physics. However, from a structural point of view by the time we are dealing with categories of categories it is clear that the considerations will be very general. I would expect them to include a physical approach to set theory [18] including language and provability issues raised by Goedel and discussed notably by Penrose.
http://philpapers.org/rec/MAJAAT

Foundations of Quantum Group Theory (Shahn Majid):
http://tinyurl.com/foundationsofquantumgrouptheor

"(roughly speaking) physics polarises down the middle into two parts, one which represents the other, but that the latter equally represents the former, i.e. the two should be treated on an equal footing. The starting point is that Nature after all does not know or care what mathematics is already in textbooks. Therefore the quest for the ultimate theory may well entail, probably does entail, inventing entirely new mathematics in the process. In other words, at least at some intuitive level, a theoretical physicist also has to be a pure mathematician. Then one can phrase the question `what is the ultimate theory of physics ?' in the form `in the tableau of all mathematical concepts past present and future, is there some constrained surface or subset which is called physics ?' Is there an equation for physics itself as a subset of mathematics? I believe there is and if it were to be found it would be called the ultimate theory of physics. Moreover, I believe that it can be found and that it has a lot to do with what is different about the way a physicist looks at the world compared to a mathematician...We can then try to elevate the idea to a more general principle of representation-theoretic self-duality, that a fundamental theory of physics is incomplete unless such a role-reversal is possible. We can go further and hope to fully determine the (supposed) structure of fundamental laws of nature among all mathematical structures by this self-duality condition. Such duality considerations are certainly evident in some form in the context of quantum theory and gravity. The situation is summarised to the left in the following diagram. For example, Lie groups provide the simplest examples of Riemannian geometry, while the representations of similar Lie groups provide the quantum numbers of elementary particles in quantum theory. Thus, both quantum theory and non-Euclidean geometry are needed for a self-dual picture. Hopf algebras (quantum groups) precisely serve to unify these mutually dual structures."
http://planetmath.org/encyclopedia/IHESOnTheFusionOfMathematicsAndTheoreticalPhysics2.html

A Self-Explaining Universe?
http://mccabism.blogspot.com/2008/04/self-explaining-universe.html

The Duality of the Universe:
http://philsci-archive.pitt.edu/4039/1/Dualiverse.pdf

Algebraic Self-Duality as the "Ultimate Explanation"

"Shahn Majid's philosophy of physics is critically presented. In his view the postulate that the universe should be self-explaining implies that no fundamental theory of physics is complete unless it is self-dual. Majid shows that bicrossproduct Hopf algebras have this property. His philosophy is compared with other approaches to the ultimate explanation and briefly analyzed.
...
As we have seen, in his philosophical comments Majid refers to Mach’s principle. One of its maximalistic formulations postulates that all local properties of the universe should be completely determined by its global properties (Demaret et al., 1997). Unfortunately, in spite of many claims to the opposite, general relativity and its known modifications fail to satisfy this postulate (one of the reasons being that in the spaces most often used in physics, local geometric properties are, at least in part, determined by the tangent space independently of the global structure). It is worthwhile to notice that noncommutative spaces are, in general, nonlocal (concepts such as those of a point or its neighborhood have, in principle, no meaning in them), and if we assume – along the line of Majid’s suggestions – that the fundamental level of physics is modeled by such a space, then the possibility opens to have a “fully Machian” model in which local properties are completely engulfed by the global structure.
http://www.springerlink.com/content/pv74037m67326r7q/

A very general statement of Mach's principle is "Local physical laws are determined by the large-scale structure of the universe."
http://en.wikipedia.org/wiki/Mach's_principle

"The view of physics that is most generally accepted at the moment is that one can divide the discussion of the universe into two parts. First, there is the question of the local laws satisfied by the various physical fields. These are usually expressed in the form of differential equations. Secondly, there is the problem of boundary conditions for these equations, and the global nature of their solutions. This involves thinking about the edge of space-time in some sense. These two parts may not be independent. Indeed it has been held that local laws are determined by the large-scale structure of the universe. This view is generally connected with the name of Mach, and has more recently been developed by Dirac (1938), Sciama (1953), Dicke (1964), Hoyle and Narlikar (1964), and others."
http://tinyurl.com/largescalestructureofunivers

"What if Physical Reality is no different, created by the rules of looking at the world as a Scientist? In other words, just maybe, as we search for the ultimate theory of physics we are in fact rediscovering our own assumptions in being Scientists, the Scientific Method?

To explain why I think so, we need to think about the nature of representation. Imagine a bunch of artists gathered around a scene X, each drawing their own representation of it from their angle, style and ethos. Any one bit x of the scene is represented by the artist f of the collection as maybe a fleck of paint on their canvas. Now, the amazing thing — and this is possibly the deepest thing I know in all of physics and mathematics — is that one could equally well view the above another way in which the ‘real thing’ is not the scene X, which might after all be just a useless bowl of fruit, but the collection, X* of artists. So it is not bits x of X being represented but rather it is the artists f in X*. Each point x of the fruit bowl can be viewed as a representation of X* in which the artist f is represented by the same fleck of paint as we had before. By looking at how different artists treat a single point x of the fruit bowl we can ‘map out’ the structure of the collection X*.

What this is is a deep duality between observer and observed which is built into the nature of representation. Whenever any mathematical object f represents some structure X we can equally well view an element x of X as representing f as an element of some other structure X*. The same numbers f(x) are written now as x(f). In mathematics we say that X and X* are dually paired. Importantly one is not more ‘real’ than the other.

So within mathematics we have this deep observer-observed or measurer-measured duality symmetry. We can reverse roles. But is the reversed theory equivalent to the original? In general no; the bowl of fruit scene is not equivalent to the collection of artists. But in physics perhaps yes, and perhaps such a requirement, which I see as coming out of the scientific method, is a key missing ingredient in our theoretical understanding."
http://www.cambridgeblog.org/2008/12/reflections-on-a-self-representing-universe/
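The role-reversal Majid describes is exactly the evaluation pairing f(x) = x(f), and it can be written down in a few lines. The toy rendering below is mine, not Majid's formalism: every point of a "scene" becomes a function on the collection of "observers" by evaluation, and the same numbers appear either way round.

# Dual pairing sketch: a point x of the scene X, viewed as a function x_hat on
# the collection X* of observers f, via x_hat(f) = f(x).  Neither side is
# privileged; the pairing produces the same values read in either direction.

scene = [0.0, 1.0, 2.5]                                # the set X
observers = [lambda x: x ** 2, lambda x: x + 1.0]      # the set X* (toy "artists")

def dual(x):
    """The point x regarded as a function on the observers."""
    return lambda f: f(x)

for x in scene:
    for f in observers:
        assert f(x) == dual(x)(f)      # f(x) rewritten as x(f)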

"So far in these posts we have looked at testable quantum gravity effects, but I have not said much about the ultimate theory of quantum gravity itself. There is a simple reason: I do not think we have a compelling theory yet.

Rather, I think that this deepest and most long-standing of all problems in fundamental physics still needs a revolutionary new idea or two for which we are still grasping. More revolutionary even than time-reversal. Far more revolutionary and imaginative than string theory. In this post I’ll take a personal shot at an idea — a new kind of duality principle that I think might ultimately relate gravity and information.

The idea that gravity and information should be intimately related, or if you like that information is not just a semantic construct but has a physical aspect, is not new. It goes back at least some decades to works of Bekenstein and Hawking showing that black holes radiate energy due to quantum effects. As a result it was found that black holes could be viewed as having a certain entropy, proportional to the surface area of the black hole. Let’s add to this a ‘black-hole no-hair theorem’ which says that black holes, within General Relativity, have no structure other than their total mass, spin and charge (in fact, surprisingly like an elementary particle).

What this means is that when something falls into a black hole all of its finer information content is lost forever. This is actually a bit misleading because in our perception of time far from the black hole the object never actually falls in but hovers forever at the edge (the event horizon) of the black hole. But let’s gloss over that technicality. Simply put, then, black holes gobble up information and turn it into raw mass, spin and charge. This in turn suggests a kind of interplay between mass or gravity, and information. These are classical gravity or quantum particle arguments, not quantum gravity, but a true theory of quantum gravity should surely explain all this much more deeply.
http://www.cambridgeblog.org/2008/12/aristotle-may-provide-the-key-to-quantum-gravity/
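For concreteness, the entropy-area proportionality mentioned above is the textbook Bekenstein-Hawking formula S = k_B c^3 A / (4 G ħ); the short calculation below (standard physics, not part of the blog excerpt) evaluates it for a solar-mass black hole.

import math

# Bekenstein-Hawking entropy of a solar-mass black hole, using approximate
# CODATA constants.
G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8       # speed of light, m/s
hbar  = 1.055e-34     # reduced Planck constant, J s
k_B   = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30      # solar mass, kg

r_s = 2 * G * M_sun / c**2          # Schwarzschild radius (~3 km)
A   = 4 * math.pi * r_s**2          # horizon area
S   = k_B * c**3 * A / (4 * G * hbar)
print(f"horizon area ~ {A:.3e} m^2, entropy ~ {S:.3e} J/K")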

"Metrical Layering

In a conspansive universe, the spacetime metric undergoes constant rescaling. Whereas Einstein required a generalization of Cartesian space embodying higher-order geometric properties like spacetime curvature, conspansion requires a yet higher order of generalization in which even relativistic properties, e.g. spacetime curvature inhering in the gravitational field, can be progressively rescaled. Where physical fields of force control or program dynamical geometry, and programming is logically stratified as in NeST, fields become layered stacks of parallel distributive programming that decompose into field strata (conspansive layers) related by an intrinsic requantization function inhering in, and logically inherited from, the most primitive and connective layer of the stack. This "storage process" by which infocognitive spacetime records its logical history is called metrical layering (note that since storage is effected by inner-expansive domains which are internally atemporal, this is to some extent a misnomer reflecting weaknesses in standard models of computation).

The metrical layering concept does not involve complicated reasoning. It suffices to note that distributed (as in “event images are outwardly distributed in layers of parallel computation by inner expansion”) effectively means “of 0 intrinsic diameter” with respect to the distributed attribute. If an attribute corresponding to a logical relation of any order is distributed over a mathematical or physical domain, then interior points of the domain are undifferentiated with respect to it, and it need not be transmitted among them. Where space and time exist only with respect to logical distinctions among attributes, metrical differentiation can occur within inner-expansive domains (IEDs) only upon the introduction of consequent attributes relative to which position is redefined in an overlying metrical layer, and what we usually call “the metric” is a function of the total relationship among all layers.

The spacetime metric thus amounts to a Venn-diagrammatic conspansive history in which every conspansive domain (lightcone cross section, Venn sphere) has virtual 0 diameter with respect to distributed attributes, despite apparent nonzero diameter with respect to metrical relations among subsequent events. What appears to be nonlocal transmission of information can thus seem to occur. Nevertheless, the CTMU is a localistic theory in every sense of the word; information is never exchanged “faster than conspansion”, i.e. faster than light (the CTMU’s unique explanation of quantum nonlocality within a localistic model is what entitles it to call itself a consistent “extension” of relativity theory, to which the locality principle is fundamental).

Metrical layering lets neo-Cartesian spacetime interface with predicate logic in such a way that in addition to the set of “localistic” spacetime intervals riding atop the stack (and subject to relativistic variation in space and time measurements), there exists an underlying predicate logic of spatiotemporal contents obeying a different kind of metric. Spacetime thus becomes a logical construct reflecting the logical evolution of that which it models, thereby extending the Lorentz-Minkowski-Einstein generalization of Cartesian space. Graphically, the CTMU places a logical, stratified computational construction on spacetime, implants a conspansive requantization function in its deepest, most distributive layer of logic (or highest, most parallel level of computation), and rotates the spacetime diagram depicting the dynamical history of the universe by 90° along the space axes. Thus, one perceives the model’s evolution as a conspansive overlay of physically-parametrized Venn diagrams directly through the time (SCSPL grammar) axis rather than through an extraneous z axis artificially separating theorist from diagram. The cognition of the modeler – his or her perceptual internalization of the model – is thereby identified with cosmic time, and infocognitive closure occurs as the model absorbs the modeler in the act of absorbing the model.

To make things even simpler: the CTMU equates reality to logic, logic to mind, and (by transitivity of equality) reality to mind. Then it makes a big Venn diagram out of all three, assigns appropriate logical and mathematical functions to the diagram, and deduces implications in light of empirical data. A little reflection reveals that it would be hard to imagine a simpler or more logical theory of reality."
http://www.megafoundation.org/CTMU/Articles/Supernova.html

Physicists Use Soap Bubbles to Study Black Holes:

“Evidence for a membrane-like behavior of black holes has been known for two decades, in work pioneered by Kip Thorne and his colleagues,” said Cardoso, who came to UM last fall. “This membrane paradigm approach makes calculations easier.”
http://www.physorg.com/news66408910.html

Isoperimetric Inequalities for Multivarifolds:

In the paper [1], we have developed the theory of multivarifolds to formulate and solve the classical multidimensional Plateau problem (i.e. the Plateau problem in a homotopy class of multidimensional films). The present paper announces a further application of the multivarifold theory to another important question: the isoperimetric inequalities for parametrized films.
http://www.math.ac.vn/publications/acta/pdf/198101088.pdf

Minimal Surfaces, Stratified Multivarifolds, and the Plateau Problem:

Dao Trong Thi formulated the new concept of a multivarifold, which is the functional analog of a geometrical stratified surface and enables us to solve Plateau's problem in a homotopy class.
http://books.google.com/books?id=mncIV2c5Z4sC&source=gbs_navlinks_s

The remaining chapters present a detailed exposition of one of these trends (the homotopic version of Plateau's problem in terms of stratified multivarifolds) and the Plateau problem in homogeneous symplectic spaces.
http://www.ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-84

"The isoperimetric inequality is a geometric inequality involving the square of the circumference of a closed curve in the plane and the area of a plane region it encloses, as well as its various generalizations. Isoperimetric literally means "having the same perimeter". The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. A closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found, some of them stunningly simple. The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces.

Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere.
...
The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves in the plane of fixed perimeter, which curve (if any) maximizes the area of its enclosed region? This question can be shown to be equivalent to the following problem: Among all closed curves in the plane enclosing a fixed area, which curve (if any) minimizes the perimeter?

This problem is conceptually related to the principle of least action in physics, in that it can be restated: what is the principle of action which encloses the greatest area, with the greatest economy of effort? The 15th-century philosopher and scientist, Cardinal Nicholas of Cusa, considered rotational action, the process by which a circle is generated, to be the most direct reflection, in the realm of sensory impressions, of the process by which the universe is created. German astronomer and astrologer Johannes Kepler invoked the isoperimetric principle in discussing the morphology of the solar system, in Mysterium Cosmographicum (The Sacred Mystery of the Cosmos, 1596)."
http://en.wikipedia.org/wiki/Isoperimetric_inequality
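The planar inequality described above, 4πA ≤ L², is easy to probe numerically. The sketch below (my own check, not from the article) confirms it on random simple polygons and shows regular polygons approaching equality as they approach the circle.

import numpy as np

def area_perimeter(pts):
    """Shoelace area and perimeter of a closed polygon given by its vertices."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * yn - xn * y))
    perim = np.sum(np.hypot(xn - x, yn - y))
    return area, perim

# Random star-shaped (hence simple) polygons all satisfy 4*pi*A <= L**2.
rng = np.random.default_rng(1)
for _ in range(100):
    n = rng.integers(3, 30)
    angles = np.sort(rng.uniform(0, 2 * np.pi, n))
    radii = rng.uniform(0.2, 1.0, n)
    pts = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    A, L = area_perimeter(pts)
    assert 4 * np.pi * A <= L**2 + 1e-9

# Regular n-gons approach equality as n grows (the circle is extremal).
for n in (3, 6, 12, 96):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    A, L = area_perimeter(np.column_stack([np.cos(t), np.sin(t)]))
    print(n, 4 * np.pi * A / L**2)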

"For Maupertuis, however, it was important to retain the concept of the hard body. And the beauty of his principle of least action was that it applied just as well to hard and elastic bodies. Since he had shown that the principle also applied to systems of bodies at rest and to light, it seemed that it was truly universal.

The final stage of his argument came when Maupertuis set out to interpret his principle in cosmological terms. ‘Least action’ sounds like an economy principle, roughly equivalent to the idea of economy of effort in daily life. A universal principle of economy of effort would seem to display the working of wisdom in the very construction of the universe. This seems, in Maupertuis’s view, a more powerful argument for the existence of an infinitely wise creator than any other that can be advanced.

He published his thinking on these matters in his Essai de cosmologie (Essay on cosmology) of 1750. He shows that the major arguments advanced to prove God, from the wonders of nature or the apparent regularity of the universe, are all open to objection (what wonder is there in the existence of certain particularly repulsive insects, what regularity is there in the observation that all the planets turn in nearly the same plane – exactly the same plane might have been striking but 'nearly the same plane' is far less convincing). But a universal principle of wisdom provides an undeniable proof of the shaping of the universe by a wise creator.

Hence the principle of least action is not just the culmination of Maupertuis’s work in several areas of physics, he sees it as his most important achievement in philosophy too, giving an incontrovertible proof of God."
http://en.wikipedia.org/wiki/Pierre_Louis_Maupertuis#Least_Action_Principle

From Soap Bubbles to String Theory:
http://www.ceefcares.org/From_Soap_Bubbles_to_String_Theory-Yoni_Kahn.ppt

Isoperimetric and Isoparametric Problems

Traditional isoperimetric problems compare different plane regions having equal perimeters and ask for the region of maximal area. This paper treats a different type of problem: find incongruent plane regions that have equal perimeters and equal areas. Hence the new name: isoparametric problem.
http://www.mamikon.com/USArticles/IsoPArametric.pdf
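A concrete instance of the isoparametric problem as stated above (my example, not one from the paper): the 3-4-5 right triangle and the rectangle with sides 3 ± √3 are incongruent, yet both have area 6 and perimeter 12.

import math

# Two incongruent plane regions with equal perimeter and equal area.
tri_area, tri_perim = 0.5 * 3 * 4, 3 + 4 + 5            # 3-4-5 right triangle

a, b = 3 + math.sqrt(3), 3 - math.sqrt(3)               # rectangle sides
rect_area, rect_perim = a * b, 2 * (a + b)

assert math.isclose(tri_area, rect_area)    # both equal 6
assert math.isclose(tri_perim, rect_perim)  # both equal 12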

Isoparametric Manifold:
http://en.wikipedia.org/wiki/Isoparametric_manifold

Parametric Equation:
http://en.wikipedia.org/wiki/Parametric_equation

"Parametrization (or parameterization; parameterisation, parametrisation in British English) is the process of deciding and defining the parameters necessary for a complete or relevant specification of a model or geometric object.

Sometimes, this may only involve identifying certain parameters or variables. If, for example, the model is of a wind turbine with a particular interest in the efficiency of power generation, then the parameters of interest will probably include the number, length and pitch of the blades.

Most often, parametrization is a mathematical process involving the identification of a complete set of effective coordinates or degrees of freedom of the system, process or model, without regard to their utility in some design. Parametrization of a line, surface or volume, for example, implies identification of a set of coordinates that allows one to uniquely identify any point (on the line, surface, or volume) with an ordered list of numbers. Each of the coordinates can be defined parametrically in the form of a parametric curve (one-dimensional) or a parametric equation (2+ dimensions).
...
As indicated above, there is arbitrariness in the choice of parameters of a given model, geometric object, etc. Often, one wishes to determine intrinsic properties of an object that do not depend on this arbitrariness, which are therefore independent of any particular choice of parameters. This is particularly the case in physics, wherein parametrization invariance (or 're-parametrization invariance') is a guiding principle in the search for physically-acceptable theories (particularly in general relativity).

For example, whilst the location of a fixed point on some curved line may be given by different numbers depending on how the line is parametrized, the length of the line between two such fixed points will be independent of the choice of parametrization, even though it might have been computed using other coordinate systems.

Parametrization invariance implies that either the dimensionality or the volume of the parameter space is larger than that which is necessary to describe the physics in question (as exemplified in scale invariance)."
http://en.wikipedia.org/wiki/Parametrization
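The invariance claim in the excerpt, that the length of a curve segment does not depend on how the curve is parametrized, can be checked numerically. The sketch below (a simple illustration of mine) measures the same quarter circle under two different parametrizations.

import numpy as np

def poly_length(x, y):
    """Length of the polygonal chain through the sampled points."""
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

# The same quarter of the unit circle, parametrized by the angle t and by
# s = sqrt(angle); the computed length is the same either way (~ pi/2).
t = np.linspace(0.0, np.pi / 2, 20_000)
s = np.linspace(0.0, np.sqrt(np.pi / 2), 20_000)

len_t = poly_length(np.cos(t), np.sin(t))
len_s = poly_length(np.cos(s ** 2), np.sin(s ** 2))
assert np.isclose(len_t, len_s, atol=1e-6)
print(len_t, len_s, np.pi / 2)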

"In mathematics, statistics, and the mathematical sciences, a parameter (G: auxiliary measure) is a quantity that serves to relate functions and variables using a common variable (often t) when such a relationship would be difficult to explicate with an equation.
...
In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz, "Natural Deduction"; Paulson, "Designing a theorem prover"). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction special provision has to be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables.

In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage isn't consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel.

"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal."
http://en.wikipedia.org/wiki/Parameter

"These two terms are sometimes loosely used interchangeably; in particular, "argument" is sometimes used in place of "parameter". Nevertheless, there is a difference. Properly, parameters appear in procedure definitions; arguments appear in procedure calls.

A parameter is an intrinsic property of the procedure, included in its definition. For example, in many languages, a minimal procedure to add two supplied integers together and calculate the sum total would need two parameters, one for each expected integer. In general, a procedure may be defined with any number of parameters, or no parameters at all. If a procedure has parameters, the part of its definition that specifies the parameters is called its parameter list.

By contrast, the arguments are the values actually supplied to the procedure when it is called. Unlike the parameters, which form an unchanging part of the procedure's definition, the arguments can, and often do, vary from call to call. Each time a procedure is called, the part of the procedure call that specifies the arguments is called the argument list."
http://en.wikipedia.org/wiki/Parameter_(computer_science)
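In Python terms, the distinction reads as follows (a minimal illustration):

# "a" and "b" are the parameters -- part of the procedure's definition.
def add(a, b):
    return a + b

# 2 and 3, or the current values of x and y, are the arguments -- the values
# actually supplied at each call, which can vary from call to call.
x, y = 10, 32
print(add(2, 3))
print(add(x, y))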

Distance to Bifurcation in Multidimensional Parameter Space: Margin Sensitivity and Closest Bifurcations

The problem of operating or designing a system with robust stability with respect to many parameters can be viewed as a geometric problem in multidimensional parameter space of finding the position of the nominal parameters relative to hypersurfaces at which stability is lost in a bifurcation. The position of the nominal parameters relative to these hypersurfaces may be quantified by numerically computing the bifurcations in various directions in parameter space and the bifurcations closest to the nominal parameters. The sensitivity of the distances to these bifurcations yields hyperplane approximations to the hypersurfaces and optimal changes in parameters to improve the stability robustness. Methods for saddle-node, Hopf, transcritical, pitchfork, cusp, and isola bifurcation instabilities and constraints are outlined. These methods take full account of system nonlinearity and are practical in high dimensional parameter spaces. Applications to the design and operation of electric power systems, satellites, hydraulics and chemical process control are summarized.
http://eceserv0.ece.wisc.edu/~dobson/PAPERS/dobsonCBchapter03.pdf
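A one-parameter toy version of the "distance to bifurcation" idea (the paper works in many parameter dimensions and with several bifurcation types; this sketch is only the simplest saddle-node case): for dx/dt = r + x², equilibria exist for r ≤ 0 and vanish at r = 0, so from a nominal r₀ = −1 the distance to instability in parameter space is 1.

import numpy as np

def equilibria(r):
    """Equilibria of dx/dt = r + x**2; they exist only for r <= 0."""
    return np.array([-np.sqrt(-r), np.sqrt(-r)]) if r <= 0 else np.array([])

r0 = -1.0                                   # nominal parameter value
rs = np.linspace(r0, 1.0, 2001)             # sweep the parameter upward
r_bif = next(r for r in rs if equilibria(r).size == 0)   # first r with no equilibria
print("estimated bifurcation at r =", round(r_bif, 3),
      "distance from r0 =", round(r_bif - r0, 3))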

"This paper describes modelling tools for formal systems design in the fields of information and physical systems. The concept and method of incursion and hyperincursion are first applied to the fractal machine, an hyperincursive cellular automata with sequential computations with exclusive or where time plays a central role. Simulations show the generation of fractal patterns. The computation is incursive, for inclusive recursion, in the sense that an automaton is computed at future time t + 1 as a function of its neighbouring automata at the present and/or past time steps but also at future time t + 1. The hyperincursion is an incursion when several values can be generated for each time step. External incursive inputs cannot be transformed to recursion. This is really a practical example of the final cause of Aristotle. Internal incursive inputs defined at the future time can be transformed to recursive inputs by self-reference defining then a self-referential system. A particular case of self-reference with the fractal machine shows a non deterministic hyperincursive field. The concepts of incursion and hyperincursion can be related to the theory of hypersets where a set includes itself. Secondly, the incursion is applied to generate fractals with different scaling symmetries. This is used to generate the same fractal at different scales like the box counting method for computing a fractal dimension. The simulation of fractals with an initial condition given by pictures is shown to be a process similar to a hologram. Interference of the pictures with some symmetry gives rise to complex patterns. This method is also used to generate fractal interlacing. Thirdly, it is shown that fractals can also be generated from digital diffusion and wave equations, that is to say from the modulo N of their finite difference equations with integer coefficients.

Keywords: Computer Simulation, Fractals, Information Systems, Mathematics, Models, Biological, Philosophy, Physics"
http://pubget.com/paper/9231908
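The exclusive-or automata and fractal patterns mentioned in the abstract have a familiar minimal cousin: an ordinary (recursive, not incursive) XOR rule already draws the Sierpinski triangle. The sketch below is that plain version, offered only to make the "fractals from XOR" remark concrete.

# Each cell at time t+1 is the XOR of its two neighbours at time t
# (elementary cellular automaton rule 90); a single seed cell unfolds into
# the Sierpinski-triangle pattern.
width, steps = 63, 32
row = [0] * width
row[width // 2] = 1

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]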

"This entry is about two kinds of circularity: object circularity, where an object is taken to be part of itself in some sense; and definition circularity, where a collection is defined in terms of itself. Instances of these two kinds of circularity are sometimes problematic, and sometimes not. We are primarily interested in object circularity in this entry, especially instances which look problematic when one tries to model them in set theory. But we shall also discuss circular definitions.

The term non-wellfounded set refers to sets which contain themselves as members, and more generally which are part of an infinite sequence of sets each term of which is an element of the preceding set. So they exhibit object circularity in a blatant way. Discussion of such sets is very old in the history of set theory, but non-wellfounded sets are ruled out of Zermelo-Fraenkel set theory (the standard theory) due to the Foundation Axiom (FA). As it happens, there are alternatives to this axiom FA. This entry is especially concerned with one of them, an axiom first formulated by Marco Forti and Furio Honsell in a 1983 paper. It is now standard to call this principle the Anti-Foundation Axiom (AFA), following its treatment in an influential book written by Peter Aczel in 1988.

The attraction of using AFA is that it gives a set of tools for modeling circular phenomena of various sorts. These tools are connected to important circular definitions, as we shall see. We shall also be concerned with situating both the mathematics and the underlying intuitions in a broader picture, one derived from work in coalgebra. Incorporating concepts and results from category theory, coalgebra leads us to concepts such as corecursion and coinduction; these are in a sense duals to the more standard notions of recursion and induction.

The topic of this entry also has connections to work in game theory (the universal Harsanyi type spaces), semantics (especially situation-theoretic accounts, or others where a “world” is allowed to be part of itself), fractal sets and other self-similar sets, the analysis of recursion, category theory, and the philosophical side of set theory."
http://plato.stanford.edu/entries/nonwellfounded-set-theory/index.html
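A small illustration (my own, not from the SEP entry): under AFA a non-wellfounded set is depicted by a pointed graph, and two graphs depict the same set exactly when their points are bisimilar. The sketch below computes the greatest fixed point of the back-and-forth condition and identifies the one-node and two-node pictures of Omega = {Omega}.

```python
# Sketch: non-wellfounded sets as pointed graphs, compared by bisimulation.
# A graph is a dict mapping a node to the set of its "elements" (children).

def bisimilar(g1, p1, g2, p2):
    """Naive greatest-fixed-point bisimulation check between pointed graphs."""
    # Start by relating every pair of nodes, then prune pairs that fail the
    # back-and-forth condition until nothing changes (greatest fixed point).
    rel = {(a, b) for a in g1 for b in g2}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            forth = all(any((x, y) in rel for y in g2[b]) for x in g1[a])
            back = all(any((x, y) in rel for x in g1[a]) for y in g2[b])
            if not (forth and back):
                rel.discard((a, b))
                changed = True
    return (p1, p2) in rel

omega_1 = {"O": {"O"}}                        # O = {O}: a single self-loop
omega_2 = {"A": {"B"}, "B": {"A"}}            # A = {B}, B = {A}: a 2-cycle
print(bisimilar(omega_1, "O", omega_2, "A"))  # True: both pictures depict Omega
```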

"So the iterative method is giving us the terms of the iteration sequence beginning with [0,1]. Finally, the iteration sequence beginning with [0,1] is a shrinking sequence of sets, and then by the way limits are calculated in X, the limit is exactly the intersection of the shrinking sequence."
http://plato.stanford.edu/entries/nonwellfounded-set-theory/modeling-circularity.html
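A toy numeric rendering of the quoted remark, assuming nested closed intervals as the concrete case (my own example): the limit of the shrinking sequence is just the running intersection, which here closes in on the single point 0.

```python
# Illustrative sketch: a shrinking sequence of sets starting at [0, 1],
# whose limit is the intersection of all its terms.

from fractions import Fraction

def nested_interval(n):
    """n-th term of a shrinking sequence: [0, 1/2**n], contained in [0, 1]."""
    return (Fraction(0), Fraction(1, 2**n))

def intersection(k):
    """Intersect the first k terms; for nested intervals this is the smallest one."""
    lo, hi = nested_interval(0)
    for n in range(1, k):
        a, b = nested_interval(n)
        lo, hi = max(lo, a), min(hi, b)
    return lo, hi

print(intersection(10))   # (Fraction(0, 1), Fraction(1, 512)): shrinking toward {0}
```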

"Abstract Data Types, Abstract State Machines, Coalgebra, Grammars, Graphs, Hypersets, Manifesto, Modal Logic, Natural Language Semantics, Quantum Logic, Recursion/Corecursion, Situation Theory"
http://math.indiana.edu/home/moss/aarticles.htm

Parametric Corecursion

This paper gives a treatment of substitution for "parametric" objects in final coalgebras, and also presents principles of definition by corecursion for such objects. The substitution results are coalgebraic versions of well-known consequences of initiality, and the work on corecursion is a general formulation which allows one to specify elements of final coalgebras using systems of equations. One source of our results is the theory of hypersets, and at the end of this paper we sketch a development of that theory which calls upon the general work of this paper to a very large extent and particular facts of elementary set theory to a much smaller extent.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.7061
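A minimal sketch of specifying elements of a final coalgebra by systems of equations, with Python generators standing in for infinite streams (the names nats and smap are mine, not the paper's notation): each definition is a guarded equation giving a head and the equation for the tail.

```python
# Sketch: streams defined corecursively by guarded equations,
# e.g. nats(n) = n : nats(n + 1).

from itertools import islice

def nats(n=0):
    """Stream whose head is n and whose tail is nats(n + 1)."""
    while True:
        yield n          # observe the head...
        n += 1           # ...then continue with the tail, forever

def smap(f, stream):
    """Map over a stream; itself a corecursive definition on the output."""
    for x in stream:
        yield f(x)

print(list(islice(smap(lambda x: x * x, nats()), 8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```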

FORMAL FRAMEWORKS FOR CIRCULAR PHENOMENA
Possibilities of Modeling Pathological Expressions in Formal and Natural Languages
...
In the next section, we will consider general systems of equations and a very important concept, namely the concept of corecursive substitution. This principle is constitutive for the whole theory of hypersets.
http://deposit.ddb.de/cgi-bin/dokserv?idn=964198576&dok_var=d1&dok_ext=pdf&filename=964198576.pdf

Fixed Points in Computer Science:
http://www.brics.dk/NS/02/2/BRICS-NS-02-2.pdf
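For a concrete touchstone alongside the linked BRICS notes (a toy example of my own): the least-fixed-point construction computed by Kleene iteration, here for graph reachability, defined as the least set X with X = {start} ∪ successors(X).

```python
# A standard least-fixed-point computation (Kleene iteration):
# reachability in a graph as the least solution of X = {start} U successors(X).

def reachable(graph, start):
    """Iterate the one-step expansion until it stabilizes (fixed point)."""
    x = set()
    while True:
        new = {start} | {w for v in x for w in graph.get(v, ())}
        if new == x:          # fixed point reached
            return x
        x = new

g = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(sorted(reachable(g, "a")))   # ['a', 'b', 'c']
```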

One hundred years of Russell's paradox: mathematics, logic, philosophy:
http://tinyurl.com/100yearsofRusselsparadox

Formal Semantics of Acknowledgements, Agreements and Disagreements:

Acknowledgements, agreements, and disagreements are basic moves in communications among agents, since the moves form and revise shared information among the agents, which is the basic prerequisite of group actions. This paper investigates formal semantics of the moves from the point of view of information sharing among agents, exploiting the circular objects assured by Hyperset Theory. Thereby avoiding definitions of shared information by infinite conjunctions of propositions with nested epistemic modalities, the actions are all interpreted as one-step (not infinitely-many-step) formations of shared information by corecursive definitions. As a result, we can provide a structure of inference between the actions, and define a process equivalence of dialogues with respect to their resulting shared information.
http://tinyurl.com/Approaches-to-intelligent-agen
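A hedged sketch of that one-step reading of shared information (a toy Kripke model of my own, not the cited paper's formalism): instead of an infinite conjunction of nested "knows that" clauses, common knowledge is evaluated over the reachability closure of the agents' combined accessibility relation.

```python
# Sketch: common knowledge computed in one step from the reachability closure
# of the agents' combined accessibility relation (toy model, made-up data).

def closure(relations, start):
    """Worlds reachable from `start` via any agent's relation, transitively."""
    edges = {(u, v) for rel in relations.values() for (u, v) in rel}
    seen, frontier = {start}, {start}
    while frontier:
        frontier = {v for (u, v) in edges if u in frontier} - seen
        seen |= frontier
    return seen

def common_knowledge(relations, fact, world):
    """`fact` is commonly known at `world` iff it holds throughout the closure."""
    return all(fact(w) for w in closure(relations, world))

relations = {"alice": {("w1", "w1"), ("w1", "w2")},
             "bob":   {("w2", "w2"), ("w2", "w3")}}
raining = lambda w: w in {"w1", "w2", "w3"}                 # true at every world
print(common_knowledge(relations, raining, "w1"))           # True
```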

"The CTMU seeks to answer such existential questions as “What came before the beginning?”, “Into what meta-space is our universe expanding?”, and “Our universe seems to be impossibly improbable. How did our life-supporting continuum manage to come into being in the face of impossible odds?” The CTMU postulates that,

(A) Our Universe Only Appears to Be Expanding
Our universe, instead of expanding, is shrinking uniformly in such a way that from the inside, it appears to be expanding. (This sidesteps the question, “If our universe is expanding, into what ‘meta-space’ is it expanding?”)

(B) An Unbounded Sea of Possibilities
Our universe has evolved from a seed in an initial, “informational sea” of unbounded possibilities to its current state through a “filtration process” in which various possibilities and classes of possibilities have been eliminated. A minute subset of possibilities is eliminated every time a quantum wave function “collapses” to produce a single outcome. Before the “collapse” of the ψψ* probability function occurs, an entire envelope of possibilities exists; but once a measurement is made to determine the outcome, the ψψ* distribution collapses to a single, actual outcome. In other words, before the wave function collapses, there exists a seemingly infinite (or at least very large) number of outcomes that is consistent with that quantum-mechanical wave function. After the quantum-mechanical system is forced to “make a decision”, those seemingly infinite potentialities are eliminated. This is just like what happens to us when we have to choose among a number of possible courses of action.

Applied to the universe as a whole, this constitutes an infinitesimal reduction in the set of potential ways the universe can branch.

(C) The Telic Principle
One of the problems in cosmogony is the problem of the extreme improbability of finding a universe suitable for the evolution of intelligent life. One proposed answer is the Anthropic Principle. The Anthropic Principle assumes that an enormous number of parallel universes exist that differ minutely in the values of their various physical constants or initial conditions. Virtually all of these parallel universes are devoid of intelligent life, but we’re in one that is the incomparably rare exception. But we’re also in one of the only universes that harbors life that can ask such questions.

The CTMU posits instead a “Telic Principle”. The Telic Principle postulates that intelligence in our universe can retroactively influence quantum-mechanical outcomes at the myriad times that a wave-function collapses to a single outcome. This intelligence influences quantum-mechanical condensations in ways that are calculated to generate our universe. (Talk about “the fall of every sparrow”!) In other words, our universe, acting as its own architect, creates itself.

(D) Instant Communication of Quantum-Mechanical State Information
Once a quantum-mechanical wave function condenses to a single outcome (makes a decision), news of that outcome spreads at the speed of light. However, within that spreading sphere, quantum-mechanical wave functions can instantaneously exchange wave function information at great distances. "
http://www.iscid.org/boards/ubb-get_topic-f-6-t-000397-p-8.html

"The ranks of reality theorists include researchers from almost every scientific discipline. As the physical sciences have become more invested in a quantum mechanical view of reality, and as science in general has become more enamored of and dependent on computer simulation as an experimental tool, the traditional continuum model of classical physics has gradually lost ground to a new class of models to which the concepts of information and computation are essential. Called “discrete models”, they depict reality in terms of bits, quanta, quantum events, computational operations and other discrete, recursively-related units. Whereas continuum models are based on the notion of a continuum, a unified extensible whole with one or more distance parameters that can be infinitely subdivided in such a way that any two distinct points are separated by an infinite number of intermediate points, discrete models are distinguished by realistic acknowledgement of the fact that it is impossible to describe or define a change or separation in any way that does not involve a sudden finite jump in some parameter.

Unfortunately, the advantages of discrete models, which are receiving increasingly serious consideration from the scientific and philosophical communities, are outweighed by certain basic deficiencies. Not only do they exhibit scaling and nonlocality problems associated with their “display hardware”, but they are inadequate by themselves to generate the conceptual infrastructure required to explain the medium, device or array in which they evolve, or their initial states and state-transition programming. Moreover, they remain anchored in materialism, objectivism and Cartesian dualism, each of which has proven obstructive to the development of a comprehensive explanation of reality. Materialism arbitrarily excludes the possibility that reality has a meaningful nonmaterial aspect, objectivism arbitrarily excludes the possibility that reality has a meaningful subjective aspect, and although Cartesian dualism technically excludes neither, it arbitrarily denies that the mental and material, or subjective and objective, sides of reality share common substance.([5]In fact, they are identical up to an isomorphism beyond which the mental side, being the more comprehensive of the two, outstrips the material.)
...
The Principle of Attributive (Topological-Descriptive, State-Syntax) Duality

Where points belong to sets and lines are relations between points, a form of duality also holds between sets and relations or attributes, and thus between set theory and logic. Where sets contain their elements and attributes distributively describe their arguments, this implies a dual relationship between topological containment and descriptive attribution as modeled through Venn diagrams. Essentially, any containment relationship can be interpreted in two ways: in terms of position with respect to bounding lines or surfaces or hypersurfaces, as in point set topology and its geometric refinements (⊃T), or in terms of descriptive distribution relationships, as in the Venn-diagrammatic grammar of logical substitution (⊃D).
...
If determinacy corresponds to an arrow of causation pointing to an event from a surrounding medium, then indeterminacy corresponds to no arrow at all (acausality), and self-determinacy to a looping arrow or complex of arrows involving some kind of feedback. But cybernetic feedback, which involves information passed among controllers and regulated entities through a conductive or transmissive medium, is meaningless where such entities do not already exist, and where no sensory or actuative protocol has yet been provided. With respect to the origin of any self-determinative, perfectly self-contained system, the feedback is ontological in nature and therefore more than cybernetic. Accordingly, ontological feedback bears description as “precybernetic” or “metacybernetic”. Indeed, because of their particularly close relationship, the theories of information, computation and cybernetics are all in line for a convergent extension… an extension that can, in a reality-theoretic context, lay much of the groundwork for a convergent extension of all that is covered by their respective formalisms. ([7] Those who expect the field of cosmology to culminate in a grand reduction of the universe to pure information sometimes forget that this would merely transform a certain question, “what is the universe and where did it come from?”, to another no less vexing question, “what is information and where did it come from?”)

Ordinary feedback, describing the evolution of mechanical (and with somewhat less success, biological) systems, is cyclical or recursive. The system and its components repeatedly call on internal structures, routines and actuation mechanisms in order to acquire input, generate corresponding internal information, internally communicate and process this information, and evolve to appropriate states in light of input and programming. However, where the object is to describe the evolution of a system from a state in which there is no information or programming (information-processing syntax) at all, a new kind of feedback is required: telic feedback.

Diagram 2: The upper diagram illustrates ordinary cybernetic feedback between two information transducers exchanging and acting on information reflecting their internal states. The structure and behavior of each transducer conforms to a syntax, or set of structural and functional rules which determine how it behaves on a given input. To the extent that each transducer is either deterministic or nondeterministic (within the bounds of syntactic constraint), the system is either deterministic or “random up to determinacy”; there is no provision for self-causation below the systemic level. The lower diagram, which applies to coherent self-designing systems, illustrates a situation in which syntax and state are instead determined in tandem according to a generalized utility function assigning differential but intrinsically-scaled values to various possible syntax-state relationships. A combination of these two scenarios is partially illustrated in the upper diagram by the gray shadows within each transducer.
...
Syntactic Closure: The Metaphysical Autology Principle (MAP)

All relations, mappings and functions relevant to reality in a generalized effective sense, whether descriptive, definitive, compositional, attributive, nomological or interpretative, are generated, defined and parameterized within reality itself. In other words, reality comprises a “closed descriptive manifold” from which no essential predicate is omitted, and which thus contains no critical gap that leaves any essential aspect of structure unexplained. Any such gap would imply non-closure.

MAP, a theoretical refinement of the self-containment criterion set forth by the Reality Principle, extends the closure property of the definition of reality to the set of all real predicates. MAP effects closure on the definitive, descriptive, explanatory and interpretative levels of reality theory by making it take the form of a closed network of coupled definitions, descriptions, explanations and interpretations that refer to nothing external to reality itself. Another way to state this is that MAP, like the Reality Principle, requires that everything to which any reality-theoretic definition, description, explanation or interpretation refers be located within reality. This has the effect of making reality responsible for its own structure and evolution in the abstract and concrete senses.

MAP requires a closed-form explanation on the grounds that distinguishability is impossible without it. Again this comes down to the issue of syntactic stability.[33] To state it in as simple a way as possible, reality must ultimately possess a stable 2-valued object-level distinction between that which it is and that which it is not, maintaining the necessary informational boundaries between objects, attributes and events. The existence of closed informational boundaries within a system is ultimately possible only by virtue of systemic closure under dualistic (explanans-explanandum) composition, which is just how it is effected in sentential logic.
As an example of the tautological nature of MAP, consider a hypothetical external scale of distance or duration in terms of which the absolute size or duration of the universe or its contents can be defined. Due to the analytic self-containment of reality, the functions and definitions comprising its self-descriptive manifold refer only to each other; anything not implicated in its syntactic network is irrelevant to structure and internally unrecognizable, while anything which is relevant is already an implicit ingredient of the network and need not be imported from outside. This implies that if the proposed scale is relevant, then it is not really external to reality; in fact, reality already contains it as an implication of its intrinsic structure.

In other words, because reality is defined on the mutual relevance of its essential parts and aspects, external and irrelevant are synonymous; if something is external to reality, then it is not included in the syntax of reality and is thus internally unrecognizable. It follows that with respect to that level of reality defined on relevance and recognition, there is no such thing as a “real but external” scale, and thus that the universe is externally undefined with respect to all measures including overall size and duration. If an absolute scale were ever to be internally recognizable as an ontological necessity, then this would simply imply the existence of a deeper level of reality to which the scale is intrinsic and by which it is itself intrinsically explained as a relative function of other ingredients. Thus, if the need for an absolute scale were ever to become recognizable within reality – that is, recognizable to reality itself - it would by definition be relative in the sense that it could be defined and explained in terms of other ingredients of reality. In this sense, MAP is a “general principle of relativity”. ([34] It is tempting to note that “relativity” means roughly the same thing in the General Theory of Relativity as it does here. That is, using a distributed-syntactic “tangent space”, the structure of spacetime is tensorially defined in terms of the masses and relative positions of its own material contents, resulting in an intrinsic MAP-like definition of spacetime. Unfortunately, the basic formalism of GR, differential geometry, is not self contained with respect to time; as currently formulated, it tacitly relies on an embedding (conspansive) medium to provide it with temporal potential.)
...
Cosmology, humanity’s grand attempt to explain the origin and nature of the universe, has traditionally amounted to the search for a set of “ultimate laws” capable of explaining not only how the universe currently functions, but how it came to be. Unfortunately, even were such a set of laws to be found, the associated explanation could not be considered adequate until the laws themselves were explained, along with the fundamental objects and attributes on which they act. This gives rise to what seems like an imponderable question: how can a set of laws, objects and attributes be explained except by invoking another set of laws in the form of an explanatory syntax that would itself demand an explanation, and so on ad infinitum?

The answer is hiding in the question. Laws do not stand on their own, but must be defined with respect to the objects and attributes on which they act and which they accept as parameters. Similarly, objects and attributes do not stand on their own, but must be defined with respect to the rules of structure, organization and transformation that govern them. It follows that the active medium of cross-definition possesses logical primacy over laws and arguments alike, and is thus pre-informational and pre-nomological in nature…i.e., telic. Telesis, which can be characterized as “infocognitive potential”, is the primordial active medium from which laws and their arguments and parameters emerge by mutual refinement or telic recursion.

Because circular arguments are self-justifying and resistant to falsification, it is frequently assumed that tautology and circular reasoning are absolute theoretical evils. But this is far from the case, for logic and mathematics are almost completely based on circularity. Truth and logical tautology, recursion and iteration, algebraic and topological closure…all involve it to some degree. The problems arise only when circular reasoning is employed without the assurance of full mathematical generality, incorporating false claims of universality on (what may be) non-universal premises.

Unfortunately, not even valid tautologies are embraced by the prevailing school of scientific philosophy, falsificationism. While non-universal tautologies are rightly forbidden due to their resistance to falsificative procedures that would reveal their limitations, universal tautologies are pronounced “scientifically uninteresting” for much the same reason. But in fact, science could exist in no way, shape or form without them. The very possibility of a scientific observation is utterly dependent on the existence of tautological forms on which to base a stable, invariant syntax of perception. This raises the possibility that falsificationist thinking has accidentally obscured the true place of tautological reasoning in cosmology.

If the universe is really circular enough to support some form of “anthropic” argument, its circularity must be defined and built into its structure in a logical and therefore universal and necessary way. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure. Thus, the universe “selects itself” from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general.

The Telic Principle differs from anthropic principles in several important ways. First, it is accompanied by supporting principles and models which show that the universe possesses the necessary degree of circularity, particularly with respect to time. In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback.

Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility.

In keeping with its clear teleological import, the Telic Principle is not without what might be described as theological ramifications. For example, certain properties of the reflexive, self-contained language of reality – that it is syntactically self-distributed, self-reading, and coherently self-configuring and self-processing – respectively correspond to the traditional theological properties omnipresence, omniscience and omnipotence. While the kind of theology that this entails neither requires nor supports the intercession of any “supernatural” being external to the real universe itself, it does support the existence of a supraphysical being (the SCSPL global operator-designer) capable of bringing more to bear on localized physical contexts than meets the casual eye. And because the physical (directly observable) part of reality is logically inadequate to explain its own genesis, maintenance, evolution or consistency, it alone is incapable of properly containing the being in question.
...
In a Venn diagram, circles represent sets through their definitive attributes. The attributes represented by the circles are synetic (syntactically distributed and homogeneous with respect to potential differences of state), and the attribute represented by a particular circle is uniformly heritable by the elements of the set represented by any circle inside it. In the spatiotemporal Venn diagram just described, the circular lightcone cross sections correspond to objects and events relating in just this way. Because quantum-scale objects are seen to exist only when they are participating in observational events, including their “generalized observations” of each other, their worldlines are merely assumed to exist between events and are in fact syntactically retrodicted, along with the continuum, from the last events in which they are known to have participated. This makes it possible to omit specific worldlines entirely, replacing them with series of Venn diagrams in which circles inner-expand, interpenetrate and “collapse to points” at each interactive generalized-observational event. This scenario is general, applying even to macroscopic objects consisting of many particles of matter; the higher definition of the worldlines of macroscopic objects can be imputed to a higher frequency of collapse due to interactive density among their constituent particles.

The areas inside the circles correspond to event potentials, and where events are governed by the laws of physics, to potential instantiations of physical law or “nomological syntax”. Where each circle corresponds to two or more objects, it comprises object potentials as well. That is, the circular boundaries of the Venn circles can be construed as those of “potentialized” objects in the process of absorbing their spatiotemporal neighborhoods. Since the event potentials and object potentials coincide, potential instantiations of law can be said to reside “inside” the objects, and can thus be regarded as functions of their internal rules or “object syntaxes”. Objects thus become syntactic operators, and events become intersections of nomological syntax in the common value of an observable state parameter, position. The circle corresponding to the new event represents an attribute consisting of all associated nomological relationships appropriate to the nature of the interaction including conserved aggregates, and forms a pointwise (statewise) “syntactic covering” for all subsequent potentials.

Notice that in this scenario, spacetime evolves linguistically rather than geometrodynamically. Although each Venn circle seems to expand continuously, its content is unchanging; its associated attribute remains static pending subsequent events involving the objects that created it. Since nothing actually changes until a new event is “substituted” for the one previous, i.e. until a new circle appears within the old one by syntactic embedment, the circles are intrinsically undefined in duration and are thus intrinsically atemporal. Time arises strictly as an ordinal relationship among circles rather than within circles themselves. With respect to time-invariant elements of syntax active in any given state (circle), the distinction between zero and nonzero duration is intrinsically meaningless; such elements are heritable under substitution and become syntactic ingredients of subsequent states. Because each circle is structurally self-distributed, nothing need be transmitted from one part of it to another; locality constraints arise only with respect to additional invariants differentially activated within circles that represent subsequent states and break the hological symmetry of their antecedents. Conspansion thus affords a certain amount of relief from problems associated with so-called “quantum nonlocality”.

Because the shrinkage of an object within its prior image amounts to a form of logical substitution in which the object is Venn-diagrammatically “described” or determined by its former state, there is no way to distinguish between outward systemic expansion and inward substitution of content, or between the associated dynamical and logical “grammars”. This is merely a restatement of attributive duality; topological containment relations among point-sets are equivalent to descriptively predicating truth of statements asserting containment, and on distribution relationships among state-descriptors. In conjunction with the intrinsic symmetry of externally undefined systems, attributive duality eliminates any possible logical or geometric distinction between the outward expansion of a self-contained universe as its contents remain static in size, and a logical endomorphism in which the universe remains static while the states of its contents are recursively substituted for previous states.

It has already been noted in connection with MAP that where the external dimensions of a system are undefined, no distinction as to size can be made beyond the size ratio of the system to its contents. Consider a simple arithmetical analogy: 1/2 = 1000/2000 = 1(10^9999)/2(10^9999) = (…). Where the numerator and denominator of a fraction are both multiplied by a given number, the value of the fraction does not change; it is independent of distinctions involving the size of the multiplier. Similarly, the intrinsic proportionality of a self-contained system is independent of distinctions involving any external measure. This implies that with respect to a self-contained universe for which no external measure exists, no distinction can be made between the expansion of the system with respect to its contents, and the shrinkage of its contents with respect to it. In fact, because that which is undefined cannot change – there is nothing definite with respect to which change would be possible – apparent expansion of the container cannot be extrinsically defined, but implies a conspansively-equivalent intrinsic shrinkage of its contents.

Thus, conspansive duality relates two complementary views of the universe, one based on the external (relative) states of a set of objects, and one based on the internal structures and dynamics of objects considered as language processors. The former, which depicts the universe as it is usually understood in physics and cosmology, is called ERSU, short for Expanding Rubber Sheet Universe, while the latter is called USRE (ERSU spelled backwards), short for Universe as a Self-Representational Entity. Simplistically, ERSU is like a set, specifically a topological-geometric point set, while USRE is like a self-descriptive nomological language. Whereas ERSU expands relative to the invariant sizes of its contents, USRE “conspands”, holding the size of the universe invariant while allowing object sizes and time scales to shrink in mutual proportion, thus preserving general covariance.

This has certain interesting implications. First, whereas it is ordinarily assumed that the sizes of material objects remain fixed while that of the whole universe “ectomorphically” changes around them, conspansion holds the size of the universe changeless and endomorphically changes the sizes of objects. Because the universe now plays the role of invariant, there exists a global standard rate of inner expansion or mutual absorption among the contents of the universe (“c-invariance”), and due to syntactic covariance, objects must be resized or “requantized” with each new event according to a constant (time-independent) rescaling factor residing in global syntax. Second, because the rate of shrinkage is a constant function of a changing size ratio, the universe appears from an internal vantage to be accelerating in its “expansion”, leading to the conspansive dual of a positive cosmological constant.
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

"Question: If God does in fact continuously create reality on a global level such that all prior structure must be relativized and reconfigured, is there any room for free-will?

Answer: Yes, but we need to understand that free will is stratified. As a matter of ontological necessity, God, being ultimately identified with UBT, has "free will" on the teleological level...i.e., has a stratified choice function with many levels of coherence, up to the global level (which can take all lower levels as parameters). Because SCSPL local processing necessarily mirrors global processing - there is no other form which it can take - secondary telors also possess free will. In the CTMU, free will equates to self-determinacy, which characterizes a closed stratified grammar with syntactic and telic-recursive levels; SCSPL telors cumulatively bind the infocognitive potential of the ontic groundstate on these levels as it is progressively "exposed" at the constant distributed rate of conspansion."
http://ctmucommunity.org/wiki/God

"Accordingly, the universe as a whole must be treated as a static domain whose self and contents cannot “expand”, but only seem to expand because they are undergoing internal rescaling as a function of SCSPL grammar. The universe is not actually expanding in any absolute, externally-measurable sense; rather, its contents are shrinking relative to it, and to maintain local geometric and dynamical consistency, it appears to expand relative to them.

Already introduced as conspansion (contraction qua expansion), this process reduces physical change to a form of "grammatical substitution" in which the geometrodynamic state of a spatial relation is differentially expressed within an ambient cognitive image of its previous state. By running this scenario backwards and regressing through time, we eventually arrive at the source of geometrodynamic and quantum-theoretic reality: a primeval conspansive domain consisting of pure physical potential embodied in the self-distributed "infocognitive syntax" of the physical universe…i.e., the laws of physics, which in turn reside in the more general HCS.

Conspansion consists of two complementary processes, requantization and inner expansion. Requantization downsizes the content of Planck’s constant by applying a quantized scaling factor to successive layers of space corresponding to levels of distributed parallel computation. This inverse scaling factor 1/R is just the reciprocal of the cosmological scaling factor R, the ratio of the current apparent size d_n(U) of the expanding universe to its original (Higgs condensation) size d_0(U) = 1. Meanwhile, inner expansion outwardly distributes the images of past events at the speed of light within progressively-requantized layers. As layers are rescaled, the rate of inner expansion, and the speed and wavelength of light, change with respect to d_0(U) so that relationships among basic physical processes do not change…i.e., so as to effect nomological covariance. The thrust is to relativize space and time measurements so that spatial relations have different diameters and rates of diametric change from different spacetime vantages. This merely continues a long tradition in physics; just as Galileo relativized motion and Einstein relativized distances and durations to explain gravity, this is a relativization for conspansive “antigravity” (see Appendix B).

Conspansion is not just a physical operation, but a logical one as well. Because physical objects unambiguously maintain their identities and physical properties as spacetime evolves, spacetime must directly obey the rules of 2VL (2-valued logic distinguishing what is true from what is false). Spacetime evolution can thus be straightforwardly depicted by Venn diagrams in which the truth attribute, a high-order metapredicate of any physical predicate, corresponds to topological inclusion in a spatial domain corresponding to specific physical attributes. I.e., to be true, an effect must be not only logically but topologically contained by the cause; to inherit properties determined by an antecedent event, objects involved in consequent events must appear within its logical and spatiotemporal image. In short, logic equals spacetime topology.

This 2VL rule, which governs the relationship between the Space-Time-Object and Logico-Mathematical subsyntaxes of the HCS, follows from the dual relationship between set theory and semantics, whereby predicating membership in a set corresponds to attributing a property defined on or defining the set. The property is a “qualitative space” topologically containing that to which it is logically attributed. Since the laws of nature could not apply if the sets that contain their arguments and the properties that serve as their parameters were not mutually present at the place and time of application, and since QM blurs the point of application into a region of distributive spacetime potential, events governed by natural laws must occur within a region of spacetime over which their parameters are distributed."
http://www.megafoundation.org/CTMU/Articles/Supernova.html

"Substitute: ▸ noun: a person or thing that takes or can take the place of another"

"An isotelic vitamin or food factor would be, according to his definition, a factor that can replace another in a given diet or nutrient media, for some specified species, or under a given set of circumstances."
http://tinyurl.com/4cchgw3

Isotelic: Used of an object e.g. a muscle without an obvious difference in the shape of its ends. The established use is of things e.g. food factors, that can replace one another.
http://tinyurl.com/47r8cy5

"In connexion with colour, particular attention is paid to "isotely" or the attainment of a similar result in different ways."
http://tinyurl.com/4nkzkte

Isotely: parallel development or convergence
http://tinyurl.com/4fff6va
http://tinyurl.com/4pznlar
http://tinyurl.com/4wh9xw2

Isotely: hitting the same mark
http://tinyurl.com/4bcecml

"Slinging Arrows
...
Briefly put, I hypothesise that rather than isomorphic, internal representations are "iso-telic", simulating the purpose of the system, the end state to be achieved and the actions required to that end."
http://tinyurl.com/4okw87y

Homoplastic (Syn. Isotelic)

Convergence, i.e. not inherited, correspondence between parts.
http://tinyurl.com/4vjqoow

"qualitatively different communication mechanisms may have evolved which serve the same general function -- this is the principle of isotelic mechanisms, of which many examples appear in the biological sciences."
http://tinyurl.com/4kg4xg4

"The sort of flexibility in determining behavior that a simulating, imaginative brain gives an animal, said Dawkins, makes for a double-edged sword. With flexibility comes the ability to subvert the adaptive archi-purpose underlying the brain’s functionality. Why do humans seek hedonistic pleasure? Why do humans not work more diligently at propagating their genes? It is the nature of brain flexibility that makes subversion of that archi-purpose possible, such that re-programming of the brain can happen.

But we know, said Dawkins, that there is both flexibility and inflexibility involved. The flexibility to set a new goal, a neo-purpose, can be coupled with the inflexible drive to pursue that goal that was originally part of the adaptive archi-purpose program. The new, neo-purpose, goal can be pursued over long periods, even a lifetime, in service of religious, military, or political ideas. There can be continued flexibility in setting up sub-goals, and sub-sub-goals, in service of the inflexibly held neo-purpose goal. It is important to understand this hierarchy of goal-seeking, and the capacity to set short-term goals in service of long-term goals. Humans provide the most obvious examples of subversion of goals. Humans bred sheepdogs for herding sheep, yet what one sees in the herding behavior is an altered or reprogrammed version of the stalking behavior of wolves. To take a fictional example, Dawkins used the movie, “The Bridge Over the River Kwai” and the character of Colonel Nicholson, whose obsession with proving the industry and ingenuity of the Western mindset subverted the goal of firmly opposing the war efforts of the enemy.

Dawkins named a number of archi-purposes that provided “subversion fodder”: hunger, sex, parental care, kinship, filial obedience, and others. We evolved under conditions where sugar and fat marked high-quality food sources, and poor food availability meant we tended to eat obsessively when food sources were available. But today for western culture, food is always available, and we do damage to our teeth and our health via over-indulgence. For the subversion of sex, Dawkins showed a photograph of a moose mounting a statue of a bison. Once the audience had gotten a laugh out of that, the next slide showed a scantily clad human female model, and Dawkins said, “At least the bison statue was in 3D.” Contraception forms a subversion of the archi-purpose of sex. Notes Steven Pinker’s quote, “My genes can go jump in the lake.” We adopt kittens and puppies.

Filial obedience is subverted as in “God the father”, and the elevation of other father-figures. Subversion of kinship occurs, too, we are keenly aware of our kin relations, and may be said to be obsessed with kinship. This subversion occurs via fictive kin, subverting kinship loyalty. In-group loyalty and out-group hostility utilize fictive kin to cement those new allegiances. Religion consistently uses fictive kin rhetoric. Entire nations can be viewed as using fictive kin relationships. This is especially dangerous when an implacable faith is involved, as the 9/11 terrorists demonstrated.

But there is a good side to the subversion of purpose. It can be exhilarating. Our species is likely young in its liberation from the strictures of archi-purposes. The steps from the invention of the wheel to that of airliners and space shuttles have proceeded rapidly. Cultural evolution is speedy, whether we pursue things beneficial or the sub-goals of war. But the flexibility we have gives us grounds for hope."
http://austringer.net/wp/index.php/2009/03/09/richard-dawkins-and-the-purpose-of-purpose/

4.3 'Recursive' teleology

"By allowing tools to be conceived of as desirable goals, 'inverse' teleology opened up the way to attaching functions to tools that did not directly involve outcomes that enhance the adaptive fitness of the individual (such as food). This, in turn, made it possible to apply teleological reasoning in a 'recursive' manner by looking at objects as potential tools to create desired tools as their goals. Therefore, the emerging ability to conceive tools with useful functions as themselves being the goal objects of desire led to the capacity of thinking in terms of chains or hierarchies of means and ends. The archaeological record, in fact, provides evidence of the early use of tools to make other tools, manifesting the presence of 'recursive' teleological competence early on in hominid history."
http://web.ceu.hu/phil/gergely/papers/2006_csibra_gergely_Soc_Learn_Soc_Cogn.pdf

"4. 1. Teleology ‘ungrounded’: Differences between human versus primate understanding
of instrumental agency

As argued earlier, sensitivity to efficiency of goal-directed actions seems present in nonhuman primates (Buttleman et al., 2007, 2008; M. Rochat et al., 2008; Wood et al., 2007; Uller, 2004). Primates also exhibit some level of teleo-functional understanding of means-end relations and physical affordance properties of objects that they use as temporary ‘tools’. For example, from objects (such as stones or sticks) lying around a locally visible goal, they can choose as their ‘tool’ the physically most affordant one to achieve the goal. Apes can also make simple functional modifications in the physical properties of such objects to make them more affordant when guided by the visible properties of a locally present goal (Boesch and Boesch, 1993; Goodall 1986; Tomasello and Call 1997). However, after having used the object to attain their goal, they tend to simply discard it. This can be considered evidence for a goal-activated, situationally restricted and temporally fleeting 'teleological mode' of object construal.

This ‘primate teleology’, however, is arguably severely restricted in at least three important respects when compared to the teleological action interpretation system of human infants (Gergely & Csibra, 2003) or, for that matter, to the teleo-functional reasoning skills and representational abilities of our hominid ancestors that we can reconstruct from the archeological evidence on hominid tool use and tool manufacturing practices (Mithen, 1996, 2002; McGrew, 1996, 2004; Semaw, 2000; Schick & Toth, 1993; Gergely & Csibra, 2006). First, teleological inferencing is unidirectional in primates: it is restricted to goal-to-means inferences that are guided by the visual properties of a locally present goal. Second, primate teleology has restricted input conditions: it is activated only in the visual presence of specific goals. Third, the representational contents of goals are likely to be anchored to specific content-domains such as food, mating partners, predator avoidance, etc.

In contrast, the archeological record implies that our hominid ancestors already possessed a teleo-functional reasoning system that was inferentially multi-directional allowing not only for goal-triggered local and transient inferences from goal-to-means, but also for applying the teleological stance in the absence of visible goals to examine and contemplate physical objects as potential tools. This required the “inverse” direction of teleological reasoning: making inferences from means-to-goal. The “inverse” inferential use of teleology entails that a remarkable cognitive transformation must have taken place in the implementation of our inherited teleological system during our hominid evolutionary past. This has “ungrounded” the ancient teleological system we shared with our primate ancestors, which had been evolutionarily anchored as a goal-triggered unidirectional inferential mechanism. In other words, the resulting systematization of teleological inferencing must have involved a cognitive “reversal of teleological perspective” that allowed our hominid ancestors to create new tools in the absence of locally visible goals and to represent such tools as having permanent functions (Csibra & Gergely, 2006).

As evidenced in the archaeological record, this led to the enduring teleo-functional conceptualization of objects as tools, which was manifested in routine behaviors such as keeping tools instead of discarding them after use, storing them at specific locations, or prefabricating the tools at one location and carrying them for long distances for their functional application at a different place. This apparently human-specific systematization of teleological reasoning allowed hominids to take an active teleofunctional stance towards novel physical objects and materials found at new and distant locations, whose sight could activate the question: “What purpose could I use this object for?” (Csibra & Gergely, 2007). This “inverse” teleological inferencing required imaginatively ‘conjuring up’ memory representations of potential goals in relation to which the physical-causal properties of the object could be mentally evaluated in terms of the efficiency principle and represented as affordance properties of a potential tool.

The cognitive reversal of teleological perspective also provided the representational and inferential preconditions for the emergence of “recursive” teleology (Csibra & Gergely, 2006) in hominid artefact culture that involved using tools to make other tools without the visible presence of local goals (Schick & Toth, 1993). For this to come about a further significant cognitive transformation of teleology was necessary: the representation of the concept of goals had to become “ungrounded” from the domain specific content constraints that the restricted range of goals representable by primate teleology had been evolutionarily anchored to. In other words, to conceptualize the hierarchical structure of complex chains of means-end skills that “recursive” tool use requires, and to flexibly generate and represent a variety of sub-goal-final-goal structural sequences, goal representations had to become abstract and content free. This allowed for the specific content represented as a goal state to be identified solely as a function of the successful application of the efficiency requirements of the teleological principle of rational action."
http://web.ceu.hu/phil/gergely/papers/Gergely_KindofAgents.pdf

"To summarize, their conjecture is that early hominids succeeded in inverting nonhuman primates teleological perspective, and began to reason about the goals that a particular object could facilitate (including goals that were not currently applicable) rather than only about the object needed to accomplish a particular, immediate goal. This inverse teleology allowed early hominids to stably represent tools as such -- as objects with a fixed functional purpose. The stable representation of tools in turn enabled recursive teleology, or the ability to conceptualize objects as tools for making other tools. It is this recursive teleology, and the immense space of possible tools that it opens, that is argued to be the key feature differentiating our own tool use from that of other primates."
http://tinyurl.com/mirrorneuronsystems

RECURSIVE AND TELEOLOGICAL RATIONALITY INVOLVED IN THE MODELING PROCESS OF SELF-ORGANIZING SOCIO-ECONOMIC SYSTEMS

The modeling of socio-economic systems, seen as a cognitive process, involves some forms of procedural reasoning which, in practice, differ from the familiar linear, deductive or substantive, syllogistic formal reasoning processes. It needs some forms of recursive reasoning, where the operator is transformed by the result of its previous operations, and some forms of teleological reasoning, where the choice of the means to reach an end transforms this end over time, and where this new end, once it becomes explicit, frequently suggests new means: "Searching is the end", says H. A. Simon ("Reason in Human Affairs", 1983).

Since E. Kant ("the Third Critics", 1793), we are accustomed to understand and to interpret those forms of recursive and teleological rationality in terms of "appropriate deliberation", but during a long time, economists as many scientists, have not seen them as "correct methods" for the production of "scientific propositions" in order to work on the modeling and the reasoning processes of socio economic systems. If we consider to day that a "well founded argumentation" is at least as epistemologically correct as a "formal demonstration" (and perhaps more), we can identify some forms of rhetorical and dialectical reasoning, which help us to design and to interpret, in reflexive and teleological terms, the evolving and self - organizing socio economics systems in which we are acting.
**********************
"…Each step of implementation created a new situation; and the new situation provided a starting point for fresh activity"
H. A. Simon

Semantic Service Substitution in Pervasive Environments

A computing infrastructure where "everything is a service" offers many new system and application possibilities. Among the main challenges, however, is the issue of service substitution for the application execution in such heterogeneous environments.
http://tinyurl.com/4cs5p8m

Logic Programming, Substitution Semantics:
http://www.cs.cmu.edu/~fp/courses/lp/lectures/16-subst.pdf
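Since the link is just a pointer, here is a minimal sketch of substitution over first-order terms (my own encoding, not the notation of the lecture notes): a substitution is a finite map from variables to terms, applied recursively and composed so that applying the composition equals applying the parts in sequence.

```python
# Sketch: substitutions over first-order terms.  Variables and constants are
# strings; compound terms are (functor, args) tuples; a substitution is a dict.

def apply_subst(term, subst):
    """Apply a substitution to a term, recursing through compound terms."""
    if isinstance(term, str):                       # a variable or constant name
        return subst.get(term, term)
    functor, args = term
    return (functor, [apply_subst(a, subst) for a in args])

def compose(s1, s2):
    """Applying compose(s1, s2) equals applying s1 first, then s2."""
    out = {v: apply_subst(t, s2) for v, t in s1.items()}
    out.update({v: t for v, t in s2.items() if v not in out})
    return out

t = ("f", ["X", ("g", ["Y"])])
s = {"X": ("a", []), "Y": "Z"}
print(apply_subst(t, s))                        # ('f', [('a', []), ('g', ['Z'])])
print(compose({"X": "Y"}, {"Y": ("b", [])}))    # {'X': ('b', []), 'Y': ('b', [])}
```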

"Means Substitution

A characteristic of goal pursuit common to all types of goals is the possibility of being thwarted on the way to the goal. Whether caused by an unforeseen hindrance or simply a bad strategy, we are often forced to give up on certain means and consider other options instead. This is the fundamental problem of substitution commented on by classical motivational theorists such as Freud (1923/1961) and Lewin (1935). Both choice and substitution relate to equifinality, but emphasize different aspects of this configuration: the problem of choice depends on how the activities differ, so that a rational selection among them would be possible. By contrast, substitution revolves around how the activities are the same, so that one can be replaced by another.

Lewin (1935) noted that the substitution of one activity for another was possible only if both arose from the same goal tension system. That is, both must ease the tension caused by the need underlying goal pursuit. According to Lewin, the substitute value of one task for another, then, depends on their dynamic connection within this system. From our structural perspective, the choice of replacement means depends on its common association with goal attainment. Despite differences in content, an activity is considered a substitute to the extent that it is perceived as a means to the same focal goal."
http://tinyurl.com/4heauzb

"The non-substitutability criteria becomes even less operational when it comes to judging what type of processes and what functional areas can substitute for each other. For instance, whether radical changes in the marketing methods or product design could be a substitute for some components of the HRM system, or for that matter whether or not potential substitutes to a resource may originate in companies with which the firm has at least a remote connection (Simon 1993:135-6). Detecting non-substitutability simply demands excessive ex ante cognitive abilities. As Eisenhardt and Martin (2001) put it "equifinality, renders inimitability and immobility irrelevant to sustained advantage” (p. 1110). It is clear that, from this point of view, firms operate with a more modest definition of sustained competitive advantage than indicated by Barney (1991). Given bounded rationality it is difficult to imagine that firms are able to identify “any current and potential competitors” and to detect whether “efforts [by other companies] to duplicate that advantage have ceased” (Barney 1991:102, emphasis added). In fact, the very same causal ambiguity that hinders competitors’ knowledge of the sources of competitive advantage also surrounds the ability of the leading firm to have a comprehensive understanding of the counter-moves taken by competitors. In the end, even if facts could be firmly established and the cessation of imitation processes could be declared, given equifinality, this finding is uninteresting. In this case, the relevant question would be why did efforts for duplication cease."
http://mercatus.org/uploadedFiles/Mercatus/Publications/Human%20Resources.pdf

"Contemporary innovation policies focus on the flows of knowledge, (which may or may not be path dependent and equifinal) but we need more knowledge on how innovation policy has actually be designed and implemented and the forces that govern these activities (Edquist, 2001).
http://www2.druid.dk/conferences/viewpaper.php?id=501897&cf=43

Equifinality: Functional Equivalence in Organization Design
http://www.jstor.org/pss/259328

On Substitution Between the Goals of Working Groups:
http://www.jstor.org/pss/587584
...
Environmental impact minimisation through material substitution: A multi-objective optimisation approach

"This paper presents an approach for identifying improved business and environmental performance of a process design. The design task is formulated as a multi-objective optimisation problem where consideration is given to not only the traditional economic criteria, but also to the multiple environmental concerns. Finally, using plant-wide substitution of alternative materials as an example, it is shown how potential stepchange improvements in both life cycle environmental impact and process economics can be achieved."
http://dx.doi.org/10.1016/S1570-7946(03)80195-5
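A toy sketch of the scalarization idea behind such a multi-objective formulation (all names and numbers below are made up, not from the paper): candidate substitute materials are scored on economic cost and life-cycle impact, and sweeping the weight between the two objectives shifts which substitution wins.

```python
# Toy sketch (made-up data): weighted-sum scalarization of two objectives,
# economic cost and life-cycle environmental impact, for candidate substitutes.

candidates = {              # material: (cost per tonne, impact score)
    "solvent_A": (100.0, 9.0),
    "solvent_B": (140.0, 4.0),
    "solvent_C": (220.0, 2.5),
}

def best(weight):
    """Minimize weight * (cost / 100) + (1 - weight) * impact (crude scaling)."""
    return min(candidates, key=lambda m: weight * candidates[m][0] / 100.0
                                         + (1 - weight) * candidates[m][1])

for w in (0.1, 0.5, 0.9):
    print(w, best(w))       # the preferred substitute shifts as the weight shifts
```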

Multi-Criteria Decision Making in Strategic Reservoir Planning Using the Analytic Hierarchy Process
...
"Uncertainties in Strategic Planning

Strategic planning is necessary for the proper development and management of any oil producing asset. However, there are many sources of uncertainty in any planning exercise. Uncertainties in input parameters, such as economic factors and production potentials, lead to uncertainties in predicted results. There are well documented means of addressing these uncertainties.

We do not often give cognizance to the uncertainties in our planning parameters themselves, however, or test our strategic plans given different strategic objectives. It is not always clear what the appropriate objective for our strategic planning is. The executives of publicly traded companies will say it is to maximize NPV (net present value), or a similar metric. However, even publicly traded companies are subject to demands that may not be congruent with maximizing NPV: health, safety and environmental objectives, community improvement, security of management tenure, employee job security, investor interests, and other objectives all have their constituencies. A system that provides a conceptually consistent means of weighting the set of appropriate objectives for any development would allow the explicit incorporation of the potentially disparate objectives into the ranking of projects. In addition, if we are uncertain as to what the proper strategic objective is, or if the objective changes due to changing circumstances, it would be beneficial to have a system that allows easy substitution of objectives for reevaluation. We could then determine how robust our plan is."
http://www.onepetro.org/mslib/servlet/onepetropreview?id=00071413&soc=SPE
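A brief sketch of the core AHP step the passage leans on (the pairwise judgments below are invented): weights for competing strategic objectives are recovered as the principal eigenvector of a reciprocal comparison matrix, so objectives can be swapped in or re-weighted and the ranking re-evaluated cheaply.

```python
# Sketch of the basic Analytic Hierarchy Process step (made-up judgments):
# derive objective weights from a reciprocal pairwise-comparison matrix
# via its principal eigenvector, computed here by power iteration.

import numpy as np

objectives = ["NPV", "HSE", "community improvement", "job security"]
# A[i, j] = how much more important objective i is than objective j (1-9 scale).
A = np.array([[1.0, 3.0, 5.0, 4.0],
              [1/3, 1.0, 3.0, 2.0],
              [1/5, 1/3, 1.0, 1.0],
              [1/4, 1/2, 1.0, 1.0]])

def ahp_weights(matrix, iters=100):
    """Principal eigenvector by power iteration, normalized to sum to 1."""
    w = np.ones(matrix.shape[0])
    for _ in range(iters):
        w = matrix @ w
        w = w / w.sum()
    return w

for name, weight in zip(objectives, ahp_weights(A)):
    print(f"{name}: {weight:.3f}")
```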

How many roads lead to Rome? Equifinality set-size and commitment to goals and means

Four studies examined the relation between the number of equifinal means to a goal, actors' commitment to that goal, and their commitment to the means. In Study 1, participants freely generated varying number of means to two of their work goals. In Study 2, they generated social means to their goals (people they viewed as helpful to goal attainment). In Studies 3 and 4, the number of means to participants' goals was experimentally manipulated. All four studies found that means commitment is negatively related, whereas goal commitment is positively related, to means number. Consistent support was also obtained for the notion that the relation between means number and goal commitment is mediated by the expectancy of goal attainment, and by goal importance. Conceptual and practical implications of the findings were considered that link together the notions of substitutability and dependency within a goal systemic framework.
http://onlinelibrary.wiley.com/doi/10.1002/ejsp.780/full

"A cross-functional team is a group of people with different functional expertise working toward a common goal. It may include people from finance, marketing, operations, and human resources departments. Typically, it includes employees from all levels of an organization. Members may also come from outside an organization."
http://en.wikipedia.org/wiki/Cross-functional_team

‎"To meet the requirements of emerging electronic commerce applications there is a need to integrate aspects from Workflow Management, World Wide Web, Transaction Management, and Warehousing technology. We have designed and implemented Worl...dFlow for this purpose. It is a system for building transactional workflows that access Web based and non-Web based information sources. This paper introduces the WorldFlow system and highlights some of the issues that arise in implementing it. It also presents the architecture of WorldFlow and briefly discusses its implementation."
http://www-ccs.cs.umass.edu/db/publications/hpts97.html

BLENDING DECISION AND DESIGN SCIENCE WITH INFORMATION SYSTEMS DESIGN AS A MEANS TO OPTIMIZING BUSINESS DECISIONS

"While information systems are bridges that connect functional areas of an organization, the underlying purpose and value in that connectivity is to improve decision making on all levels of the organization. It stands to reason that if organizational decision makers were afforded the use of more intelligent information systems, that decision making would improve, and thrust the organization into a position of competitive advantage. Based on analyses conducted in 2004, in which investigators blended similar disciplines in order to build a conceptual framework and guidelines to ensure research quality and rigor, this proposition for further research purports an integration of these same disciplines to be used in the design of organizational information systems. The premise is that a more intelligent design could offer a higher level of pragmatic effectiveness in the system that is implemented [18].

Intelligence in design, here is twofold; one being a more naturalistic(the way decisions are actually made as opposed to how they should be made) presentation of information offered; and second, that the presentation is in a higher level of context (less information but more closely related to the actual decision at hand) [19, 14].

Keywords: Information Systems (IS), Decision Making, Design Science and Decision Science"
http://www.iacis.org/iis/2009_iis/pdf/P2009_1317.pdf

"COORDINATION, COMMUNICATION, MEANING

Organizations exist for performing tasks that exceed individual capacities (Penrose, 1959; Kogut and Zander, 1996) or can be performed more efficiently by involving multiple persons (Smith, 1793). This performance materializes in actions that are achieved in a concerted fashion, i.e., they are coordinated (Thompson, 1967; Clark, 1996).

A question that is raised time and again is how orderly action emerges. Tsoukas (1996: 90) writes at the end of his article on the firm as a distributed knowledge system: “What needs to be explained is … how … in a distributed knowledge system coherent action emerges over time”. Thompson (1967) defines coordination as the process of achieving concerted action. Contingency theorists have contributed to this field of inquiry with their introduction of coordination mechanisms with varying information processing capacity (hierarchy and plans versus mutual adjustment) (Galbraith, 1973; Van de Ven et al., 1976). In the 1990s, the MIT Center for Coordination Science expanded these ideas to include various forms of dependencies (Malone and Crowston, 1994), and suggestions for designing organizational procedures (Crowston, 1997; Malone et al., 1999).

Hutchins (1990: 209) notes that “the global structure of the task performance will emerge from the local interactions of the members.” Donnellon, Gray and Bougon (1986) prefer the term equifinal meaning over shared meaning, because research suggests that even though people do not ‘share’ meaning they can coordinate their actions. Equifinal implies a more modest notion. It communicates the idea that minimal overlap suffices. Meaning at the individual level may be very different, but sufficiently similar for coordination (Donnellon et al., 1986; Weick, 1995).

In Cramton’s (2001) study on international teams, she refers to Blakar’s research on communication (Blakar, 1973; Hultberg et al., 1980; Blakar, 1984) and his concept of ‘shared social reality’. The following description clarifies the idea of (here: the absence of) shared social reality in Blakar et al.’s research setup: ‘In the studies, pairs of family members are given maps of a city. One subject’s map contains arrows that mark a route through the city. This subject is told to describe the route to his or her partner so that the partner can follow the route on his or her own map. Unbeknownst to the subjects, their maps differ in key respects, making it impossible for them to carry out the task successfully’ (Cramton, 2001). According to Polanyi (1975: 208), people must mutually adjust to ‘move(s) to higher and higher levels of meaning’ (Polanyi, 1975).
http://www.gredeg.cnrs.fr/routines/workshop/papers/Fenema.pdf

"Equifinal Meaning

There are two primary schools of thought around how teams interpret ambiguous information, make sense of it, and translate it into action. The first school argues that organizational action is the result of consensus and shared meaning among the actors (Pfeffer & Lammerding, 1981; Smircich & Morgan, 1982). The second school argues that consensus is not essential and organizational actors need to share “just enough” to keep the economic exchange (i.e., work for pay) moving forward (Weick, 1979: 91). Donnellon, et al. (1986) integrate these two views and suggest that, with effective communication, organized action can occur even when there are differences of interpretation among organizational members. Donnellon, et al. (1986) draw on the work of Beer (1959) and introduce a term called “equifinal meaning” that allows for organizational action to move forward despite differences in interpretation:

Equifinal meanings, then, are interpretations that are dissimilar but that have similar behavioral implications. When organized action follows the expression of such dissimilar interpretations, we refer to these interpretations as equifinal meanings. That is, organization members may have different reasons for undertaking the action and different interpretations of the action’s potential outcomes, but they nonetheless act in an organized manner (p. 44).

They reference an earlier study (Gray, Bougon, & Donnellon, 1985) and make an additional important point that equifinal meanings are “precarious” and “may not survive post-action sensemaking”:

Unlike the implication of the shared-meaning perspective that organizations are based on a relatively stable set of widely shared interpretations, our perspective recognizes that the equifinal meanings developed in the process of communicating about organized action are precarious and may in fact not survive post-action sense making. Our data suggest that meaning and actions are related in a complex iterative process in which meanings are continually constructed and destroyed as more sense-making communication occurs and new actions are taken (p. 53).

This brief discussion hints at the fact that there are more complex, iterative processes at work during the interpretation phase than is suggested by the Thomas, et al. (1993) model.

The next section introduces one additional construct around information arbitrage that could be helpful in examining the dynamics of the interpretation process."
http://weatherhead.case.edu/academics/doctorate/management/research/files/year2/Seshadri%20Qualitative%20Paper%20Formatted%202-1-10.pdf

"The concept of goal-directed action, often referred to as agency, seems to derive from two kinds of observation. One is of direct paths, wherein an object starts by itself and goes straight to an object or location rather than taking an (unnecessarily) indirect route. It is possible that DIRECT PATH is an output of PMA, but the limited evidence we have (Csibra et al., 2003) suggests sensitivity to this kind of goal path only by 12 months. The other kind of observation is of repeated paths ending at the same place, involving equifinal variation). This term refers to changes in the shape of a path (or action at the end of a path) as a function of the specifics of the physical situation, such as a barrier that blocks motion and is jumped over or gone around (Biro & Leslie, 2007; Csibra, 2008). Responsivity to equifinal variation is likely at first to be limited to paths rather than actions, which require more detailed analysis. This kind of equifinal variation can be characterized as a goal related version of LINKED PATHS, that is, paths that vary contingently upon things between their beginning and end points."
http://www.cogsci.ucsd.edu/~coulson/spaces/SpatialFoundationsInPress.pdf
http://www.reference-global.com/doi/abs/10.1515/LANGCOG.2010.002

"There is consequently a race between different types or categories of organizations, who confront choices between imitation and innovation. When faced with a key performance imperative that challenges their survival, firms strive to outrun one another on that performance dimension, ultimately resulting in ‘equifinality’. Defined by Katz and Kahn (1978), equifinality occurs when a system arrives at the same final state despite different initial conditions and the variety of subsequent paths. Given our interest in configurations of conditions that predict performance, we define equifinality around the idea that different parameter sets are similar (i.e. equifinal) in producing acceptably good model predictions (see Beven, 2006, for a discussion in the context of Bayesian estimations)."

Viewed simplistically, the Katz and Kahn definition of equifinality presumes that any initial configuration and any path sequence leads to a single equifinal outcome. From this perspective, any adaptive system will follow a linear hill-climbing algorithm to choose ever-‘better’ practices that lead incrementally and additively to better performance and eventually to a best performance. For example, a deterministic process for organizations that consists of five independent parts (e.g. piecework pay, hierarchy, etc.) would climb monotonically a gradient to the global optimal configuration. If the configuration were not optimal, the organization could change a given practice, accept the change if performance improved, and reject it if it did not. By serially testing each of the five practices, the organization would arrive at the optimum. Since it would not matter which practice needs to be tested first, there is no path dependence.

Such first-order adaptation might be effective in loosely coupled modular structures that are amenable to effective analysis and modifications (Ethiraj and Levinthal, 2004). However, even if modular, organizations are far more likely to consist of complex interactions among organizational parts, whose performance will be difficult to predict without complete knowledge of the system. In this case, isomorphic pressures to adopt one-best-way may be detrimental to lagging firms, some of whom will instead choose to innovate in reference to their existing practices. Hence we can anticipate that different prototypical configurations will be discovered and some may be equifinal, consistent with contingency theory predictions that firms will choose different, but functionally equivalent solutions when faced with environmental demands (Gresov and Drazin, 1997)."
http://mitsloan.mit.edu/iwer/pdf/John_Paul_MacDuffie-IWER_paper_20100928.pdf
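The "serially testing each of the five practices" picture reads almost like pseudocode. Here is a toy Python version with hypothetical payoffs (nothing here is from the paper) showing that, when the five practices contribute independently and additively, every testing order from any starting configuration climbs to the same optimum, i.e. the simplistic equifinality the authors describe before complicating it with interactions.

# Minimal sketch (hypothetical payoffs) of the first-order adaptation described
# above: five independent practices, each tested serially and kept only if the
# change improves performance. With additive, independent contributions every
# testing order climbs to the same optimum -- a toy picture of equifinality.
import itertools
import random

CONTRIBUTION = [3.0, -1.0, 2.5, 0.5, -2.0]   # payoff of adopting each practice

def performance(config):
    """Additive performance of a 0/1 configuration of the five practices."""
    return sum(c * bit for c, bit in zip(CONTRIBUTION, config))

def serial_hill_climb(config, order):
    """Flip each practice once, in the given order, keeping improvements."""
    config = list(config)
    for i in order:
        trial = config.copy()
        trial[i] = 1 - trial[i]
        if performance(trial) > performance(config):
            config = trial
    return tuple(config)

start = tuple(random.randint(0, 1) for _ in range(5))
outcomes = {serial_hill_climb(start, order)
            for order in itertools.permutations(range(5))}
print(outcomes)   # a single configuration: the testing order does not matter here

With non-additive, interacting payoffs the same loop can stall on different local peaks depending on the start and the order, which is the complex-interaction case the passage contrasts with this one.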

"Professor Laughlin explores the inherent conflict between the government's efforts to support and protect the commercialization of Intellectual Property and the scientific researcher's need for free access to information in order to expand our knowledge in critical areas. Will ignorance be the price we inadvertently pay for safety and commerce?"
http://itc.conversationsnetwork.org/shows/detail3963.html

"A resource has low substitutability if there are few, if any, strategically equivalent resources that are, themselves, rare and inimitable (Amit and Schoemaker 1993; Black and Boal 1994; Collis and Montgomery 1995). Firms may find, for example, that excellence in IS product development, systems integration, or environmental
scanning may be achieved through a number of equifinal paths."
http://skip.ssb.yorku.ca/SSB-Extra/Faculty.nsf/01e63d7ae85fd0af85257355004e88d2/70f848f125110db085256bc1004ae0b6/$FILE/ATTT8FQB/mwade%20MISQ.pdf

‎"Unification, or solving equations, is a fundamental process in many areas of computer science. It is in the heart of the applications such as logic programming, automated reasoning, natural language processing, information retrieval, rewriting and completion, type checking, etc.

The course on Unification Theory is intended to be an introductory course covering the following topics: syntactic unification, Robinson's algorithm, improved algorithms for syntactic unification (space efficient, quadratic, almost linear), unification in equational theories, higher-order unification, matching, applications of unification."
http://www.risc.jku.at/education/courses/ss2008/unification/

"In most applications of unification, one is not just interested in the decision problem for unification, which simply asks for a "yes" or "no" answer to the question of whether two terms s and t are unifiable. If they are unifiable, one would like to construct a solution, i.e., a substitution that identifies s and t. Such a substitution is called a unifier of s and t. In general, a unification problem may have infinitely many solutions; e.g., f(x; y) and f(y; x) can be unified by replacing x and y by the same term s (and there are infinitely many terms available). Fortunately, the applications of unification in automated deduction do not require the computation of all unifiers. It is sufficient to consider the so-called most general unifier, i.e., a unifier such that every other unifier can be obtained by instantiation." http://www.cs.bu.edu/~snyder/publications/UnifChapter.pdf

"In mathematical logic, a Horn clause is a clause (a disjunction of literals) with at most one positive literal. They are named after the logician Alfred Horn, who first pointed out the significance of such clauses in 1951. Horn clauses pl...ay a basic role in logic programming and are important for constructive logic.

A Horn clause with exactly one positive literal is a definite clause; in universal algebra definite clauses appear as quasiidentities. A Horn clause with no positive literals is sometimes called a goal clause or query clause, especially in logic programming. A dual-Horn clause is a clause with at most one negative literal. A Horn formula is a string of existential or universal quantifiers followed by a conjunction of Horn clauses; that is, a prenex normal form formula whose matrix is in conjunctive normal form with all clauses Horn."
http://en.wikipedia.org/wiki/Horn_clause
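As a concrete illustration (standard textbook forms, not taken from the article), the three clause shapes in LaTeX notation:

\[ \neg p_1 \vee \neg p_2 \vee \cdots \vee \neg p_n \vee q \;\equiv\; (p_1 \wedge p_2 \wedge \cdots \wedge p_n) \rightarrow q \quad \text{(definite clause)} \]
\[ \neg p_1 \vee \neg p_2 \vee \cdots \vee \neg p_n \quad \text{(goal or query clause: no positive literal)} \]
\[ q \quad \text{(fact: one positive literal, no negative literals)} \]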

‎"Substitutability is a principle in object-oriented programming. It states that, if S is a subtype of T, then objects of type T in a computer program may be replaced with objects of type S (i.e., objects of type S may be substituted for objects of type T), without altering any of the desirable properties of that program (correctness, task performed, etc.). More formally, the Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address entitled Data abstraction and hierarchy. It is a semantic rather than merely syntactic relation because it intends to guarantee semantic interoperability of types in a hierarchy, object types in particular. Liskov and Jeannette Wing formulated the principle succinctly in a 1994 paper as follows:

Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.

In the same paper, Liskov and Wing detailed their notion of behavioral subtyping in an extension of Hoare logic, which bears a certain resemblance with Bertrand Meyer's Design by Contract in that it considers the interaction of subtyping with pre- and postconditions."
http://en.wikipedia.org/wiki/Liskov_substitution_principle
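A minimal Python sketch of the idea (my own illustration, not Liskov and Wing's formalism): client code is written against a supertype, a property provable for that supertype is checked, and the check still passes when a behavioral subtype is substituted.

# Minimal illustration (not from the article) of behavioral substitutability:
# code written against Queue keeps working, and keeps its provable property
# q(x) -- "dequeue returns items in FIFO order" -- when an instance of the
# subtype LoggingQueue is substituted for it.
from collections import deque

class Queue:
    def __init__(self):
        self._items = deque()
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()

class LoggingQueue(Queue):
    """Changes nothing a client relies on; only adds observable logging."""
    def enqueue(self, item):
        print("enqueue:", item)
        super().enqueue(item)

def drain_in_order(q: Queue, items):
    """Property q(x): for any Queue x, items come back in the order given."""
    for i in items:
        q.enqueue(i)
    return [q.dequeue() for _ in items]

# The property holds for the supertype and for the substituted subtype alike.
assert drain_in_order(Queue(), [1, 2, 3]) == [1, 2, 3]
assert drain_in_order(LoggingQueue(), [1, 2, 3]) == [1, 2, 3]

A subclass that, say, returned items in reverse order would still be a syntactic subtype but would break the property, which is the distinction between merely syntactic and behavioral subtyping the article draws.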

‎"In computer science, explicit substitution is any of several calculi based on the lambda calculus that pay special attention to the formalization of the process of substitution. The concept of explicit substitutions has become notorious (despite a large number of published calculi of explicit substitutions in the literature with quite different characteristics) because the notion often turns up (implicitly and explicitly) in formal descriptions and implementation of all the mathematical forms of substitution involving variables such as in abstract machines, predicate logic, and symbolic computation."
http://en.wikipedia.org/wiki/Explicit_substitution
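A toy Python sketch in the spirit of explicit-substitution calculi (my own illustration, not any particular published calculus such as λσ): the substitution is a node in the term syntax itself, and a separate step function pushes it through the term one constructor at a time instead of performing the whole meta-level substitution at once.

# Toy sketch in the spirit of explicit-substitution calculi (not any particular
# published calculus): the substitution t[x := r] is itself a node in the term
# syntax, and push() performs one step of propagating it.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

@dataclass
class Sub:            # explicit closure: term[var := repl]
    term: object
    var: str
    repl: object

def push(t):
    """Push an explicit substitution one level into the term it closes over."""
    if isinstance(t, Sub):
        inner, x, r = t.term, t.var, t.repl
        if isinstance(inner, Var):
            return r if inner.name == x else inner
        if isinstance(inner, App):
            return App(Sub(inner.fun, x, r), Sub(inner.arg, x, r))
        if isinstance(inner, Lam) and inner.param != x:
            # assumes inner.param is not free in r (no alpha-renaming in this toy)
            return Lam(inner.param, Sub(inner.body, x, r))
        if isinstance(inner, Lam):
            return inner          # the binder shadows x: substitution stops here
    return t

# Beta step for (\x. x y) z recorded as an explicit substitution, then pushed:
redex_body = App(Var('x'), Var('y'))
step1 = Sub(redex_body, 'x', Var('z'))         # (x y)[x := z]
step2 = push(step1)                            # (x[x := z]) (y[x := z])
print(push(step2.fun), push(step2.arg))        # Var(name='z') Var(name='y')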

"Hawking writes that under model-dependent realism, there is no single model that can explain the universe. There are, instead, a series of models that overlap. That’s fine, he says, provided that where the models overlap, they make the same predictions. If they don’t, one or more of the models is flawed. Hence Newtonian and General Relativistic models can account for big objects in the world, and quantum mechanical models can account for little things like subatomic particles.

The tricky part, it seems, is where these models overlap. And so far there is no model that accounts for the overlap of quantum mechanics and general relativity, for example. This would seem to be a bit of a sticking point if, as Hawking suggests, we are on the cusp of obtaining the scientific holy grail: the theory of everything."
http://www.galilean-library.org/site/index.php/page/index.html/_/reviews/the-grand-design-by-stephen-hawking-r124

"If model-dependent realism were taken to its nth degree, we could never actually say that the Copernican model is better than, or superior to, or more closely matches reality than the Ptolemaic model. Hawking and Mlodinow would surely agree, because they argue that a model is good if it meets four criteria:

-Is elegant
-Contains few arbitrary or adjustable elements
-Agrees with and explains all existing observations
-Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out

As a historian of science, I conclude that, in fact, nearly all scientific models — indeed, belief models of all sorts — can be parsed in such a manner and, in time, found to be better or worse than other models. In the long run, we discard some models and keep others based on their validity, reliability, predictability, and perceived match to reality. Yes, even though there is no Archimedean point outside of our brains, I believe there is a real reality, and that we can come close to knowing it through the lens of science — despite the indelible imperfection of our brains, our models, and our theories.

Such is the nature of science, which is what sets it apart from all other knowledge traditions."
http://www.bigquestionsonline.com/columns/michael-shermer/stephen-hawking%E2%80%99s-radical-philosophy-of-science

‎"What do I care about the physical world? Only the symmetry groups and path integrals and branes underlying its dynamics concern me, particularly insofar as they relate homomorphisms to the algebras underlying reflexive cognition."
http://www.goertzel.org/EdgeOfTheBleedingAbyss.pdf

"(Quick note for those who don't know: the Singularity Institute for AI is not affiliated with Singularity University, though there are some overlaps ... Ray Kurzweil is an Advisor to the former and the founder of the latter; and I am an Advisor to both.)"
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

Probably inspired by research along these lines:

Feynman Processes and Information Dynamics:

Part I: The Digital World Theory
1. Methodology and Basic Principles
The DWT designates a common theoretical framework for modeling complex non-linear systems and their interactions or communications [M, SM], from elementary particles and their fundamental interactions to the modeling of living organisms and their environment.

It is “A New Kind of Science” [Wolfram]:

- New methodology. A top-down design of the theory is used, starting from the analysis of the present state-of-the-art level in science, and with an ambitious goal as a target, since we cannot understand the parts in isolation, due to the interdependency of the fundamental questions.

- A new fundamental principle unifying energy and information is introduced, as an “upgrade” of Einstein’s equivalence principle between energy and matter, preparing us for breaching the ultimate frontier: the Mind-Matter Interface;

- A new physics content is provided: interactions, quantum or classical, are communications. The distinction between “system” (matter/apparatus) and “observer” (living organism / conscience) is a matter of nuances (info processing capability), not of principle.

- A new mathematics implementation of the physics interface: a discrete algebraic structure is introduced, the Quantum Dot Resolution with Internal/External Duality (QDR), prone for computer simulations, abandoning the classical continuum “ballast” (conceptual and computational).

It is a professional theoretical foundation, developed based on its specialized precursors: cellular automata, representations of categories, networks, etc.
http://virequest.com/ISUP/FPID-ProjDesc.pdf

Oxford Quantum Talks Archive:
http://www.comlab.ox.ac.uk/quantum/content/speakers.html

‎"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny... a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the checkerboard with all its apparent complexities."

-Richard Feynman, The Character of Physical Law
http://www.keckfutures.org/conferences/complex-systems.html