Thursday, February 24, 2011

Construction Grammar, Optimality Theory, Harmonic Interaction, Constraint Satisfaction, Situation Semantics, Neutrosophy, Telesophy, Koinosophy

"• Grammar is the system of a language... It’s important to think of grammar as
something that can help you, like a friend.

• Grammar tells the users of a language what choices are possible for the word order of
any sentence to allow that sentence to be clear and sensible - that is, to be
unambiguous.

• ..."prescriptive grammar," a set of "rules" governing the choice of who and whom, the
use of ain’t, and other such matters. Promoted by Jonathan Swift and other literary
figures of the 18th century, this approach to language prescribes the "correct" way to
use language.

• ..."descriptive grammar," which is the study of the ways humans use
systems–particularly syntax and morphology–to communicate
through language.

• Therefore grammar acts as his tool to create meaning."
http://inst.eecs.berkeley.edu/~cs182/sp07/notes/lecture23.gr1.pdf

"At the heart of what shapes Construction Grammar is the following question: what do speakers of a given language have to know and what can they ‘figure out’ on the basis of that knowledge, in order for them to use their language successfully? The appeal of Construction Grammar as a holistic and usage-based framework lies in its commitment to treat all types of expressions as equally central to capturing grammatical patterning (i.e. without assuming that certain forms are more ‘basic’ than others) and in viewing all dimensions of language (syntax, semantics, pragmatics, discourse, morphology, phonology, prosody) as equal contributors to shaping linguistic expressions.

Construction Grammar has now developed into a mature framework, with an established architecture and representation formalism as well as solid cognitive and functional grounding. It is a constraint-based, generative, non-derivational, mono-stratal grammatical model, committed to incorporating the cognitive and interactional foundations of language. It is also inherently tied to a particular model of the ‘semantics of understanding’, known as Frame Semantics, which offers a way of structuring and representing meaning while taking into account the relationship between lexical meaning and grammatical patterning.

The trademark characteristic of Construction Grammar as originally developed consists in the insight that language is a repertoire of more or less complex patterns – CONSTRUCTIONS – that integrate form and meaning in conventionalized and in some aspects non-compositional ways. Form in constructions may refer to any combination of syntactic, morphological, or prosodic patterns and meaning is understood in a broad sense that includes lexical semantics, pragmatics, and discourse structure. A grammar in this view consists of intricate networks of overlapping and complementary patterns that serve as ‘blueprints’ for encoding and decoding linguistic expressions of all types."
http://www.icsi.berkeley.edu/~kay/

"Historically, the notion of construction grammar developed out of the ideas of "global rules" and "transderivational rules" in generative semantics, together with the generative semantic idea of a grammar as a constraint satisfaction system. With the publication of George Lakoff's "Syntactic Amalgams" paper in 1974 (Chicago Linguistics Society, 1974), the idea of transformational derivation became untenable.

CxG was spurred on by the development of Cognitive Semantics, beginning in 1975 and extending through the 1980s. Lakoff's 1977 paper, Linguistic Gestalts (Chicago Linguistic Society, 1977) was an early version of CxG, arguing that the meaning of the whole was not a compositional function of the meaning of the parts put together locally. Instead, he suggested, constructions themselves must have meanings.

CxG was developed in the 1980s by linguists such as Charles Fillmore, Paul Kay, and George Lakoff. CxG was developed in order to handle cases that intrinsically went beyond the capacity of generative grammar.

The earliest study was "There-Constructions," which appeared as Case Study 3 in George Lakoff's Women, Fire, and Dangerous Things.[1] It argued that the meaning of the whole was not a function of the meanings of the parts, that odd grammatical properties of Deictic There-constructions followed from the pragmatic meaning of the construction, and that variations on the central construction could be seen as simple extensions using form-meaning pairs of the central construction.

Fillmore et al.'s (1988) paper on the English let alone construction was a second classic. These two papers propelled cognitive linguists into the study of CxG."
http://en.wikipedia.org/wiki/Construction_grammar

"The notion construction has become indispensable in present-day linguistics and in language studies in general. This volume extends the traditional domain of Construction Grammar (CxG) in several directions, all with a cognitive basis. Addressing a number of issues (such as coercion, discourse patterning, language change), the contributions show how CxG must be part and parcel of cognitively oriented studies of language, including language universals. The volume also gives informative accounts of how the notion construction is developed in approaches that are conceptually close to, and relatively compatible with, CxG: Conceptual Semantics, Word Grammar, Cognitive Grammar, Embodied Construction Grammar, and Radical Construction Grammar."
http://tinyurl.com/45g4qa3

OPTIMALITY THEORY
Constraint Interaction in Generative Grammar

"Everything is possible but not
everything is permitted." -Richard Howard, "The Victor Vanquished"

"It is demonstrated," he said, "that things cannot be otherwise: for, since everything was made for a purpose, everything is necessarily made for the best purpose." -Candide ou l'optimisme. Ch. I.

The basic idea we will explore is that Universal Grammar consists largely of a set of constraints on representational well-formedness, out of which individual grammars are constructed. The representational system we employ, using ideas introduced into generative phonology in the 1970's and 1980's, will be rich enough to support two fundamental classes of constraints: those that assess output configurations per se and those responsible for maintaining the faithful preservation of underlying structures in the output. Departing from the usual view, we do not assume that the constraints in a grammar are mutually consistent, each true of the observable surface or of some level of representation. On the contrary: we assert that the constraints operating in a particular language are highly conflicting and make sharply contrary claims about the well-formedness of most representations. The grammar consists of the constraints together with a general means of resolving their conflicts. We argue further that this conception is an essential prerequisite for a substantive theory of UG.

It follows that many of the conditions which define a particular grammar are, of necessity, frequently violated in the actual forms of the language. The licit analyses are those which satisfy the conflicting constraint set as well as possible; they constitute the optimal analyses of underlying forms. This, then, is a theory of optimality with respect to a grammatical system rather than of well-formedness with respect to isolated individual constraints.

The heart of the proposal is a means for precisely determining which analysis of an input best satisfies (or least violates) a set of conflicting conditions. For most inputs, it will be the case that every possible analysis violates many constraints. The grammar rates all these analyses according to how well they satisfy the whole constraint set and produces the analysis at the top of this list as the output. This is the optimal analysis of the given input, and the one assigned to that input by the grammar. The grammatically well-formed structures are those that are optimal in this sense.

How does a grammar determine which analysis of a given input best satisfies a set of inconsistent well-formedness conditions? Optimality Theory relies on a conceptually simple but surprisingly rich notion of constraint interaction whereby the satisfaction of one constraint can be designated to take absolute priority over the satisfaction of another. The means that a grammar uses to resolve conflicts is to rank constraints in a strict dominance hierarchy. Each constraint has absolute priority over all the constraints lower in the hierarchy.

Such prioritizing is in fact found with surprising frequency in the literature, typically as a subsidiary remark in the presentation of complex constraints. We will show that once the notion of constraint-precedence is brought in from the periphery and foregrounded, it reveals itself to be of remarkably wide generality, the formal engine driving many grammatical interactions. It will follow that much that has been attributed to narrowly specific constructional rules or to highly particularized conditions is actually the responsibility of very general well-formedness constraints. In addition, a diversity of effects, previously understood in terms of the triggering or blocking of rules by constraints (or merely by special conditions), will be seen to emerge from constraint interaction.

Although we do not draw on the formal tools of connectionism in constructing Optimality Theory, we will establish a high-level conceptual rapport between the mode of functioning of grammars and that of certain kinds of connectionist networks: what Smolensky (1983, 1986) has called "Harmony maximization", the passage to an output state with the maximal attainable consistency between constraints bearing on a given input, where the level of consistency is determined exactly by a measure derived from statistical physics. The degree to which a possible analysis of an input satisfies a set of conflicting well-formedness constraints will be referred to as the Harmony of that analysis. We thereby respect the absoluteness of the term "well-formed", avoiding terminological confusion and at the same time emphasizing the abstract relation between Optimality Theory and Harmony-theoretic network analysis. In these terms, a grammar is precisely a means of determining which of a pair of structural descriptions is more harmonic. Via pair-wise comparison of alternative analyses, the grammar imposes a harmonic order on the entire set of possible analyses of a given underlying form. The actual output is the most harmonic analysis of all, the optimal one. A structural description is well-formed if and only if the grammar determines it to be the optimal analysis of the corresponding underlying form.

With an improved understanding of constraint interaction, a far more ambitious goal becomes accessible: to build individual phonologies directly from universal principles of well-formedness. (This is clearly impossible if we imagine that constraints must be surface- or at least level-true.) The goal is to attain a significant increase in the predictiveness and explanatory force of grammatical theory. The conception we pursue can be stated, in its purest form, as follows: Universal Grammar provides a set of highly general constraints. These often conflicting constraints are all operative in individual languages. Languages differ primarily in how they resolve the conflicts: in the way they rank these universal constraints in strict domination hierarchies that determine the circumstances under which constraints are violated. A language-particular grammar is a means of resolving the conflicts among universal constraints.

On this view, Universal Grammar provides not only the formal mechanisms for constructing particular grammars, it also provides the very substance that grammars are built from. Although we shall be entirely concerned in this work with phonology and morphology, we note the implications for syntax and semantics."
http://roa.rutgers.edu/files/537-0802/537-0802-PRINCE-0-0.PDF
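
To make the evaluation procedure described above concrete: candidates are compared constraint by constraint, in ranking order, and a single violation of a higher-ranked constraint outweighs any number of violations of lower-ranked ones. The sketch below is a minimal illustration in Python, not anything from the quoted text; the syllabification candidates, the two toy constraints, and their violation counts are invented for the example.

def violations(candidate, constraints):
    """Return the candidate's violation profile, in ranking order."""
    return tuple(constraint(candidate) for constraint in constraints)

def optimal(candidates, constraints):
    """Strict domination: pick the candidate whose profile is lexicographically smallest."""
    return min(candidates, key=lambda c: violations(c, constraints))

def onset(cand):
    # toy ONSET: penalise syllables that do not begin with a consonant (here just p, t, k)
    return sum(1 for syl in cand.split(".") if syl[0] not in "ptk")

def no_coda(cand):
    # toy NOCODA: penalise syllables that end in a consonant (here just p, t, k)
    return sum(1 for syl in cand.split(".") if syl[-1] in "ptk")

ranking = [onset, no_coda]            # onset strictly dominates no_coda
candidates = ["pa.ta", "ap.ta", "pat.a"]
print(optimal(candidates, ranking))   # -> pa.ta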

The Optimality Theory ~ Harmonic Grammar Connection:
http://uit.no/getfile.php?PageId=874&FileId=187
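
The link above concerns the relation between strict ranking and its connectionist-derived relative, Harmonic Grammar, in which constraints carry numerical weights and a candidate's harmony is a weighted sum of its violations. The sketch below (with invented weights and violation profiles) shows the one behavioural difference worth remembering: under weighted harmony, many violations of a weak constraint can gang up on a single violation of a strong one, which strict domination never allows.

def harmony(violation_profile, weights):
    # harmony = negated weighted sum of violations; higher is better
    return -sum(w * v for w, v in zip(weights, violation_profile))

weights = [3.0, 1.0]                  # numerical weights instead of ranks
profiles = {
    "cand_a": (1, 0),                 # one violation of the heavily weighted constraint
    "cand_b": (0, 4),                 # four violations of the lightly weighted constraint
}
# Under strict OT ranking cand_b would win (it satisfies the top constraint);
# under weighted harmony the weak violations gang up and cand_a wins.
best = max(profiles, key=lambda c: harmony(profiles[c], weights))
print(best)                           # -> cand_a  (harmony -3.0 vs. -4.0)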

Can Connectionism Contribute to Syntax? Harmonic Grammar, with an Application
http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-485-90.pdf

Integrating Connectionist and Symbolic Computation for the Theory of Language:
http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-628-92.pdf

Searle, Subsymbolic Functionalism and Synthetic Intelligence:
http://view.samurajdata.se/psview.php?id=8fa2ad05&page=1

Subsymbolic Computation and the Chinese Room:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.6034&rep=rep1&type=pdf

Subsymbolic Computation:
http://consc.net/mindpapers/6.3e

Experiments with Subsymbolic Action Planning with Mobile Robots:
http://cswww.essex.ac.uk/staff/udfn/ftp/aisb03pi.pdf

"Symbolic AI involves programs that represent knowledge, relationships etc. explicitly as a set of symbols. In a computer program, this means a set of variables and functions. These variables have names and their purpose is clear. Each variable represents a particular pieces of knowledge, and each function represents a particular inference rule.

For instance, here is a rule extracted from an expert system:

IF blood_sugar_level > 20 AND gram_negative = TRUE
   THEN disease = "STREPTOCOCCUS"
A program that used this rule would have to have three variables, one called blood_sugar_level, one called gram_negative and one called disease. Even someone with no medical knowledge can see what these variables represent, and they can understand the rule fairly easily.

Sub-symbolic AI involves modelling intelligence not with variables that represent particular pieces of knowledge. Instead, such programs simulate small areas of the brain - with variables representing brain cells. These cells are "connected" inside the program so that they can pass their signals from one brain cell to another. These connections can be strong or weak, so that a large part of the signal is passed on, or only a small part.
The knowledge is stored in the strength of these connections, so that some inputs have a large effect on the activity within the program and some have only a small effect. However, the individual connections do not represent individual pieces of knowledge. Instead, the knowledge is spread out over all the connections - each connection doesn't do much on its own, but together they produce an intelligent result.

A good analogy for sub-symbolic AI is that of an anthill. The individual ants are rather stupid. They have small brains and are controlled mainly by chemical signals. However, the colony of ants as a whole seems to have some "group intelligence" that arises out of the ants working together. The whole is more than the sum of the parts.

Which is better?

Each approach has its advantages and disadvantages.
Symbolic AI is easy to understand. You can immediately see how rules are translated into program statements. Sub-Symbolic AI is a lot less obvious. It is impossible to take part of the program and say "Ah, this part represents such-and-such a rule."

However, symbolic AI only works if the knowledge that you are trying to capture can be put into rules and named variables. Imagine how hard it would be to set up a series of rules that could distinguish between pictures of male faces and pictures of female faces. Sub-symbolic AI is programming without rules - the connections inside the program evolve under their own control until the program has learned the knowledge.

Sub-symbolic AI is often called Neural Networks, or Connectionism, and is dealt with in greater detail in another section of this tutorial.

Of course, there is no reason why an intelligent system should be exclusively symbolic or sub-symbolic. The parts of the program that work better using sub-symbolic AI can be implemented using neural networks, and the results that they produce can be processed by symbolic AI systems. In this way, we get the best of both worlds!"
http://richardbowles.tripod.com/ai_guide/intro.htm#symbol
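
The tutorial's contrast can be compressed into a few lines of Python. The first function restates the quoted expert-system rule as an explicit conditional; the second is a tiny one-layer weighted "network" in which no individual weight corresponds to a nameable rule. All numbers and names here are invented placeholders, and real connection weights would be learned from examples rather than written by hand.

import math

# Symbolic: the quoted expert-system rule as an explicit conditional.
def diagnose_symbolic(blood_sugar_level, gram_negative):
    if blood_sugar_level > 20 and gram_negative:
        return "STREPTOCOCCUS"
    return None

# Sub-symbolic: a tiny one-layer weighted "network".  The knowledge lives
# in the weights as a whole; no single weight means "blood sugar" or "rule".
def diagnose_subsymbolic(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))    # squashed to a score in (0, 1)

# Placeholder weights; in practice they would be learned from training data.
score = diagnose_subsymbolic(inputs=[25.0, 1.0], weights=[0.12, 1.8], bias=-4.0)
print(diagnose_symbolic(25.0, True), round(score, 2))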

Optimality Theory from a cognitive science perspective:
http://www.deepdyve.com/lp/de-gruyter/optimality-theory-from-a-cognitive-science-perspective-iXtzWfSYty

"Situation theory is an information theoretic mathematical ontology developed to support situation semantics, an alternative semantics to the better known possible world semantics originally introduced in the 1950s. Rather than a semantics based on total possible worlds, situation semantics is a relational semantics of partial worlds called situations. Situations support (or fail to support) items of information, variously called states of affairs or infons. The partial nature of situations gives situation theory and situation semantics a flexible framework in which to model information and the context-dependent meaning of sentences in natural language."
http://jacoblee.net/occamseraser/2010/10/03/introduction-to-situation-theory-part-1/
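
One informal way to picture the quoted machinery: an infon is a relation with its arguments and a polarity, and a situation is a partial collection of infons that it supports, so an infon a situation does not contain is simply undetermined rather than false. The Python sketch below is only that picture, with invented relations and objects, not the formal apparatus of situation theory.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Infon:
    relation: str
    args: Tuple[str, ...]
    polarity: int = 1                 # 1: the relation holds, 0: it does not

@dataclass(frozen=True)
class Situation:
    infons: frozenset                 # a *partial* collection of infons

    def supports(self, infon: Infon) -> bool:
        return infon in self.infons

lunch = Situation(frozenset({
    Infon("eating", ("alice", "soup")),
    Infon("raining", (), 0),          # in this situation it is not raining
}))

print(lunch.supports(Infon("eating", ("alice", "soup"))))   # True
print(lunch.supports(Infon("reading", ("bob", "book"))))    # False: undetermined here, not false in the world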

Information Retrieval and Situation Theory

“In the beginning there was information. The word came later.” -Fred I. Dretske

"The explosive growth of information has made it a matter of survival for companies to have at their disposal good information retrieval tools. Information storage is becoming cheaper and, because of this, more voluminous. Due to this fact a growing amount of (expensive) information disappears unused (or unread) simply because there is no effective way to retrieve this information. The problem, however, has been studied for years as what has become known as "the information retrieval problem", which can be described as follows:

“In which way can relevant information be distinguished from irrelevant information corresponding to a certain information need?”

Systems trying to solve this problem automatically are called information retrieval (IR) systems. These systems, which are developed from a defined model, try to furnish a solution to the problem. There are various models of IR systems; the most publicized ones are the Boolean, the Vector Space [1], and the Probabilistic models [2, 3]. More recently, van Rijsbergen [4] suggested a model of an IR system based on logic because the use of an adequate logic provides all the necessary concepts to embody the different functions of an IR system."
http://www.cs.uu.nl/research/techreps/repo/CS-1996/1996-04.pdf
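
As a concrete illustration of two of the models the report names, the toy Python sketch below scores a query against a few invented documents with a Boolean match (a document either satisfies all query terms or it does not) and with the vector space model's cosine similarity (documents are ranked by how closely their term vectors align with the query).

import math
from collections import Counter

docs = {                              # invented toy documents
    "d1": "information retrieval with situation theory",
    "d2": "situation semantics and partial worlds",
    "d3": "boolean and vector space retrieval models",
}

def boolean_match(query, text):
    # Boolean model: every query term must occur in the document
    return all(term in text.split() for term in query.split())

def cosine(query, text):
    # Vector space model: cosine similarity of term-frequency vectors
    q, t = Counter(query.split()), Counter(text.split())
    dot = sum(q[w] * t[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in t.values()))
    return dot / norm if norm else 0.0

query = "situation retrieval"
print({d: boolean_match(query, text) for d, text in docs.items()})        # only d1 matches exactly
print(sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True))   # d1 ranked first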

"Formal concept analysis refers to both an unsupervised machine learning technique and, more broadly, a method of data analysis. The approach takes as input a matrix specifying a set of objects and the properties thereof, called attributes, and finds both all the "natural" clusters of attributes and all the "natural" clusters of objects in the input data, where

a "natural" object cluster is the set of all objects that share a common subset of attributes, and

a "natural" property cluster is the set of all attributes shared by one of the natural object clusters.

Natural property clusters correspond one-for-one with natural object clusters, and a concept is a pair containing both a natural property cluster and its corresponding natural object cluster. The family of these concepts obeys the mathematical axioms defining a lattice, and is called a concept lattice (in French this is called a Treillis de Galois because the relation between the sets of concepts and attributes is a Galois connection)."
http://en.wikipedia.org/wiki/Formal_concept_analysis
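
The definition just quoted translates almost directly into code: a concept is a pair of an object cluster (extent) and an attribute cluster (intent), each the image of the other under the two derivation maps of the Galois connection. The Python sketch below enumerates the concepts of a small invented context by brute force; it is meant to show the definition at work, not to stand in for an efficient FCA algorithm.

from itertools import chain, combinations

context = {                           # invented object -> attribute table
    "duck": {"flies", "swims"},
    "swan": {"flies", "swims"},
    "dog":  {"runs"},
}

def common_attributes(objects):       # derivation map: extent -> intent
    if not objects:
        return set(chain.from_iterable(context.values()))
    return set.intersection(*(context[o] for o in objects))

def common_objects(attributes):       # derivation map: intent -> extent
    return {o for o, attrs in context.items() if attributes <= attrs}

def concepts():
    objs = list(context)
    subsets = chain.from_iterable(combinations(objs, r) for r in range(len(objs) + 1))
    found = set()
    for subset in subsets:
        intent = frozenset(common_attributes(set(subset)))
        extent = frozenset(common_objects(intent))
        found.add((extent, intent))   # closed pair: a formal concept
    return found

for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))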

Using Situation Lattices to Model and Reason about Context

"Much recent research has focused on using situations rather than individual pieces of context as a means to trigger adaptive system behaviour. While current research on situations emphasises their representation and composition, they do not provide an approach on how to organise and identify their occurrences efficiently. This paper describes how lattice theory can be utilised to organise situations, which reflects the internal structure of situations such as generalisation and dependence.
We claim that situation lattices will prove beneficial in identifying situations,
and maintaining the consistency and integrity of situations. They will also help in resolving the uncertainty issues inherent in context and situations by working with Bayesian Networks."
http://www.simondobson.org/softcopy/mrc-lattices-07.pdf
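
A minimal way to picture the lattice organisation the abstract describes: if each situation is specified by the set of context conditions it requires, then "generalises" is just the subset relation, and the meet of two situations is their shared conditions. This is a deliberate simplification of the paper's proposal, with invented situations, and it ignores the uncertainty handling mentioned at the end.

situations = {                        # invented situations as sets of required conditions
    "at_home":     frozenset({"location=home"}),
    "watching_tv": frozenset({"location=home", "tv=on"}),
    "movie_night": frozenset({"location=home", "tv=on", "lights=dim"}),
}

def generalises(a, b):
    # a is at least as general as b if it requires no more than b does
    return situations[a] <= situations[b]

def meet(a, b):
    # the conditions two situations share: their most specific common generalisation
    return situations[a] & situations[b]

print(generalises("at_home", "movie_night"))        # True
print(sorted(meet("watching_tv", "movie_night")))   # ['location=home', 'tv=on']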

A Categorial View at Generalized Concept Lattices: Generalized Chu Spaces
http://cla.inf.upol.cz/papers/cla2006/short/paper6.pdf

Using Hypersets to Represent Semistructured Data
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.4993

Integrating Information on the Semantic Web Using Partially Ordered Multi Hypersets

"The semantic web is supposed to be a global, machine-readable information re-pository, but there is no agreement on a single common information metamodel. To prevent the fragmentation of this nascent semantic web, this thesis introduces the expressive and flexible Braque metamodel. Based on non-well-founded par-tially ordered multisets (hyper pomsets), augment with a powerful reflection mechanism, the metamodel supports the automated, lossless, semantically trans-parent integration of relationship-based metamodels. Complete mappings to Braque from the Extensible Markup Language (XML), the Resource Description Framework (RDF), and the Topic Maps standard are provided. The mappings place information at the same semantic level, allowing uniform navigation and queries without regard for the original model boundaries."
http://www.ideanest.com/braque/Thesis-web.pdf
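
The key technical move in the abstract, non-well-founded (possibly cyclic) structures for semistructured data, can be pictured with an ordinary object graph: nodes may refer back to one another, which trees and well-founded sets cannot express. The Python sketch below is only that picture, with invented node labels; it is not the Braque metamodel or a pomset implementation.

class Node:
    """A labelled node whose links may form cycles, unlike a tree."""
    def __init__(self, label):
        self.label = label
        self.links = []

    def link(self, other):
        self.links.append(other)
        return self

article = Node("article")
author = Node("author")
article.link(author)
author.link(article)                  # cycle: the author points back to the article

def labels(node, seen=None):
    # traversal must remember visited nodes or the cycle would loop forever
    seen = seen if seen is not None else set()
    if id(node) in seen:
        return []
    seen.add(id(node))
    return [node.label] + [lab for child in node.links for lab in labels(child, seen)]

print(labels(article))                # ['article', 'author']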

A Logic of Complex Values

"Our world is run by a logic that has no room for values, by a scientific methodology that disdains the very notion. In this paper we try to redress the balance, extracting many modern scientific findings and forms of philosophical reasoning from the field of complex systems, to show that values can and should be made part of an enhanced normative logic derived from Neutrosophy. This can then be employed to quantitatively evaluate our beliefs based on their dynamic effects on a full set of human values."

Keywords and Phrases: Complex Systems, Axiology, Neutrosophic Logic, Intrinsic Values, Synergy, Dynamical Fitness, Attractors, Connectivity, Holarchy, Teleology, Agents
http://www.calresco.org/lucas/logic.htm

Intentionally and Unintentionally. On Both, A and Non-A, in Neutrosophy

"The paper presents a fresh new start on the neutrality of neutrosophy in that "both A and Non-A" as an alternative to describe Neuter-A in that we conceptualize things in both intentional and unintentional background. This unity of opposites constitutes both objective world and subjective world. The whole induction of such argument is based on the intensive study on Buddhism and Daoism including I-ching. In addition, a framework of contradiction oriented learning philosophy inspired from the Later Trigrams of King Wen in I-ching is meanwhile presented. It is shown that although A and Non-A are logically inconsistent, but they are philosophically consistent in the sense that Non-A can be the unintentionally instead of negation that leads to confusion. It is also shown that Buddhism and Daoism play an important role in neutrosophy, and should be extended in the way of neutrosophy to all sciences according to the original intention of neutrosophy.
...
Neutrosophy is a new branch of philosophy that studies the origin, nature, and scope
of neutralities, as well as their interactions with different ideational spectra.
It is the base of neutrosophic logic, a multiple value logic that generalizes the fuzzy logic and deals with paradoxes, contradictions, antitheses, antinomies."
http://arxiv.org/abs/math.GM/0201009
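
To give the generalisation mentioned at the end of the abstract a concrete shape: where fuzzy logic assigns a proposition a single truth degree, neutrosophic logic assigns independent degrees of truth, indeterminacy, and falsity. The sketch below uses one simple min/max formulation of conjunction and a truth/falsity swap for negation; these are only two of several operator definitions found in the literature, and the values are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Neutrosophic:
    t: float                          # degree of truth
    i: float                          # degree of indeterminacy
    f: float                          # degree of falsity

def conj(a, b):
    # one simple formulation: min the truths, max the indeterminacies and falsities
    return Neutrosophic(min(a.t, b.t), max(a.i, b.i), max(a.f, b.f))

def neg(a):
    # one simple formulation: swap truth and falsity, keep indeterminacy
    return Neutrosophic(a.f, a.i, a.t)

p = Neutrosophic(0.7, 0.2, 0.1)       # mostly true, a little indeterminate
q = Neutrosophic(0.4, 0.5, 0.3)       # largely indeterminate
print(conj(p, q))                     # Neutrosophic(t=0.4, i=0.5, f=0.3)
print(neg(p))                         # Neutrosophic(t=0.1, i=0.2, f=0.7)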

Neutrosophy

"This paper is a part of a National Science Foundation interdisciplinary project proposal and introduces a new viewpoint in philosophy, which helps to the generalization of classical 'probability theory', 'fuzzy set' and 'fuzzy logic' to
, and respectively.
One studies connections between mathematics and philosophy, and mathematics and other disciplines as well (psychology, sociology, economics)."
http://arxiv.org/ftp/math/papers/0010/0010099.pdf

PROCEEDINGS OF THE FIRST INTERNATIONAL CONFERENCE ON NEUTROSOPHY, NEUTROSOPHIC LOGIC, NEUTROSOPHIC SET, NEUTROSOPHIC PROBABILITY AND STATISTICS
http://fs.gallup.unm.edu//neutrosophicProceedings.pdf

Neutrosophy in Situation Analysis
http://www.fusion2004.foi.se/papers/IF04-0400.pdf

Comments to Neutrosophy
http://arxiv.org/ftp/math/papers/0111/0111237.pdf

TELESOPHY: A SYSTEM FOR MANIPULATING THE KNOWLEDGE OF A COMMUNITY

"A telesophy system provides a uniform set of commands to manipulate knowledge within a community. Telesophy literally means "wisdom at a distance". Such a system can effectively handle large amounts of information over a network by making transparent the data type and physical location. This is accomplished by supporting a uniform information space of units of information. Users can browse through this space, filtering and grouping together related information units, then share these with other members of the community."
http://www.canis.uiuc.edu/projects/telesophy/telesophy002.html

Towards Telesophy: Federating All the World's Knowledge

"The Net is the global network, which enables users worldwide to interact with information. As new technologies mature, the functions of the protocols deepen, moving closer to cyberspace visions of "being one with all the world's knowledge". The Evolution of the Net has already proceeded from data transmission in the Internet to information retrieval in the Web. The global protocols are evolving towards knowledge navigation in the Interspace, moving from syntax to semantics. In the future, infrastructure will support analysis, for interactive correlations across knowledge sources. This moves closer towards "telesophy"."
http://krustelkram.blogspot.com/2009/03/towards-telesophy-federating-all-world.html

"The final quadrant’s emphasis is on maturing society’s shared lesson set to yield a coherent sense of the whole. It seeks to integrate the disparate knowledge contained in its members’ individual lesson sets, to reflect self-critically on diverse experiences and search for unifying principles that give guidance over time and in a wide variety of circumstances. It is a society that grows in collective wisdom. I call this the “koinosophic society,” coined from the Greek “koinos,” or “common,’ and “Sophia,” for “wisdom.”"
http://www.learndev.org/dl/VS3-00g-LearnSocMultDim.PDF

A Holoinformational Model of Consciousness
http://www.quantumbiosystems.org/admin/files/QBS3%20207-220.pdf