Here is the final installment of our coverage of Monica Anderson's Artificial Intuition (AN).
The purpose of artificial intelligence is still in debate. Most view the Turing Test as the ultimate definition of AI: if the AI can fool us into thinking it has consciousness and intelligence, then it has achieved its purpose. Not all agree with the Turing Test as the ultimate definition, and they have a point. The ultimate point of AI systems is not to fool us but, like us, to be able to predict future events and situations, as we do all the time. This point has been well presented and defended by Jeff Hawkins, founder of Palm Computing. We include a short presentation he made in 2003 at TED. If you cannot see the embedded video, here is the link: http://bit.ly/4AwRpg.
Logic & Intuition: Power and Limits
Monica Anderson goes on to explain the differences between logic and intuition. She does not, of course, disown logic, but explains that it has its limitations in AI. The benefits of logic are many: it can be used to make long-term predictions, even high-precision predictions. Logical methods are productive, and for centuries they have produced new knowledge from the "...mechanical manipulation..." of formulas. Regarding the limitations of logic, she states,
One of the most surprising limits is that they require Theory, i.e. a high level model of the problem domain. This is surprising only because many cannot fathom that this is a limitation, since they believe doing anything without a solid Theory is impossible. But, as we shall see, Intuition requires no Theory or Logic based models. So this is in fact a limitation on Logic.

Logic requires an idealized abstract world, which is far from the real world we inhabit.
Anyone who has opened a textbook on Physics or Mechanics has time and again encountered the phrase "All else being constant...". This is how Physics avoids problems with Systems that require a Holistic Stance, for instance any system that is constantly adapting to an environment that it cannot be separated from.

Another limitation of logic is that it is simple in a complex world.
It is true that some formulas can run to multiple pages, but this would still be simple compared to the complexity we discover in nature.

But where logic utterly fails is when it covers "...Bizarre Systems...", which in Anderson's opinion makes it unsuitable to handle important problems in the life sciences, or everyday problems in the discovery of semantics, especially in language.
Albert Michotte
...that causes are a strange kind of knowledge. This was first pointed out by David Hume, the 18th-century Scottish philosopher. Hume realized that, although people talk about causes as if they are real facts—tangible things that can be discovered—they’re actually not at all factual. Instead, Hume said, every cause is just a slippery story, a catchy conjecture, a “lively conception produced by habit.” When an apple falls from a tree, the cause is obvious: gravity. Hume’s skeptical insight was that we don’t see gravity—we see only an object tugged toward the earth. We look at X and then at Y, and invent a story about what happened in between. We can measure facts, but a cause is not a fact—it’s a fiction that helps us make sense of facts.

The experiments of Albert Michotte in the early 20th century demonstrated the way in which the mind can easily create causation. Lehrer discusses this in more detail in his Wired article previously cited.
"I wish more than anything. But I can't imagine you with all your complexity, all you perfection, all your imperfection. Look at you. You are just a shade of my real wife. You're the best I can do; but I'm sorry, you are just not good enough." Mr. Cobb, Inception
So where does this leave us? Do we then turn to intuition? Anderson does not think intuition is the sole answer either. Although "wisdom" is gathered through an accumulation of intuitive experiences in life, it cannot perform long-term predictions. But Anderson speaks about the advantages of intuition. It is fast. It functions well in life and death decisions and increases our chances for survival. Intuition requires no theory or high-level logic model, so it avoids the paradox of the bootstrapping problem in AI. Paul F.M.J. Verschure, in an essay titled The Cognitive Development Of An Autonomous Behaving Artifact: The Self-Organization of Categorization, Sequencing, and Chunking, further elaborates on this bootstrapping problem:

The major issue is that any alternative approach towards understanding mind, brain and behavior has to explain the genesis of the rules and representations, a priori provided by the designer in a computational approach, without alluding to a designer. This can be seen as a bootstrapping problem: how can a behaving system acquire "higher" levels of representation through learning?

Another advantage of intuition based systems is that they are biologically plausible. Her reasoning is that "...a considerably larger amount of mechanism would be required before Logic could be used to improve predictions." This last point is in response, of course, to the poor ability of present computer models to predict biological or geological events further out in time, beyond the more immediate future.
But according to Anderson there are limitations in intuitive systems. Intuitive systems learn; they require prior experience. In a novel situation, such a system must compare things to the closest previous experience, which to Anderson makes it error-prone. Like logic systems, long-term predictions are problematic. Intuitive systems cannot generate new knowledge by "...a mechanical manipulation of the existing theory since there is no such thing as 'theory.'"
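To make the contrast concrete, here is a minimal sketch of our own, in Python, of what a theory-free, intuition-style predictor might look like: it simply stores past experiences and answers a new situation with the outcome of the closest remembered one. This is purely an illustration under our assumptions, not Anderson's or Syntience's actual method; the class name, feature vectors, and distance measure are all hypothetical.

# Illustrative sketch only: a "theory-free" predictor that recalls the
# closest past experience, in the spirit of the intuition-style systems
# described above. All names here are our own hypothetical choices.
from dataclasses import dataclass, field
from math import dist


@dataclass
class IntuitivePredictor:
    # Each experience is a (situation, outcome) pair; situations are
    # simple numeric feature vectors for brevity.
    experiences: list = field(default_factory=list)

    def learn(self, situation, outcome):
        """Store an experience; no model or theory is built."""
        self.experiences.append((tuple(situation), outcome))

    def predict(self, situation):
        """Answer with the outcome of the nearest remembered situation.

        With no prior experience there is nothing to fall back on, which
        mirrors the limitation noted above: such a system is error-prone
        in truly novel situations.
        """
        if not self.experiences:
            return None
        nearest, outcome = min(
            self.experiences,
            key=lambda pair: dist(pair[0], tuple(situation)),
        )
        return outcome


if __name__ == "__main__":
    p = IntuitivePredictor()
    p.learn([0.9, 0.1], "safe")      # past situations and what followed
    p.learn([0.2, 0.8], "danger")
    print(p.predict([0.85, 0.2]))    # -> "safe": the closest prior experience

Note how nothing resembling a theory is ever produced: the predictor is fast and needs no model of its domain, but it cannot extrapolate far beyond what it has already seen.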
AN (Artificial Intuition): Anderson's Solution To the Artificial Intelligence Problem
If you cannot see the embedded video above, here is the link: http://vimeo.com/7818449. Anderson sets forth some graphics to demonstrate key reasons why science needs this new approach to artificial intelligence.
via: Syntience
This chart shows how well current scientific models are able to handle complex systems. The area in red shows where they are least competent to predict future events.
via: Syntience
This second chart shows the areas in which present science can predict the outcomes, by both the type of method it might use and the range of the prediction (long term, short term).
The next chart shows how, in Anderson's opinion, Brittle AI and GOFAI (Good Old-Fashioned Artificial Intelligence) fare. Brittle AI refers to the brittleness of the software itself. Wikipedia explains:
When software is new, it is very malleable; it can be formed to be whatever is wanted by the implementers. But as the software in a given project grows larger and larger, and develops a larger base of users with long experience with the software, it becomes less and less malleable. Like a metal that has been work-hardened, the software becomes a legacy system, brittle and unable to be easily maintained without fracturing the entire system.

As for Good Old-Fashioned Artificial Intelligence, we call again upon Wikipedia for a good explanation:
In artificial intelligence research, GOFAI ("Good Old-Fashioned Artificial Intelligence") describes the oldest original approach to achieving artificial intelligence, based on logic and problem solving. In robotics research, the term is extended as GOFAIR ("Good Old-Fashioned Artificial Intelligence and Robotics"). GOFAI was the dominant paradigm of AI research from the middle fifties until the late 1980s. After that time, newer sub-symbolic approaches to AI were introduced. The term "GOFAI" was coined by John Haugeland in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research.
via: Syntience
Anderson's approach, AN (artificial intuition), appears to handle these problem areas better than the other two approaches to AI:
via: Syntience
While Monica Anderson is working on this novel and exciting approach, one wonders whether, once things are moved to algorithms, some sort of computer model is involved. The question, of course, is that all computer models are at their core mathematical, and therefore an abstraction of reality, a simplification of reality. It seems to us that at these very points the approach suffers from the prediction problems of all complex systems. Nevertheless, we encourage this new approach by Anderson and wish her great luck in her hard work.
The issue of complex systems has to be solved if Science is to advance to the real questions that are still unsolved. When we think of the difference between the real thing and the abstraction of it that we as humans create, we are reminded of the great line from the movie Inception, which the character Mr. Cobb utters when speaking to the model of his wife that he constructed in his own dream, realizing the failure of that model to represent the real person:
I wish more than anything. But I can't imagine you with all your complexity, all your perfection, all your imperfection. Look at you. You are just a shade of my real wife. You're the best I can do; but I'm sorry, you are just not good enough.