Artificial intelligence meets good old-fashioned human thought
Science News ^ | Aug. 30, 2003 | Bruce Bower

Posted on 09/03/2003 12:02:02 PM PDT by js1138



To: Physicist
I have always wanted a math coprocessor for my brain.

Yeah, me too. It would be a good mix of capabilities.

21 posted on 09/03/2003 1:46:27 PM PDT by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 19 | View Replies]

To: js1138
bmp
22 posted on 09/03/2003 3:11:26 PM PDT by ImaGraftedBranch (Education starts in the home. Education stops in the public schools)
[ Post Reply | Private Reply | To 2 | View Replies]

To: js1138
a cognitive prosthesis magnifies strengths in human intellect

The greatest strength in human intellect is to juggle four concepts at once. That is, we can consider a question from the aspect of rationality, aesthetics, morality, and . . . uh . . . something else at once.

23 posted on 09/03/2003 3:17:32 PM PDT by RightWhale (Repeal the Law of the Excluded Middle)
[ Post Reply | Private Reply | To 1 | View Replies]

To: js1138; tortoise
Okay, I will start with some bold assertions - just to stir the pot - and we can refine from there:

The traditional AI view was that 'thinking' is like computing. It is *like* computing, but it is not the same thing at all! The Turing Test itself allowed/elided that key difference, probably because "intelligence = computing" was understood and believed, and nobody wanted to bother with the other part of human 'thinking', namely emotions; they were deemed of no importance. Searle's "Chinese Room" thought experiment exposed the error, i.e., it exposed the flaw in confusing processing with *being intelligent in an aware sense*, but there has been no resolution on that matter.

The problem was and is that traditional AI aimed at something useful, but claimed it to be something other than what it was. You can string together a billion super-fast processors and analyze the world's climate to the Nth degree, but that won't be any closer to being "intelligent" in a human sense than the microcontroller in a washing machine. Let me be clear on definitions - 'intelligent' requires not just the ability to process, but some concept of awareness. "'I am,' I said, therefore I 'think'."
If we define 'intelligence' to be processing, making decisions based on input, then "Machine Intelligence" is here and doing quite well. But such AI is *not* the bill of goods sold by the early AI innovators. Did they oversell themselves as having insight into "Cognitive Science"?

(This is where the 'rebel' AI guy is right; AI that serves as a tool is what practical AI should be about; the 'other' AI, the traditional AI that tries to create/define the Frankenstein of proto-human thought on silicon, is a different task entirely. The former is useful, the latter more profound and far more difficult than the boosters first claimed.)

The other thing is that folks like Noam Chomsky were wrong (I love saying that 'cause I hate his Marxist a**); his great influence on linguistics has become a dangerous cult, as bad as his nemesis behaviorism was in the 1950s. It is just the other extreme: the behaviorists insisted internal brain structures couldn't be there because they couldn't be scientifically studied, and he put them there anyway because language processing didn't make sense without an inner mentality behind it. He wrongly put linguistics and symbolic/syntax processing at the center of the equation. In fact, neurology and biology are telling us that the kind of processing this entails is not how neurons work; yet he persisted/persists in pushing this "universal grammar" as the model not of how it could work, but of how it *does* work in humans, without the evidence and contrary to much evidence since. (BTW, there was an FR thread on this in recent months.)

I read Howard Gardner's "The Mind's New Science" during the summer. It was written in 1983, and you could tell that despite the 'innovations' and excitement of this survey of the previous 30 years, something was not quite right. He tried to gloss over it, but many of his chapter summaries hemmed and hawed: "although we've now seen XX's work as incomplete, it shed great light on ...". He also gagged me with the central place he gave to Chomsky.

And yet the real shocker to me was how *little* Cognitive Science really rested on what it should rest on: the biology of the brain.

There is a good book by Allan Hobson, "The Chemistry of Conscious States" (hope I got the title right). It goes into the brain's cycles of operation. It makes PERFECTLY CLEAR that brains rely not just on neural signals but on the bath of chemicals in the brain to trigger states. In other words, the brain is wonderfully more complex than the simplistic (neuron = transistor) model. Imagine: every day you go from psychotic (dreaming) to sensing real-world events and back again, with the same "hardware". The comparable things in IC/silicon may be different voltage levels or the SOI 'history effect', but nothing at all like this. And while it could be 'simulated' on an AI computer, the fact is this: the human brain's slow, 30 Hz, massively parallel, widely diffuse, memory-equals-logic, low-power, template-based and pattern-based cognition is *nothing like* a super-fast, super-pipelined, 3 GHz Von Neumann machine with separate processor/memory, separate instruction and data streams, high power, and a clocked, sequential silicon process.

Simulation is something, but it is not the real thing.
To simulate the processing of thoughts is to process thoughts; but to simulate consciousness is not to be conscious.
And don't they say "The map is not the territory"?


The things that were most resisted were the exact things needed - a thorough grounding in the neurobiology of the brain as the only and surest REAL model of what human cognition, consciousness and intelligence is all about.
Slowly but surely, this real bottom-up science is happening, and not a moment too soon to clear the decks of phony ideas.

Freud by now has also been thoroughly debunked. His model of the mind is a fraud and lives on only in the corners of academia where facts matter less than ideological fixations (e.g., women's studies).

And here is the punchline: Intelligence is not the same as consciousness. Consciousness is a particular *type* of intelligence, a type that humans happen to have, and a type that relies heavily on innate concepts of existence and identity. What the neurologists are learning about consciousness is that it runs right through the parts of the brain processing emotions - emotions are the key to consciousness, not the high-level 'thinking' - because consciousness is about having identity, which is about states of mind. Emotions are, in my view (and this has been articulated by scientists to an extent for some time now), the precursor to language; emotions are both thoughts and communications (think about anger, rage, laughter, love, sorrow). You can communicate much without words. Now imagine thinking in non-verbal ways (you can - think 'left brain, right brain' stuff): just observe the environment. These modes of thought are available to you (viz. the book "Hare Brain, Tortoise Mind") if you meditate and access the 'theta wave' brain underneath the neo-cortex's beta-wave consciousness.

This 9/10ths of our brain is the rest of the iceberg that makes up human-type conscious (and unconscious) thinking. AI will have to deal with it if it really wants to take on the task of replicating human thought. The good news is that this problem is far more difficult, challenging, and *interesting* than getting a computer to fake out the Turing Test.
24 posted on 09/03/2003 4:20:04 PM PDT by WOSG (Lower Taxes means economic growth)
[ Post Reply | Private Reply | To 8 | View Replies]

To: js1138
This isn't the first I'd heard of this. The idea, at least in the abstract, has been around for quite some time. So my question is, why hasn't science fiction been picking up on ideas like this? Is there something about it that turns audiences off?
25 posted on 09/03/2003 5:15:56 PM PDT by inquest (We are NOT the world)
[ Post Reply | Private Reply | To 1 | View Replies]

To: WOSG
Bloody hell, man. That was a lot of post. Since you primed the pump, I'll throw out some comments on parts of it.

Searle's "Chinese Room" thought experiment exposed the error, i.e., it exposed the flaw in confusing processing with *being intelligent in an aware sense*, but there has been no resolution on that matter.

Actually, Searle's "Chinese Room" is premised on a flawed definitional assumption. However, when John Searle originally wrote it, the flaw would not have been obvious with the state of the relevant mathematics at that time. It is not generally considered a credible argument in "hard" theoretical AI research circles because of this foundational flaw (and if those guys can agree on anything, it is probably correct).

The problem was and is that traditional AI aimed at something useful, but claimed it to be something other than what it was.

That wasn't the problem. The problem is that for half a century there was no theoretical or even definitional foundation from which specific implementations could be derived. The entire history of AI research has been a blind man with a shotgun, hoping that he'll hit something. The problem, ultimately, is that you can't solve a problem if you don't know precisely what the problem is. Yet research went on for ages without a rigorous characterization of the problem, with people trying all manner of things hoping something would stick.

Let me be clear on definitions - 'intelligent' requires not just the ability to process, but some concept of awareness.

Sloppy definition. Intelligence is a measurable property of a machine (both "intrinsic" and "apparent"). Awareness (both intrinsic and apparent) is a function of the apparent intelligence of the machine, but requires substantial apparent intelligence before apparent awareness can become significant.

To define more clearly, the "intrinsic" intelligence of a finite machine is the theoretical mathematical limit of intelligence on a machine of given intrinsic Kolmogorov complexity. The "apparent" intelligence is what you get after inefficiencies in implementation are accounted for, and is usually considered in terms of the "effective" Kolmogorov complexity of the machine (i.e. assuming the machine were ideal, how big would it need to be to have the measured apparent intelligence).

Awareness is a function of intelligence and follows from there. The theoretical limitations are ultimately determined by the intrinsic Kolmogorov Complexity of the system.
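
(A rough way to get a feel for Kolmogorov complexity, since it keeps coming up: the true quantity - the length of the shortest program that reproduces a given string - is uncomputable, so the sketch below uses a compressor as a crude upper bound on "description length". Illustration only, not a real measurement of intelligence.)

import os
import zlib

# Kolmogorov complexity itself is uncomputable; compressed size is only a
# crude upper bound on the shortest description of the data.
def description_length(data: bytes) -> int:
    return len(zlib.compress(data, 9))

random_bytes = os.urandom(10_000)   # patternless: nearly incompressible
repetitive = b"ab" * 5_000          # same length, but highly regular

print(description_length(random_bytes))  # close to 10,000 bytes
print(description_length(repetitive))    # a few dozen bytes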

the human brain's slow, 30 Hz, massively parallel, widely diffuse, memory-equals-logic, low-power, template-based and pattern-based cognition is *nothing like* a super-fast, super-pipelined, 3 GHz Von Neumann machine with separate processor/memory, separate instruction and data streams, high power, and a clocked, sequential silicon process

Well, they ARE Turing equivalent, but they are pretty orthogonal models of computation. The brain is a reasonably good facsimile of a non-axiomatic differential machine model. Both silicon and wetware find certain different algorithm spaces to be intractable because of context constraints. You can run a NADM on silicon (not natively though), and it can do all of the fancy tricks that those types of models can do, and sucks at the same things that those classes of model suck at. The only REAL differences between the brain and classic silicon are time-space considerations for many classes of algorithm.
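
(To make the Turing-equivalence point concrete - this is only a generic toy, not the non-axiomatic differential machine model mentioned above - here is a massively parallel update rule, every unit integrating all its inputs at once, emulated on ordinary sequential hardware. The answer comes out the same; only the time/space cost differs.)

import numpy as np

# Toy only: a "parallel" update (all units integrate all inputs at once)
# computed on sequential hardware. Same result as a parallel machine would
# produce; the difference is purely in time/space cost.
rng = np.random.default_rng(0)
n = 1_000                                     # number of units
weights = rng.normal(scale=0.05, size=(n, n))
state = rng.random(n)

def step(s):
    # Conceptually n*n connections fire at once; here the machine walks
    # through them sequentially inside the matrix multiply.
    return np.tanh(weights @ s)

for _ in range(30):                           # thirty "cycles"
    state = step(state)

print(state[:5])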

The things that were most resisted were the exact things needed - a thorough grounding in the neurobiology of the brain as the only and surest REAL model of what human cognition, consciousness and intelligence is all about.

Except that a fair percentage of the turds in AI came from doing exactly this. And to be perfectly frank, the brain is only a mediocre model of its class of computer. The reason it does so well is that it has a bloody bucketload of apparent Kolmogorov complexity to work with (read: lots of RAM).

In reality, a lot of the useful stuff is coming from mathematical models of that class of machine and forward engineering the brain, rather than slicing and dicing it and trying to reverse engineer it. All we really ever needed was good foundational mathematics, and we more or less have it now (and have for a couple years). Empirical models developed by reverse engineering the brain really won't tell us all that much about the fundamental theory behind its operation.

Emotions are, in my view (and this has been articulated by scientists to an extent for some time now), the precursor to language; emotions are both thoughts and communications (think about anger, rage, laughter, love, sorrow).

That may be true, but the primary PURPOSE of emotions in biology is to bootstrap learning so that the brain becomes a useful organ by providing the initial bias in the system. One can think of them as a default goal system, though they never really go away even after the brain has bootstrapped from a blank slate. Note that emotions would be useless for an AI for most purposes. If you are manufacturing one in a lab, you don't need an internal bootstrap for biasing, and it degrades the apparent intelligence of the system unnecessarily.
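
(One hedged way to read "default goal system" in code - my gloss, not a claim about real brains: the only innate part is a reward function saying which states feel good or bad, and ordinary learning bootstraps everything else from it.)

import random

# Sketch of the "default goal system" reading above (illustrative only):
# the innate bias is just a reward function; behavior is then learned
# entirely from the feedback that bias generates.
INNATE_REWARD = {"warm": 1.0, "cold": -1.0}    # the built-in "emotional" bias

def learn(steps=2000, lr=0.1, eps=0.1, seed=0):
    random.seed(seed)
    q = {"move_left": 0.0, "move_right": 0.0}  # no innate knowledge of the world
    for _ in range(steps):
        action = (random.choice(list(q)) if random.random() < eps
                  else max(q, key=q.get))
        # Hidden world dynamics the agent must discover: right leads to warmth.
        state = "warm" if action == "move_right" else "cold"
        q[action] += lr * (INNATE_REWARD[state] - q[action])
    return q

print(learn())   # the learned values end up encoding the innate bias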

The point of AI is intelligence rather than emotion. One of the flaws that made the Turing Test kind of stupid is that it only measures human-like behavior, not actual intelligence. Worse, a lot of the things that humans identify as "human-like" are not intelligent behaviors. Humans are a pretty mediocre example of an intelligent system, and people seem to forget that just because we are the smartest things running around right now does NOT mean that we are even vaguely near the theoretical optimal for a machine with our intrinsic intelligence.

Just to throw out a somewhat related point, and to put it in terms of normal computers: intelligence is bound solely by the amount of RAM you have -- how fast the processor runs is irrelevant to intelligence -- and the bottleneck on current silicon is memory latency. Parallelism doesn't really matter, but most processors are incapable of doing truly random RAM access at reasonable speeds (hence the L2 cache), and "high-intelligence" algorithms hardly ever hit the cache once they get bigger than toy size.
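
(The latency point is easy to see on any desktop machine; a rough sketch is below, and the numbers will vary. Touching the same amount of data is far slower when the access order is random over a working set much bigger than the caches.)

import time
import numpy as np

# Illustration of memory latency as the bottleneck: identical work, but the
# random-order gather misses the caches almost every time.
n = 20_000_000                          # ~160 MB of float64, larger than any cache
data = np.random.rand(n)
in_order = np.arange(n)                 # prefetcher-friendly access pattern
scattered = np.random.permutation(n)    # truly random access pattern

t0 = time.perf_counter(); data[in_order].sum();  t1 = time.perf_counter()
t2 = time.perf_counter(); data[scattered].sum(); t3 = time.perf_counter()

print(f"sequential gather: {t1 - t0:.2f} s")
print(f"random gather:     {t3 - t2:.2f} s")   # latency-bound, not compute-bound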

26 posted on 09/03/2003 6:17:03 PM PDT by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 24 | View Replies]

To: WOSG
the brain is wonderfully more complex than the simplistic (neuron = transistor) model.

Well duh. And no one models the brain with transistors anyhow. It is all done at a higher level: programming. Programs can be unboundedly complex. The hardware of the computer is relatively simple. This doesn't constrain the software in the least.

27 posted on 09/03/2003 6:51:11 PM PDT by jlogajan
[ Post Reply | Private Reply | To 24 | View Replies]

To: tortoise
First, I am being loose on my def'ns.

"Yet research went on for ages without a rigorous characterization of the problem, with people trying all manner of things hoping something would stick."

Maybe so ... reading the Howard Gardner book, it became clear that for *some* the question was and is: "How does the mind work?"

A lot of the work in AI that was *really good* was simply showing that computers could be made to think and do things not done before. In this class, work on vision recognition is superb; so is theorem proving, knowledge bases, etc. Yet in my field, EDA, the class of problems solved in, for example, formal verification is at least as 'hard' as any AI theorem-proving, yet it is not called AI. Any class of problems categorizable via optimization is solvable through rigorous means. The area where AI can and should work is in building ways of knowledge-processing and knowledge-building; but that seems to be slow. (Ever hear of Doug Lenat? He has spent over 15 years building a knowledge base of 'common sense', entered by humans manually! Come on, let's try to have a software system that can suck up all the words spoken on FR - or heck, the whole internet - and turn it into an ontology pronto! Why is that not achievable???)
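
(A toy version of "suck up the words and build an ontology" - nothing like Cyc, and the pattern below is brittle on real prose, but it shows the flavor: mine is-a pairs from the classic "X such as A, B and C" construction.)

import re
from collections import defaultdict

# Toy ontology extraction (illustrative only): harvest is-a pairs from the
# Hearst pattern "X such as A, B and C". Real text needs a proper parser;
# this regex simply grabs the list up to the end of the clause.
PATTERN = re.compile(r"(\w+)\s+such as\s+([\w\s,]+?)(?:[.;]|$)", re.IGNORECASE)

def extract_isa(text):
    ontology = defaultdict(set)
    for category, members in PATTERN.findall(text):
        for member in re.split(r",| and ", members):
            if member.strip():
                ontology[category.lower()].add(member.strip().lower())
    return dict(ontology)

sample = ("Emotions such as anger, rage, laughter, love and sorrow. "
          "Languages such as English and Chinese.")
print(extract_isa(sample))
# roughly: {'emotions': {'anger', 'rage', 'laughter', 'love', 'sorrow'},
#           'languages': {'english', 'chinese'}}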

A lot of the interesting but IMHO flawed work is stuff like "Society of Mind", which postulated how human consciousness *might* happen but didn't really explain it, since it is a postulation not related to how human brains really work.

So the dividing line is this: Are we using human thinking to help us understand how to build *computers*, or are we using *computers* to help us understand *human thinking*?
My critique is that the latter exercise has not been very fruitful.

Searle's Chinese Room - the point that a conscious symbol processor could look externally 'knowledgeable' about Chinese while internally being only non-consciously knowledgeable - is IMHO a valid red flag. I've seen critiques, but they themselves were flawed enough to make me doubt them. Many mistakenly assume that Searle implies that the mind is non-physical; if you google your way to Searle's own webpage you'll find that is not the case at all. Flawed argument or not, the distinction between processing and consciousness is IMHO real even if you throw out any and all metaphysics and consider the mind/brain one.

If "The point of AI is intelligence rather than emotion" then my point is AI was looking under the lighted lamp-post; computing helps solve that, so they assume that is the problem to solve to crack the nut of human cognition.
The hurdle is not 'intelligence'. Not at all. Intelligence is easy. You've got enough compute power, you can simulate anything. The problem is IMHO consciousness. You cannot answer the question "how does the human mind work?" without explaining consciousness.

What I am positing is that emotion is the *reason* for human consciousness, even though, evolutionarily, emotion is more "primitive" than linguistic/verbal, neo-cortex-based 'human reasoning'. That *reason* has to do, evolutionarily, with the need for self-awareness to aid in the adjustment of sensory feedback. It is known in neurobiology circles that a lot of consciousness is inhibitory, i.e., it stops you from doing something that is already encoded rather than training you to *do* something; so the conscious loop is there to make sure the unconscious "knee-jerk" reaction is correct. How does consciousness support that? By having the concepts of focus, boundaries, and sensory selection - your consciousness tunes out clutter, focuses on some event, and maps it to understanding/patterned behavior. There are plenty of studies showing non-conscious reaction times *even to things that have to be consciously registered*.

Computing does *not* require conscious, self-aware, emotional/identity-based cognition. A computer can reason, but can it have a *sense of identity*? As of now, the latter is the stuff of science fiction. I may be wrong. Dennett's "Kinds of Minds" is on my table, as yet unread - maybe he's on to it.

PS: Help us out with a definition of Kolmogorov complexity, been a while since I wrestled with that term.



28 posted on 09/03/2003 8:07:08 PM PDT by WOSG (Lower Taxes means economic growth)
[ Post Reply | Private Reply | To 26 | View Replies]

To: tortoise
"Intelligence is a measurable property of a machine (both "intrinsic" and "apparent"). "

um, what does 'intelligence' measure though?

For clarity - define "intelligence".

"Awareness (both intrinsic and apparent) is a function of the apparent intelligence of the machine, but requires substantial apparent intelligence before apparent awareness can become significant."
Yes, awareness requires intelligence, but this is not a complete description of it, and it does not imply that great computing power automatically yields awareness.
A retarded gorilla has perhaps more self-awareness than a teraflops computing machine, but less 'intelligence', depending on the above definition. Hence the need for clarity on the def'n of intelligence.

Computing power is a far different thing from consciousness, is my point.
29 posted on 09/03/2003 8:17:28 PM PDT by WOSG (Lower Taxes means economic growth)
[ Post Reply | Private Reply | To 26 | View Replies]

To: tortoise
"Actually, Searle's "Chinese Room" is premised on a flawed definitional assumption. However, when John Searle original wrote it the flaw would not have been obvious with the state of the relevant mathematics at that time. It is not generally considered a credible argument in "hard" theoretical AI research circles because of this foundational flaw (and if those guys can agree on anything, it is probably correct)."

I'm game ... what is the flawed definitional assumption in Searle's argument?

30 posted on 09/03/2003 8:19:31 PM PDT by WOSG (Lower Taxes means economic growth)
[ Post Reply | Private Reply | To 26 | View Replies]

