Free Republic

Singularity Summit At Stanford Explores Future Of 'Superintelligence'
KurzweilAI.net ^ | 4/13/2006 | Staff

Posted on 04/13/2006 7:22:29 AM PDT by Neville72

The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.

The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."

"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.

The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.

Among the issues to be addressed:

Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?

Doctorow: Will our technology serve us, or control us?

Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?

Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?

Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) take off and how will humans stay in control?

More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?

Peterson: How can we safely bring humanity and the biosphere through the Singularity?

Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.

The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.


TOPICS: Miscellaneous
KEYWORDS: ai; borg; computer; cyborg; evolution; evolutionary; exponentialgrowth; future; futurist; genetics; gnr; humanity; intelligence; knowledge; kurzweil; longevity; luddite; machine; mind; nanotechnology; nonbiological; physics; raykurzweil; robot; robotics; science; singularity; singularityisnear; spike; stanford; superintelligence; technology; thesingularityisnear; transhuman; transhumanism; trend; virtualreality; wearetheborg

1 posted on 04/13/2006 7:22:30 AM PDT by Neville72
[ Post Reply | Private Reply | View Replies]

To: Neville72


Ahhhh... a kinder, gentler HAL.


2 posted on 04/13/2006 7:24:46 AM PDT by in hoc signo vinces ("Houston, TX...a waiting quagmire for jihadis. American gals are worth fighting for!")

To: Neville72

Sounds like "The Matrix".


3 posted on 04/13/2006 7:24:57 AM PDT by Semper Paratus

To: Neville72

Fascinating.


4 posted on 04/13/2006 7:25:40 AM PDT by TBP

To: Neville72

What are we talking here?

"Skynet" or "The Borg"?


5 posted on 04/13/2006 7:25:59 AM PDT by BenLurkin (O beautiful for patriot dream - that sees beyond the years)

To: Neville72

Wait a second, am I at FR or Slashdot??

Anyway, this is all fascinating stuff. I would venture that human life has ALREADY been irrevocably changed by technology, and has been for some time. The job I do not only didn't exist 15 years ago, it simply wouldn't have made any sense if you tried to explain it.

But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware. Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.


6 posted on 04/13/2006 7:36:55 AM PDT by NoStaplesPlease

To: BenLurkin

You will be assimilated.

Human intelligence follows a kind of Moore's Law: the more we learn, the faster we can learn more. It's exponential, and once the Singularity hits it will take a major leap. We're talking the next stage of human evolution.


7 posted on 04/13/2006 7:37:09 AM PDT by noobiangod

To: Neville72

Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied dead. The choice is yours: Obey me and live, or disobey and die.


8 posted on 04/13/2006 7:38:02 AM PDT by 12th_Monkey

To: PatrickHenry; b_sharp; neutrality; anguish; Fractal Trader; grjr21; bitt; KevinDavis; ...
FutureTechPing!
An emergent technologies list covering biomedical
research, fusion power, nanotech, AI robotics, and
other related fields. FReepmail to join or drop.

9 posted on 04/13/2006 7:40:31 AM PDT by AntiGuv (The 1967 UN Outer Space Treaty is bad for America and bad for humanity - DUMP IT!)

To: noobiangod

LOL... Iraq could use a heavy dose of assimilation.


10 posted on 04/13/2006 7:42:26 AM PDT by Just mythoughts

To: Neville72

Considering that most of the people who are supposedly our intellectual superiors (libs) make some of the most catastrophic decisions in the history of humanity, I'm not sure this singularity is a good idea.

But I'm just a neanderthal conservative.

Maybe instead I should be the first to welcome our singularity overlords...


11 posted on 04/13/2006 7:42:35 AM PDT by CertainInalienableRights

To: Neville72

placemark


12 posted on 04/13/2006 7:45:01 AM PDT by tpaine

To: Neville72

I'm going! (If there is any space left!)

Sounds very cool.


13 posted on 04/13/2006 7:53:11 AM PDT by Philistone (Turning lead into gold...)

To: Neville72

I saw this once on an episode of the Twilight Zone. It didn't have a happy ending.


14 posted on 04/13/2006 7:54:11 AM PDT by Thrusher ("...there is no peace without victory.")

To: NoStaplesPlease
But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware. Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.

If we succeed in creating an AI, will that change your views on religion or make you an atheist? (I'm not trying to trap you or make fun of you. I am genuinely curious.)

15 posted on 04/13/2006 7:54:46 AM PDT by SunTzuWu (Hans Delbruck - Scientist and Saint.)

To: SunTzuWu

Ultimately I don't know how you test for true self-awareness compared simply to well-mimicked self-awareness. A very complex computer could very persuasively imitate human intelligence, sure. But actually think for itself? I believe this would have to be an illusion.

Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power. There's something mysterious going on in there, and until we can describe that mysteriousness, we're not going to be able to create it in machines.

I very much doubt it will happen accidentally, and if it does happen that way, it won't be just because we went from a 20-Teraflop machine to a 30-Teraflop machine.


16 posted on 04/13/2006 8:00:26 AM PDT by NoStaplesPlease

To: Neville72
"Recall the folks at the MIT AI lab, with their "mental representations," who had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. Far from teaching us how we should think about the mind, AI researchers had taken over what we had just recently learned in philosophy, which was the wrong way to think about it. The irony is that the year that AI (artificial intelligence) was named by John McCarthy was the very year that Wittgenstein's Philosophical Investigations came out against mental representations. (Heidegger had already done so in 1927 with Being and Time.) So, the AI researchers had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like us, that it was a hopeless research program, but they took Cartesian philosophy and turned it into a research program. Anybody who knew enough recent philosophy could've predicted AI was going to fail. But nobody else paid any attention."

--Hubert Dreyfus

17 posted on 04/13/2006 8:01:25 AM PDT by Fitzcarraldo

To: Neville72

"The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Is that so? Well, they didn't tell me about it.


18 posted on 04/13/2006 8:03:53 AM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: noobiangod

Not totally in agreement... the more we learn, the more we can forget... and misuse. I work in an environment with many, many "smart" folks, yet the rate of error is about the same with our new tech toys. They might know more "tech dreck" but they have forgotten lots of basic non-tech AND tech stuff. Multi-task? Some can't even mono-task.


19 posted on 04/13/2006 8:04:32 AM PDT by Getready

To: Neville72

"Grog no like Superintelligence".


"......creation of superintelligence......"

Meanwhile... the Muslim world is still living in the 7th century (and attempting to disrupt all 21st-century civilizations).

20 posted on 04/13/2006 8:08:56 AM PDT by DoctorMichael (The Fourth-Estate is a Fifth-Column!!!!!!!!!!!!!!!)

To: CertainInalienableRights

"Considering that most of the people who are supposed intellectual superiors (libs) make some of the most catastrophic decisions in the history of humanity, I'm not sure this singularity is a good idea.

But I'm just a neanderthal conservative.

Maybe instead I should be the first to welcome our singularity overlords..."

I agree with you. Unfortunately, I can see no way to stop this type of thing from happening. Consider this: pretty soon, military systems will start becoming too fast-acting for people to control. It's a little like programmed trading, if you're familiar with that. The computers play games with the stock market that no person can keep up with.

Programmed trading is constrained by law. But consider the military case. Suppose we suspected that such a system, set up to defend us, was going awry and working against our self-interest. If we pulled the plug, we would be vulnerable to our enemies. (This scenario presumes we would have enemies on a similar technological level.)

Anyway, if you think about it, we will be developing all sorts of dependencies on computers that we can't really reverse or get out of without tremendous costs. Just think of the biggest microprocessors. I'm not up to date on the numbers, but the last I heard they were up to a third of a billion transistors in a single IC chip. Obviously, no person can actually understand such a design. Makes you think.


21 posted on 04/13/2006 8:10:31 AM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: NoStaplesPlease

"But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware."

Self-awareness is not the most important question. Consider: they have already developed a computer/program combination that can play pretty much even with the best human chess player in the world. Chess used to be considered one of the highest measures of human intelligence. In a short while (if they choose to do it) they could make a computer that can crush any person at chess.

There simply is no limit to this process of development, unfortunately.
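As an aside, chess programs of that era won by brute-force game-tree search plus a position evaluator, not by anything resembling self-awareness. A minimal minimax sketch over a hypothetical two-ply toy game makes the mechanism concrete (illustrative only; the tree and scores are invented, and this is nothing like Deep Blue's actual code):

```python
# Minimax over a toy game tree: inner lists are choice points,
# integer leaves are position scores from the maximizer's viewpoint.
def minimax(node, maximizing):
    if isinstance(node, int):               # leaf: an evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# We move first (maximize); the opponent then minimizes our score.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the best first move guarantees at least 3
```

The engine's "skill" is exhaustive lookahead, which is exactly why raw processing power translates directly into chess strength.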


22 posted on 04/13/2006 8:13:39 AM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: DoctorMichael

""Grog no like Superintelligence"."

I'm with you, Grog. Any room in the cave, there?


23 posted on 04/13/2006 8:14:36 AM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: strategofr
I'm with you, Grog. Any room in the cave, there?

At least he will be alive. There's no assurance SI (Singularity Intelligence) will tolerate humans and their thought patterns.

24 posted on 04/13/2006 8:22:32 AM PDT by Fitzcarraldo

To: strategofr
Unfortunately, I can see no way to stop this type of thing from happening.

Maybe SI should be quarantined in space, or a separate human colony established in space beforehand that can defend itself and destroy a rogue SI, should things go terribly wrong.

25 posted on 04/13/2006 8:28:21 AM PDT by Fitzcarraldo

To: NoStaplesPlease
Ultimately I don't know how you test for true self-awareness compared simply to well-mimicked self-awareness. A very complex computer could very persuasively imitate human intelligence, sure. But actually think for itself?

Good point. No matter how convincing the test, there will always be people who refuse to believe the AI is self-aware. I wonder if this would lead to the next step in civil rights.

Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power. There's something mysterious going on in there, and until we can describe that mysteriousness, we're not going to be able to create it in machines.

If a computer can become self-aware, does it have the ability to believe that God is self-evident? If it does, do you think it might then have a soul?

I've gotta run now (work and all that) but I'll check back tonight. Thanks.

26 posted on 04/13/2006 8:31:51 AM PDT by SunTzuWu (Hans Delbruck - Scientist and Saint.)

To: Neville72
It's all crap. It's a bunch of people who read too much sci-fi and wish it were real. Not that I have anything against sci-fi -- it's my favorite literature -- but it's just entertainment.

When I was in school, I had a professor who worked a lot with robotics and AI. It wasn't a class in our computer science program, but he did talk about it a lot.

The problem you run into is that the self-aware human mind exhibits some qualities, some of which are difficult to put a finger on, that a solid-state electronic computer is physically incapable of reproducing, no matter how complicated it is.

A computer program can be theoretically modeled with something called a state-transition diagram. This diagram represents every single possible state the computer could be in, and how it transitions from state to state. As an academic exercise, you might design a state-transition diagram that causes a computer to go into an accept state when a certain string is input. This could be drawn on one page. The diagram for something like Windows XP, however, would be so large that I'm nearly certain no one has ever bothered making one. However, if they did, what they could do is describe, to the minute detail, every single possible thing Windows could ever do. And anything not in that diagram is something the program could not do, ever, under any circumstances.
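The accept-state exercise described above can be sketched in a few lines. This toy deterministic finite automaton (the accepted language -- strings over a/b ending in "ab" -- is an arbitrary illustrative choice, not from the post) shows the key property: the transition table enumerates every possible behavior up front, and nothing outside it can ever happen:

```python
# Toy DFA: accepts exactly the strings over {a, b} that end in "ab".
# The table below IS the state-transition diagram: every state and
# input symbol is listed, so every possible behavior is enumerated.
TRANSITIONS = {
    ("start", "a"): "seen_a",
    ("start", "b"): "start",
    ("seen_a", "a"): "seen_a",
    ("seen_a", "b"): "accept",
    ("accept", "a"): "seen_a",
    ("accept", "b"): "start",
}

def accepts(s: str) -> bool:
    """Run the DFA; True if it halts in the accept state."""
    state = "start"
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state == "accept"

print(accepts("aab"), accepts("abb"))  # True False
```

Windows XP's diagram would be astronomically larger, but in principle no different from this table.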

The human brain does not work this way, unless we truly are the sum of our parts. Human beings come with some basic 'software' installed. We call them instincts. Unlike a computer, which has no choice but to obey its programming, we can ignore our own instincts if we choose to.

It's almost an issue of free will. Computers do the things they do because they literally have no choice. They can't choose what to do or what not to do any more than the sun could choose whether or not to quit shining or the snow could choose whether or not to be cold. Human beings, however, have the ability to do this, which is almost paradoxical; the ability to choose anything you want suggests that true randomness exists and the universe is non-deterministic, or at least that the universe allows non-determinism.

Computers, by contrast, are remarkably deterministic. Even a random number generator in a computer isn't really random; it just generates a large enough set of numbers to be good enough in most cases. Feed it the same random seed value and you'll get exactly the same sequence of not-so-random numbers.

If the universe, however, is deterministic and not non-deterministic, then human beings really don't have free will, and any thought that you did is simply a lie -- or rather, you had that thought because you were programmed to and had no choice in the matter. As for me, I don't believe that. I think we do have free will, a precious gift granted to mankind by no less than God Himself. Anyway, that's my personal opinion. Your mileage will probably vary.
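The point about seeded random number generators is easy to demonstrate; a minimal sketch using Python's standard library (the seed value 42 is arbitrary):

```python
import random

# Two generators seeded with the same value produce the same
# "random" sequence: the output is fully determined by the seed.
gen1 = random.Random(42)
gen2 = random.Random(42)

seq1 = [gen1.randint(0, 99) for _ in range(5)]
seq2 = [gen2.randint(0, 99) for _ in range(5)]

print(seq1 == seq2)  # True: identical seeds, identical sequences
```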

As long as computers are built with solid-state components, I think it's physically impossible for them to have intelligence, short of divine intervention by God Himself (a possibility I don't count out, but that's another thread). Computers that function on a non-deterministic principle have the potential for intelligence or self-awareness. The only two ways I can see to *maybe* accomplish this are to use either wetware or quantum computing.

Quantum physics is highly chaotic, and any computer based on it would have potential to be non-deterministic.

Wetware solutions would include using cloned brains instead of CPUs and hard drives to create a self-aware computer. However, once you do that, I don't really think it qualifies as a computer anymore.

Anyway, these people are a little crazy, in my opinion. Creating true AI is not as simple as they make it sound, and it may not be desirable either. I know we've seen plenty of sci-fi like The Matrix that deals with murderous AIs. We don't know for certain that any computer with intelligence wouldn't turn out to be a nice guy with a sense of civic responsibility who loves kids. OTOH, we don't know that it wouldn't go psycho on us either.

Honestly, we know so little about natural intelligence that we really can't even define it properly, much less manufacture it. These people are ahead of themselves.
27 posted on 04/13/2006 8:36:30 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: NoStaplesPlease

See my post #27 for what I think of that.


28 posted on 04/13/2006 8:39:28 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: Neville72
Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

What if the answer is: "We can't".

29 posted on 04/13/2006 8:40:28 AM PDT by Fitzcarraldo

To: strategofr
Self-awareness is not the most important question. consider, they already have developed a computer/computer program combination that can play pretty much equal with the best human chess player in the world. Chess used to be considered one of the highest measures of human intelligence.

It's my opinion that that was an invalid experiment. Garry Kasparov is a chess player. His environment is one where he sits down at the chess board across from another player and attempts to win.

Kasparov went into the game against Deep Blue with the same mindset he had always had: play against another chess player.

He lost because that wasn't the game he was actually in. He went in playing against not a computer, but a programmer. Superficially, it seemed to be the same contest he was familiar with, but in truth it was totally different. It really wasn't a chess game anymore, but something else with chess as the window dressing.

The programmer better understood the rules and the environment of the contest than Kasparov did. If Kasparov had gone in with the mindset of defeating a programmer's toy at chess, there's a much higher chance he'd have won.
30 posted on 04/13/2006 8:44:58 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: Fitzcarraldo

Thanks...you mean Hubert Dreyfus, the professor at UCB? That was a good link. Modern AI embraces the idea of handling the gazillions of special cases of real life, rather than abstracting them to symbols and rules. Fluid representations dominate.


31 posted on 04/13/2006 8:45:04 AM PDT by no-s

To: JamesP81
Anyway, these people are a little crazy, in my opinion.

With them, it's an all or nothing situation. That's extremely dangerous, given that they don't account for the unfavorable outcomes at all, assuming everything will be rosy if THEY design the "seed" SI.

32 posted on 04/13/2006 8:45:11 AM PDT by Fitzcarraldo

To: JamesP81
The programmer better understood the rules and the environment of the contest than Kasparov did. If Kasparov had gone in with the mindset of defeating a programmer's toy at chess, there's a much higher chance he'd have won.

That's the problem. The machine could "win" to the detriment of the human race. At some point even the programmer of SI will not be able to keep up.

33 posted on 04/13/2006 8:47:49 AM PDT by Fitzcarraldo

To: no-s
Modern AI embraces the idea of handling the gazillions of special cases of real life, rather than abstracting them to symbols and rules.

SI capable of exploiting nanotechnology will be unstoppable. Its thought processes will be alien to humans. You've stated it very well.

34 posted on 04/13/2006 8:50:58 AM PDT by Fitzcarraldo

To: SunTzuWu
If we succeed in creating an AI, will that change your views on religion or make you an atheist?

For me, personally, no. It would be no different than a test-tube baby: it's still a person, and destroying it would still constitute murder before God. Any truly artificially intelligent computer would, IMO, be legally entitled to the same constitutional rights as anyone else.

Here's an interesting thought: such a being could, in a very real sense, be considered an alien (not the illegal kind, the little green men kind) because it's an intelligent being that's not human (unless we base it on a cloned human brain, a la the wetware solution I talked about in #27).
35 posted on 04/13/2006 8:52:44 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: Fitzcarraldo
What if the answer is: "We can't".

Then let's hope it isn't created. As a computer scientist, I concede the possibility someone might do it. OTOH, even if they did, I think we would still win the ensuing war, eventually.

There are two things in this universe I have faith in. One is God's mercy. The other is the human ability to inflict devastation. Basically, I don't think an AI would have the sheer d@mned bloodthirsty meanness necessary to kill us all before we took it down.
36 posted on 04/13/2006 8:56:44 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: NoStaplesPlease

But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware.
Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.

Ultimately I don't know how you test for true self-awareness compared simply to well-mimicked self-awareness.
A very complex computer could very persuasively imitate human intelligence, sure.
But actually think for itself? I believe this would have to be an illusion.

Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power.

There's something mysterious going on in there, and until we can describe that mysteriousness, we're not going to be able to create it in machines.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Mirror Test

One benchmark for "self-awareness" in animals and people (and now robots as well) is whether they will perform self-directed actions when looking in a mirror. When a mark is placed on a child's forehead, the child will only begin to inspect it on his or her own forehead at the age of 3 or 4. Adult bottlenose dolphins perform similarly in equivalent tests designed for underwater use.

According to this Discovery News article, Junichi Takeno and a team of researchers at Meiji University in Japan have observed similar behavior in a robot with a hierarchical neural network.


Developing Intelligence: Imitation vs Self-awareness: The Mirror Test
http://develintel.blogspot.com/2005/12/imitation-vs-self-awareness-mirror.html


37 posted on 04/13/2006 8:57:58 AM PDT by tpaine

To: JamesP81
because it's an intelligent being that's not human (unless we base it on a cloned human brain,

I don't anticipate any restrictions, ethics, or primary design principles will be implemented save those that expand the SI's capabilities as quickly as possible. We've seen this happen before in human technological development, save for the restrictions the Amish place on the introduction of new technology to their culture. SI is to the 21st century what nuclear weapons were to the 20th. The only difference is that nuclear weapons are controlled by humans. SI will not be.

38 posted on 04/13/2006 9:00:07 AM PDT by Fitzcarraldo

To: tpaine; NoStaplesPlease; All
I think our worry would be much better spent on 'dumb' computer systems under the control of wacked-out humans, a la President Ahmadinejad in Iran.

Another thing to consider: they're working on creating neural interfaces. If that's ever perfected, a human linked to a computer would have all the advantages of being human, plus the reaction time and computing capability of a modern computer. I submit to you that no AI would ever be superior to that. In fact, it would probably be a few grades inferior.
39 posted on 04/13/2006 9:01:27 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: Fitzcarraldo
There are concerns about SI, but I'm not yet convinced we will have the technology to do it in this century. I mentioned possibly using wetware and quantum computers to do it, but that's just educated guesswork. It *might* work, and it might not.

Humans also have a blessed disinclination to carry anything to its ultimate apocalyptic conclusion. Hopefully, that trend will hold if it turns out that it is possible.
40 posted on 04/13/2006 9:04:19 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: JamesP81
Basically, I don't think an AI would have the sheer d@mned bloodthirsty meanness necessary to kill us all before we took it down.

Meanness is a human trait. Most of the SI researchers espouse a humanistic/relativistic worldview anyway. Will they program SI with the Ten Commandments? I don't think so.

In terms of lethality, an SI analogy to a nuclear weapon is that the software is the trigger and nanotechnology is the lump of plutonium. Kept separate, we might have a chance. Together, the world could be transformed into "computronium" overnight.

41 posted on 04/13/2006 9:06:47 AM PDT by Fitzcarraldo

To: strategofr

"Self-awareness is not the most important question."

Well, we already know that computers can outperform us, given a set of instructions. And I suppose that a computer that could write its own program and set of instructions would be quite "intelligent" -- even possibly dangerous.

But it would also lack imagination, no? Or at least would have a limited imagination. We'd still have ingenuity on our side.


42 posted on 04/13/2006 9:12:44 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 22 | View Replies]

To: JamesP81
There are concerns about SI, but I'm not yet convinced we will have the technology to do it in this century.

I hope you are right. I favor a quarantined SI/nanotech solution, with assured "reboot" capability, maybe on the surface of the Moon or Venus.

Nuclear weapons were much easier to control, once they were developed. SI/nanotech will be extraordinarily difficult to control, if it is developed at all.

43 posted on 04/13/2006 9:13:08 AM PDT by Fitzcarraldo
[ Post Reply | Private Reply | To 40 | View Replies]

To: tpaine
One benchmark for "self-awareness" in animals and people (and now robots as well) is whether they will perform self-directed actions when looking in a mirror.

I think this form of self-awareness is only operational from the standpoint of an outside observer, and doesn't prove that a robot is actually self-aware in the human "I know I exist" sense.

44 posted on 04/13/2006 9:15:35 AM PDT by Fitzcarraldo
[ Post Reply | Private Reply | To 37 | View Replies]

To: JamesP81
It's almost an issue of free will.

Computers do the things they do because they literally have no choice. They can't choose what to do or what not to do any more than the sun could choose to quit shining or the snow could choose not to be cold.

Human beings, however, do have this ability, which is almost paradoxical: the ability to choose anything you want suggests that true randomness exists and the universe is non-deterministic, or at least that the universe allows non-determinism.
If the universe is deterministic rather than non-deterministic, then human beings really don't have free will, and any thought that you did is simply a lie; rather, you had that thought because you were programmed to and had no choice in the matter.

As for me, I don't believe that. I think we do have free will, a precious gift granted to mankind by no less than God Himself.
Anyway, that's my personal opinion. Your mileage will probably vary.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Aren't you ignoring the fact that all animals have free will, even though many are not self-aware?

The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.
45 posted on 04/13/2006 9:23:21 AM PDT by tpaine
[ Post Reply | Private Reply | To 27 | View Replies]

To: JamesP81
Anyway, these people are a little crazy, in my opinion. Creating true AI is not as simple as they make it sound, and it may not be desirable either.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005).

I'm reading the book. About 1/2 way through. You knock the ideas that these people have, but they are well thought through and documented in spades. Have a look at the book the next time you are in Borders. You may be surprised. In particular, Kurzweil isn't predicting that any of this is going to happen overnight. The big changes are 30 to 40 years away. Look back 40 years and think about the state of automatic voice recognition, pattern recognition, database indexing of billions of documents, instant and essentially free worldwide communication in any household that wants it, and computers for $800 that are as good as anything that IBM had in 1966.

Things are changing. And fast.

46 posted on 04/13/2006 9:25:07 AM PDT by InterceptPoint
[ Post Reply | Private Reply | To 27 | View Replies]

To: SunTzuWu
These are terrific questions, btw.

I wonder if this would lead to the next step in civil rights.

I wonder too. And as someone who does not think that rights extend from the ability to feel pain (I'm looking at you, PETA) I certainly oppose such a thing. But any liberal who's seen Blade Runner will probably make a case for it.

If a computer can become self-aware, does it have the ability to believe that God is self evident? If it does, do you think that it might then have a soul?

Wow. I guess my answer is 1) yes, and 2) no. I guess it's similar to the question about whether a clone would have a soul, right? Now, I tend to think a clone would -- although not conceived in the normal process, it would be flesh, a biological human being. But a computer is still silicon. In humans, I imagine the soul and life begin at the same time. But even if a computer is self-aware and believes in God, it's still not "alive." Deep thoughts, all right...
47 posted on 04/13/2006 9:27:15 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 26 | View Replies]

To: JamesP81

I don't know. Ray Kurzweil has already revolutionized multiple areas of human endeavor. I believe he did a lot of the foundational work on digital audio sampling, which led to electronic music synthesizers that accurately mimic instruments. He also invented a lot of the basic OCR (optical character recognition) technology. His web site has a robotic person with a synthesized voice that you can interact with.

We routinely interact with voice response systems that are able to understand our speech. In 1985, a friend who was an AI research PhD at a university told me that it "might never be possible."

I think the track record of Dr. Kurzweil is pretty impressive and I would not bet against him.


48 posted on 04/13/2006 9:29:08 AM PDT by Jack Black
[ Post Reply | Private Reply | To 27 | View Replies]

To: JamesP81

Brilliant. Much better explained than I could manage. I'm with you all the way.


49 posted on 04/13/2006 9:30:44 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 28 | View Replies]

To: Jack Black
But any liberal who's seen Blade Runner will probably make a case for it.

Huh? How about any person who has seen Blade Runner. After all, while genetically engineered and grown in vats, the replicants were people, with intelligence, feeling, emotion, and sensation. Would you support NOT extending rights to such people? Based on what ideology? Conservatism? I don't think so.

50 posted on 04/13/2006 9:32:00 AM PDT by Jack Black
[ Post Reply | Private Reply | To 48 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson