Free Republic


Singularity Summit At Stanford Explores Future Of 'Superintelligence'
KurzweilAI.net ^ | 4/13/2006 | Staff

Posted on 04/13/2006 7:22:29 AM PDT by Neville72

The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.

The event will bring together leading futurists and others to examine the implications of the "Singularity" -- a hypothesized creation of superintelligence as technology accelerates over the coming decades -- to address the profound implications of this radical and controversial scenario.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."
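Kurzweil's "billions of times more intelligent" figure is, at bottom, exponential arithmetic. A back-of-the-envelope sketch (the doubling rate is an assumption for illustration, not a number from the article):

```python
# Back-of-the-envelope check of the "billions of times" figure, assuming
# (this is an assumption, not a claim from the article) that effective
# computational capacity doubles about once a year for three decades.
doublings = 30
growth = 2 ** doublings
print(growth)  # 1073741824, i.e. on the order of a billion
```

Thirty annual doublings alone gets to roughly 10^9; under Kurzweil's accelerating-returns model the doubling interval itself shrinks, which only steepens the curve.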

"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.

The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.

Among the issues to be addressed:

Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?

Doctorow: Will our technology serve us, or control us?

Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?

Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?

Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?

More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?

Peterson: How can we safely bring humanity and the biosphere through the Singularity?

Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.

The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.


TOPICS: Miscellaneous
KEYWORDS: ai; borg; computer; cyborg; evolution; evolutionary; exponentialgrowth; future; futurist; genetics; gnr; humanity; intelligence; knowledge; kurzweil; longevity; luddite; machine; mind; nanotechnology; nonbiological; physics; raykurzweil; robot; robotics; science; singularity; singularityisnear; spike; stanford; superintelligence; technology; thesingularityisnear; transhuman; transhumanism; trend; virtualreality; wearetheborg
To: Moonman62

Eventually Moore's law cannot keep holding up, at least as long as we keep using silicon chips.

And so I do think we will start using bio circuits. Once computers are flesh, will they then have a soul? I still think no, but that's another line of discussion altogether.


61 posted on 04/13/2006 10:08:40 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 57 | View Replies]

To: tpaine
Aren't you ignoring the fact that all animals have free will, even though many are not self aware?

It may be that true intelligence would require both self-awareness and free will. And the free will question still puts us back into the old deterministic v non-deterministic question. Even so, I'm not entirely convinced most animals do have free will. They may, but I don't know how you'd prove that one way or another.
62 posted on 04/13/2006 10:12:39 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)
[ Post Reply | Private Reply | To 45 | View Replies]

To: Fitzcarraldo; tpaine

Sorry if I was unclear. I agree, we are more than the sum of our computational powers.


63 posted on 04/13/2006 10:13:22 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 59 | View Replies]

To: tpaine

Very interesting, although that post does heap plenty of skepticism on it. Interesting as heck, though.

I would also like to add that JamesP81 is right... superintelligence isn't the issue nearly as much as what a "dumb" computer could do under the control of bad human beings.


64 posted on 04/13/2006 10:13:30 AM PDT by NoStaplesPlease
[ Post Reply | Private Reply | To 37 | View Replies]

To: Diamond
I think the phrase "artificial intelligence" is a contradiction in terms, a deliberate misuse of language to promote a certain agenda. There is really no such thing.

"Machine learning" is a reasonable term for the process they are trying to perfect. Again, can a machine "know that it exists" in the same sense as a human? Who can even say any human other than ourselves is self-aware and not an automaton? We assume it's true on the basis of outward actions and responses.

65 posted on 04/13/2006 10:17:34 AM PDT by Fitzcarraldo
[ Post Reply | Private Reply | To 55 | View Replies]

To: Jack Black
I think the track record of Dr. Kurzweil is pretty impressive and I would not bet against him.

It's one thing to make a better program or a faster processor. It's quite another to invent a living, self-aware creature, if that's in fact what an AI would be.
66 posted on 04/13/2006 10:17:48 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)
[ Post Reply | Private Reply | To 48 | View Replies]

To: tpaine
Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power.

There are a small number of AI programmers and computer scientists who hold the idea that true intelligence, of the human kind, is only possible because of a spiritual influence: the soul. I suppose I would count myself among them; I don't think a being the equal of a human can exist without a soul, because without one it can never equal a man. The open question is whether God would consider AI a good thing and grant a soul to an AI we try to build.

This, of course, doesn't even qualify as educated guesswork. It's a way out there WAG and we won't know if it's right or wrong for a long time. We may never know.
67 posted on 04/13/2006 10:21:20 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)
[ Post Reply | Private Reply | To 37 | View Replies]

To: JamesP81
It's all crap.

Wow! That's quite a bold statement.

It's a bunch of people who read too much scifi and wish it were real.

No, it's not. As posted above, Dr. Kurzweil is probably the closest thing we had to Thomas Edison in the second half of the 20th century.

The problem you run into is that the self-aware human mind exhibits qualities, some of which are difficult to put a finger on, that a solid-state electronic computer is physically incapable of reproducing, no matter how complicated it is.

So you say. But an AI need not necessarily reproduce "some qualities" of the human mind to achieve sentience. Also, what it is possible to do with computers is constantly increasing. Today they can understand continuous human speech; as mentioned previously, 20 years ago even AI researchers thought this might be impossible.

A computer program can be theoretically modeled with something called a state-transition diagram. This diagram represents every single possible state the computer could be in ... The human brain does not work this way,

Are you sure? What if you could disassemble a brain at the atomic level and reassemble it, atom by atom?
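The state-transition model mentioned a couple of lines up can be made concrete. A minimal sketch of a finite-state machine in Python, with states and events invented purely for illustration:

```python
# A minimal finite-state machine: the "state-transition diagram" idea made
# concrete. Every possible state and transition is enumerated up front;
# the states and events here are invented purely for illustration.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return transitions.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # "idle": every possible history is traceable through the table
```

The point of contention in the thread is whether a brain's behavior could, even in principle, be captured by a (vastly larger) table of this kind.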

unless we truly are the sum of our parts.

which I think many of the Singularity people would assert. My own take is we don't know enough to say with assurance either way.

Human beings come with some basic 'software' installed. We call them instincts. Unlike a computer, which has no choice but to obey its programming, we can ignore our own instincts if we choose to.

We can't ignore our instinct to breathe, or to have our heart beat. One of the requirements for AI is that computers or AIs have volition, the ability to choose things. It certainly seems possible that they will get there.

I think we do have free will, a precious gift granted to mankind by no less than God Himself. Anyway, that's my personal opinion. Your mileage will probably vary.

I think we have free will. I think we will build computers that have free will. I don't see the existence of a God as needed to hold these beliefs, nor do I see these beliefs as absolutely contradicting the existence of God.

As long as computers are built with solid state components, I think it's physically impossible for them to have intelligence,

You've stated that several times, but you haven't really explained why you have this belief. Or at least your argument seems circular to me.

Anyway, these people are a little crazy, in my opinion.

Probably. Most innovators are a little crazy.

Creating true AI is not as simple as they make it sound,

Here, I agree with you. Some of them talk about it like it is already accomplished. Then again no one thought computers would beat humans at chess when I was a kid. Now most people can't beat the $49 chess program you buy at Borders.

and it may not be desireable either.

True. But it probably won't be stopped. Nukes were perhaps not desirable, but we have them. Bill Joy has argued that we are so far ahead of our morality with our technology that we must stop work on this now. But, outside of the minds of one-world, UN utopians there is no controlling authority for scientific research. Thus, if it can happen, it will happen.

These people are ahead of themselves.

Well, if there is even a chance that Kurzweil's predictions could be correct (self-aware, Turing-test-passing AIs by 2029), we need to be having a LOT more discussion about it, not less. These people may be ahead of themselves, but we as a society are probably lagging behind a bit.

68 posted on 04/13/2006 10:22:42 AM PDT by Jack Black
[ Post Reply | Private Reply | To 27 | View Replies]

To: InterceptPoint

"You knock the ideas that these people have but they are well thought through and documented in spades. Have a look at the book the next time you are in Borders. You may be surprised."



I read the book a couple of months ago and was equally impressed with the documentation. I came away with one overriding impression. What Kurzweil predicts will happen, in general terms and plus or minus a few years, is inevitable.

Even if a group of countries, or even a majority of the world's countries, concluded that nanotechnology or AI was too dangerous and had to be banned, it would merely go underground and emerge anyway, probably in the hands of someone immensely dangerous. Better to have everyone working on ways to ensure it's safe than to have it in the hands of a few crazies.


69 posted on 04/13/2006 10:23:12 AM PDT by Neville72 (uist)
[ Post Reply | Private Reply | To 46 | View Replies]

To: Neville72
"Singularity" -- a hypothesized creation of superintelligence as technology accelerates over the coming decades....

That's a conference I'd like to attend.

It would be interesting to see how they address the issue of imbuing the property of "desire" (as opposed to merely programmed logic) into artificial intelligence.

No need to be overly concerned until they do.

70 posted on 04/13/2006 10:25:13 AM PDT by nightdriver
[ Post Reply | Private Reply | To 1 | View Replies]

To: Fitzcarraldo
"-- I think we do have free will, a precious gift granted to mankind by no less than God Himself. --"

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Aren't you ignoring the fact that all animals have free will, even though many are not self aware?

The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.

Fitz:
Most animals don't make/use tools, in the main, save useful adaptations and behaviours they have been endowed with. It is possible, however, someday we'll see an ape fashion a ladder and escape from a zoo.

No kidding? Does this insight have anything to do with my comment about free will & intelligence?

It's been definitely demonstrated that humans can make human and inhuman tools.

Again, you're making a point not in contention. Why?

The ultimate inhuman tool could be SI/nanotech (would the acronym SIN be apropos?). Food for thought.

Ahh, I see; -- you want to make 'sin' the point. Is it a sin to make the 'wrong' tools?

-- Ask your friendly ATF agent about making a machine gun. -- Then give some thought about who gets to decree what tools are to be "sinful".

71 posted on 04/13/2006 10:25:59 AM PDT by tpaine
[ Post Reply | Private Reply | To 52 | View Replies]

To: Fitzcarraldo
I hope you are right. I favor a quarantined SI/nanotech solution, with assured "reboot" capability, maybe on the surface of the Moon or Venus.

I'm not sure that would be far enough away. With a strong enough transmitter, it could still access the internet through our comm satellites. If from Venus, its signal would suffer a delay of several minutes (varying with the planets' positions), but it would still be feasible.
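The Venus signal delay mentioned above can be sanity-checked from orbital geometry. A quick sketch using textbook orbital radii (my numbers, not the poster's):

```python
# Sanity-checking the Earth-Venus signal delay, assuming straight
# line-of-sight and circular orbits with textbook radii (assumed values,
# not from the post). The one-way delay varies with planetary positions.
C_KM_S = 299_792.458                 # speed of light, km/s
AU_KM = 149_597_870.7                # one astronomical unit, km
EARTH_AU, VENUS_AU = 1.0, 0.723

closest = (EARTH_AU - VENUS_AU) * AU_KM    # inferior conjunction
farthest = (EARTH_AU + VENUS_AU) * AU_KM   # superior conjunction
for d in (closest, farthest):
    print(round(d / C_KM_S / 60, 1), "minutes one-way")
```

By this estimate the one-way delay ranges from roughly 2 minutes at closest approach to over 14 minutes when the Sun is between the two planets.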

The reboot option I'd use would be a nuclear missile that didn't have a single communications device attached to it, and had to be fired mechanically.
72 posted on 04/13/2006 10:26:05 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)
[ Post Reply | Private Reply | To 43 | View Replies]

To: NoStaplesPlease

Does the brain even "compute" deterministically, like an Intel CPU? Or does it converge, using myriad neuronal feedback loops, on a match between an apparent "goal" and its apparent satisfactory conclusion? Enormously inefficient perhaps from an electronic engineer's point of view, but remarkably capable, of that there is no doubt. The threat to "wetware", of course, is the blinding speed of modern electronics.
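The goal-seeking feedback loop described above can be sketched in a few lines of Python; the dynamics are invented for illustration, not a model of any real neuron:

```python
# A toy version of the contrast drawn above: instead of computing an answer
# in one deterministic pass, a feedback loop repeatedly nudges its state
# toward a goal until the mismatch is small enough. Dynamics are invented.
def converge(goal, state=0.0, gain=0.3, tolerance=1e-3):
    steps = 0
    while abs(goal - state) > tolerance:
        state += gain * (goal - state)  # each pass closes part of the gap
        steps += 1
    return state, steps

state, steps = converge(goal=1.0)
print(round(state, 3), steps)
```

Each pass only closes a fraction of the remaining gap, so the loop takes many iterations to settle: inefficient by CPU standards, but robust to noise in a way a single-pass calculation is not.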


73 posted on 04/13/2006 10:27:36 AM PDT by Fitzcarraldo
[ Post Reply | Private Reply | To 64 | View Replies]

To: JamesP81
Computers that function on a non-deterministic principle have the potential to have intelligence or self-awareness. The only two ways I can see to *maybe* accomplish this are to use either wetware or quantum computing.

Before you get too far into your hypothesizing, you do realize that all these computing models (and vanilla silicon) are completely computationally equivalent, right? Not just at a handwavy high level but at a fundamental mathematical level. If we accept your assumption, then we can trivially prove that vanilla silicon is fully capable of all those things. And "non-determinism" does not really have the implications that you seem to think it does with respect to computation.

You might need to double check some of your assumptions and explore the mathematical relationships between some of the terms you are using.
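tortoise's equivalence point can be shown in miniature: a nondeterministic automaton can be simulated exactly by deterministic code that just tracks the set of states it might be in. The toy automaton below is invented for illustration:

```python
# Nondeterminism simulated deterministically: this NFA accepts binary
# strings whose second-to-last symbol is 1. The deterministic simulator
# tracks the *set* of states the NFA could occupy, so no computational
# power is lost. (Toy automaton, invented for illustration.)
nfa = {
    ("a", "0"): {"a"},
    ("a", "1"): {"a", "b"},   # the nondeterministic choice
    ("b", "0"): {"c"},
    ("b", "1"): {"c"},
}
start, accepting = {"a"}, {"c"}

def accepts(string):
    states = start
    for symbol in string:
        states = set().union(*(nfa.get((s, symbol), set()) for s in states))
    return bool(states & accepting)

print(accepts("0110"), accepts("0100"))  # True False
```

This set-of-states trick is the standard subset construction: anything the "nondeterministic" machine can decide, plain deterministic silicon can decide too, which is the mathematical equivalence tortoise is pointing at.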

74 posted on 04/13/2006 10:29:06 AM PDT by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 27 | View Replies]

To: NoStaplesPlease
I totally disagree. Maybe the religious right would hold out for some fundamentalist interpretation and "soul test," but I would certainly support full rights for Replicants. I mean, the whole point of the last scene was that Rutger *was* a self-aware, beauty-appreciating individual, intensely aware of his own mortality.

That is far and away enough to qualify for civil rights.

75 posted on 04/13/2006 10:31:31 AM PDT by Jack Black
[ Post Reply | Private Reply | To 56 | View Replies]

To: Moonman62
Kurzweil is a big time self promoter and carnival barker, so I'd be suspicious of any of his claims. As far as AI goes, we'll probably end up making them organic like the brain already is rather than something like integrated circuits.

I agree, and it opens one hell of a can of worms if we do it, too. If you make a program or some hardware that's self-aware, you can always say that it's not alive; it's made of circuits and silicon, all dead material. And you'd have a good case. But if you make something artificially intelligent with a living brain, you change the whole paradigm. At that point, calling it a computer may not be accurate anymore. It may be that it has become a person, with a metal body. You can bet the legal fur would start flying then.
76 posted on 04/13/2006 10:34:31 AM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)
[ Post Reply | Private Reply | To 57 | View Replies]

To: JamesP81
The reboot option I'd use would be a nuclear missile that didn't have a single communications device attached to it, and had to be fired mechanically.

The "reboot" option, paradoxically, might require a "pacified" SI/nanotech response.

We could be opening quite a Pandora's box.

77 posted on 04/13/2006 10:39:49 AM PDT by Fitzcarraldo
[ Post Reply | Private Reply | To 72 | View Replies]

To: tpaine
The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.

No system has the ability to know with certainty its next action. This is an elementary theorem used in many areas of mathematics, and used so pervasively that most people do not even recognize they are using it. It is the reason, for example, that one can never guarantee with perfect certainty that something is in a particular state (the basic concern of transaction theory), though as a practical matter we treat very high probabilities of a particular state as "perfect certainty".

78 posted on 04/13/2006 10:41:16 AM PDT by tortoise (All these moments lost in time, like tears in the rain.)
[ Post Reply | Private Reply | To 45 | View Replies]

To: Diamond
I really don't agree with, or see the basis of, his arguments. Our experience so far has not been that technology progressively enslaves men. Indeed, liberty and choice are much more in evidence now than they were in most other eras of human history.

I know a lot of Christians think C.S. Lewis is some awesome philosopher, but as this example shows, I think not.

79 posted on 04/13/2006 10:41:17 AM PDT by Jack Black
[ Post Reply | Private Reply | To 55 | View Replies]

To: Neville72
Imagine, we will be the generation responsible for both the formation of the 'Singularity' and Britney Spears.

Maybe if the Singularity gets out of hand we can just feed it Britney... that should slow it down.
80 posted on 04/13/2006 10:43:12 AM PDT by Daus
[ Post Reply | Private Reply | To 1 | View Replies]



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.

