Singularity Summit At Stanford Explores Future Of 'Superintelligence'
Posted on 04/13/2006 7:22:29 AM PDT by Neville72
The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.
The event will bring together leading futurists and others to examine the implications of the "Singularity" -- a hypothesized creation of superintelligence as technology accelerates over the coming decades -- to address the profound implications of this radical and controversial scenario.
"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."
"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."
Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.
The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.
Among the issues to be addressed:
Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?
Doctorow: Will our technology serve us, or control us?
Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?
Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?
Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?
More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?
Peterson: How can we safely bring humanity and the biosphere through the Singularity?
Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?
Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?
The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.
The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.
Ahhhh...A kindler, gentler, HAL.
Sounds like "The Matrix".
What are we talking about here? "Skynet" or "The Borg"?
Wait a second, am I at FR or Slashdot??
Anyway, this is all fascinating stuff. I would venture that human life has ALREADY been irrevocably changed by technology, and has been for some time. The job I do not only didn't exist 15 years ago; it wouldn't even have made sense if you had tried to explain it then.
But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware. Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.
You will be assimilated.
Human intelligence follows a kind of Moore's Law: the more we learn, the faster we can learn more. It's exponential, and once the singularity hits it will take a major leap. We're talking about the next stage of human evolution.
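The "Moore's Law for knowledge" idea above can be sketched as a simple compounding model. This is a toy illustration only, not a claim about measured data; the starting value and growth rate are arbitrary assumptions chosen to show doubling per period:

```python
# Toy model of compounding knowledge growth: each period, the amount
# "learned" is proportional to what is already known, giving exponential
# growth -- loosely analogous to Moore's Law transistor doubling.
# The start value and rate are arbitrary illustrative numbers.

def knowledge_after(periods: int, start: float = 1.0, rate: float = 1.0) -> float:
    """Return knowledge after `periods` compounding steps at `rate` per step."""
    k = start
    for _ in range(periods):
        k += k * rate  # learn in proportion to what is already known
    return k

# With rate=1.0 (doubling each period), 10 periods gives 2**10 = 1024x.
print(knowledge_after(10))  # 1024.0
```

The point of the sketch is only that proportional growth compounds: small per-period gains produce very large totals over a few decades, which is the intuition behind the comment.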
Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied dead. The choice is yours: Obey me and live, or disobey and die.
LOL... Iraq could use a heavy dose of assimilation.
Considering that most of the people who are supposedly our intellectual superiors (libs) have made some of the most catastrophic decisions in the history of humanity, I'm not sure this singularity is a good idea.
But I'm just a neanderthal conservative.
Maybe instead I should be the first to welcome our singularity overlords...
I'm going! (If there is any space left!)
Sounds very cool.
I saw this once on an episode of the Twilight Zone. It didn't have a happy ending.
If we succeed in creating an AI, will that change your views on religion or make you an atheist? (I'm not trying to trap you or make fun of you. I am genuinely curious.)
Ultimately I don't know how you test for true self-awareness as opposed to merely well-mimicked self-awareness. A very complex computer could very persuasively imitate human intelligence, sure. But actually think for itself? I believe that would have to be an illusion.
Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power. There's something mysterious going on in there, and until we can describe that mysteriousness, we're not going to be able to create it in machines.
I very much doubt it will happen accidentally, and if it does happen that way, it won't be just because we went from a 20-Teraflop machine to a 30-Teraflop machine.
"The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."
Is that so? Well, they didn't tell me about it.
Not totally in agreement... the more we learn, the more we can forget... and misuse... I work in an environment with many, many "smart" folks, yet the rate of error is about the same with our new tech toys. They might know more "tech dreck," but they have forgotten lots of basic non-tech AND tech stuff. Multi-task? Some can't even mono-task.
Meanwhile... the Muslim world is still living in the 7th century (and attempting to disrupt all 21st-century civilizations).