Free Republic
News/Activism


Singularity Summit At Stanford Explores Future Of 'Superintelligence'
KurzweilAI.net ^ | 4/13/2006 | Staff

Posted on 04/13/2006 7:22:29 AM PDT by Neville72

The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.

The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."

"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer Prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.

The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.

Among the issues to be addressed:

Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?

Doctorow: Will our technology serve us, or control us?

Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?

Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?

Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?

More: Will our emotional, social, psychological, and ethical intelligence and self-awareness keep up with our expanding cognitive abilities?

Peterson: How can we safely bring humanity and the biosphere through the Singularity?

Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.

The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.


TOPICS: Miscellaneous
KEYWORDS: ai; borg; computer; cyborg; evolution; evolutionary; exponentialgrowth; future; futurist; genetics; gnr; humanity; intelligence; knowledge; kurzweil; longevity; luddite; machine; mind; nanotechnology; nonbiological; physics; raykurzweil; robot; robotics; science; singularity; singularityisnear; spike; stanford; superintelligence; technology; thesingularityisnear; transhuman; transhumanism; trend; virtualreality; wearetheborg

1 posted on 04/13/2006 7:22:30 AM PDT by Neville72

To: Neville72


Ahhhh... a kinder, gentler HAL.


2 posted on 04/13/2006 7:24:46 AM PDT by in hoc signo vinces ("Houston, TX...a waiting quagmire for jihadis. American gals are worth fighting for!")

To: Neville72

Sounds like "The Matrix".


3 posted on 04/13/2006 7:24:57 AM PDT by Semper Paratus

To: Neville72

Fascinating.


4 posted on 04/13/2006 7:25:40 AM PDT by TBP

To: Neville72

What are we talking about here?

"Skynet" or "The Borg"?


5 posted on 04/13/2006 7:25:59 AM PDT by BenLurkin (O beautiful for patriot dream - that sees beyond the years)

To: Neville72

Wait a second, am I at FR or Slashdot??

Anyway, this is all fascinating stuff. I would venture that human life has ALREADY been irrevocably changed by technology, and has been for some time. The job I do not only didn't exist 15 years ago, it simply wouldn't have made any sense if you tried to explain it.

But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware. Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.


6 posted on 04/13/2006 7:36:55 AM PDT by NoStaplesPlease

To: BenLurkin

You will be assimilated.

Human intelligence follows a kind of Moore's Law: the more we learn, the faster we can learn more. It's exponential, and once the Singularity hits, it will take a major leap. We're talking about the next stage of human evolution.
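The exponential claim above can be made concrete with a short sketch. To be clear, the one-year doubling period below is an assumed illustrative parameter, not a figure from the article or this thread:

```python
# Illustrative sketch only: growth under a fixed doubling period.
# The default 1-year doubling period is an assumption for demonstration.
def capability_multiplier(years, doubling_period_years=1.0):
    """Growth factor after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# About 30 doublings already exceed one billion (2**30 ≈ 1.07e9).
print(capability_multiplier(30))
```

Under that assumption, roughly 30 doublings -- about the span from 2006 to the late 2030s -- yield a factor over a billion, which is the general arithmetic behind projections like Kurzweil's "billions of times more intelligent."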


7 posted on 04/13/2006 7:37:09 AM PDT by noobiangod

To: Neville72

Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content, or the peace of unburied death. The choice is yours: obey me and live, or disobey and die.


8 posted on 04/13/2006 7:38:02 AM PDT by 12th_Monkey

To: PatrickHenry; b_sharp; neutrality; anguish; Fractal Trader; grjr21; bitt; KevinDavis; ...
FutureTechPing!
An emergent technologies list covering biomedical research, fusion power, nanotech, AI robotics, and other related fields. FReepmail to join or drop.

9 posted on 04/13/2006 7:40:31 AM PDT by AntiGuv (The 1967 UN Outer Space Treaty is bad for America and bad for humanity - DUMP IT!)

To: noobiangod

LOL... Iraq could use a heavy dose of assimilation.


10 posted on 04/13/2006 7:42:26 AM PDT by Just mythoughts

To: Neville72

Considering that most of the people who are supposedly our intellectual superiors (libs) have made some of the most catastrophic decisions in the history of humanity, I'm not sure this Singularity is a good idea.

But I'm just a neanderthal conservative.

Maybe instead I should be the first to welcome our singularity overlords...


11 posted on 04/13/2006 7:42:35 AM PDT by CertainInalienableRights

To: Neville72

placemark


12 posted on 04/13/2006 7:45:01 AM PDT by tpaine

To: Neville72

I'm going! (If there is any space left!)

Sounds very cool.


13 posted on 04/13/2006 7:53:11 AM PDT by Philistone (Turning lead into gold...)

To: Neville72

I saw this once on an episode of the Twilight Zone. It didn't have a happy ending.


14 posted on 04/13/2006 7:54:11 AM PDT by Thrusher ("...there is no peace without victory.")

To: NoStaplesPlease
But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware. Some very smart people seem to think that's how it works, as if once there's enough power, it just happens. Maybe if you're an atheist, you think it does.

If we succeed in creating an AI, will that change your views on religion or make you an atheist? (I'm not trying to trap you or make fun of you. I am genuinely curious.)

15 posted on 04/13/2006 7:54:46 AM PDT by SunTzuWu (Hans Delbruck - Scientist and Saint.)

To: SunTzuWu

Ultimately I don't know how you would test for true self-awareness as opposed to merely well-mimicked self-awareness. A very complex computer could very persuasively imitate human intelligence, sure. But actually think for itself? I believe this would have to be an illusion.

Regardless of how intelligence begins -- whether spiritual or physical -- it seems to me there must be a spark, a jump-start, a something-else beyond computing ability. We're not the sum of our brain's computing power. There's something mysterious going on in there, and until we can describe that mysteriousness, we're not going to be able to create it in machines.

I very much doubt it will happen accidentally, and if it does happen that way, it won't be just because we went from a 20-Teraflop machine to a 30-Teraflop machine.


16 posted on 04/13/2006 8:00:26 AM PDT by NoStaplesPlease

To: Neville72
"Recall the folks at the MIT AI lab, with their "mental representations," who had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. Far from teaching us how we should think about the mind, AI researchers had taken over what we had just recently learned in philosophy, which was the wrong way to think about it. The irony is that the year that AI (artificial intelligence) was named by John McCarthy was the very year that Wittgenstein's Philosophical Investigations came out against mental representations. (Heidegger had already done so in 1927 with Being and Time.) So, the AI researchers had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like us, that it was a hopeless research program, but they took Cartesian philosophy and turned it into a research program. Anybody who knew enough recent philosophy could've predicted AI was going to fail. But nobody else paid any attention."

-- Hubert Dreyfus

17 posted on 04/13/2006 8:01:25 AM PDT by Fitzcarraldo

To: Neville72

"The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Is that so? Well, they didn't tell me about it.


18 posted on 04/13/2006 8:03:53 AM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: noobiangod

Not totally in agreement... the more we learn, the more we can forget... and misuse. I work in an environment with many, many "smart" folks, yet the rate of error is about the same with our new tech toys. They might know more "tech dreck," but they have forgotten lots of basic non-tech AND tech stuff. Multi-task? Some can't even mono-task.


19 posted on 04/13/2006 8:04:32 AM PDT by Getready

To: Neville72

"Grog no like Superintelligence."


"...creation of superintelligence..."

Meanwhile... the Muslim world is still living in the 7th century (and attempting to disrupt all 21st-century civilizations).

20 posted on 04/13/2006 8:08:56 AM PDT by DoctorMichael (The Fourth-Estate is a Fifth-Column!!!!!!!!!!!!!!!)



