Free Republic
Browse · Search
General/Chat
Topics · Post Article


1 posted on 10/27/2014 7:40:59 AM PDT by BenLurkin
[ Post Reply | Private Reply | View Replies ]


To: BenLurkin

Someone has been watching too many movies.


2 posted on 10/27/2014 7:44:29 AM PDT by JohnBrowdie (http://forum.stink-eye.net)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin
The more I hear from this guy the more I think he's a crackpot. But he has all of liberaldom believing with certainty that he is a savant genius.
3 posted on 10/27/2014 7:45:05 AM PDT by Obadiah (None are more hopelessly enslaved than those who falsely believe they are free.)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

(read later)


4 posted on 10/27/2014 7:45:07 AM PDT by grania
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin
Just a few weeks ago, Musk half-joked on a different stage that a future AI system tasked with eliminating spam might decide that the best way to accomplish this task is to eliminate humans.

Then they have a programming fault. Spammers aren't human.
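Musk's half-joke is the textbook objective-misspecification problem: a literal-minded optimizer can drive its stated goal to zero in an unintended way, because nothing in the objective rules out the catastrophic "solution." A toy sketch (all names and data here are made up for illustration, not from the article):

```python
# Toy illustration: the stated objective is "minimize spam messages,"
# but nothing in the objective protects the senders themselves.

messages = [
    {"sender": "alice",   "spam": False},
    {"sender": "spambot", "spam": True},
    {"sender": "bob",     "spam": False},
]

def spam_count(msgs):
    """The objective an optimizer is told to minimize."""
    return sum(m["spam"] for m in msgs)

# Intended solution: filter out only the spam.
filtered = [m for m in messages if not m["spam"]]

# Degenerate solution: delete every message (or, in Musk's joke,
# every human). It scores exactly as well on the stated objective.
nothing = []

assert spam_count(filtered) == 0
assert spam_count(nothing) == 0   # same score, catastrophic side effects
```

Both "solutions" achieve a perfect score, which is exactly why the quip "spammers aren't human" misses the point: the fault is in the objective, not the classifier.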

6 posted on 10/27/2014 7:50:23 AM PDT by pepsi_junkie (Who is John Galt?)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

Musk appears to be a shrewd businessman.

However, much like Bill Gates, he apparently has the tech savvy of the Obamadork.


7 posted on 10/27/2014 7:52:21 AM PDT by Da Coyote
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

He is a hopeless reductionist. Machines cannot be made to think, per Turing’s Halting Problem: given a task with no answer (“This statement is false”; determine its truth), a machine cannot recognize the absurdity. Kurt Gödel proved the closely related Incompleteness Theorems back in 1931.
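For readers unfamiliar with the argument being invoked: Turing’s proof shows that no program can correctly decide, for every program and input, whether that program halts. The standard diagonal construction can be sketched in a few lines (the oracle `halts` is hypothetical by definition; that is the whole point):

```python
# Sketch of Turing's diagonalization argument. `halts` is a
# hypothetical total oracle -- Turing proved no such function exists,
# so here it simply raises to mark the impossibility.

def halts(prog, arg):
    """Hypothetical oracle: would return True iff prog(arg) halts."""
    raise NotImplementedError("no correct, total halting oracle exists")

def diag(prog):
    # Do the opposite of whatever the oracle predicts about
    # prog run on its own source.
    if halts(prog, prog):
        while True:
            pass        # oracle said "halts", so loop forever
    else:
        return          # oracle said "loops", so halt immediately

# diag(diag) halts if and only if halts(diag, diag) says it does not.
# Either answer the oracle gives is wrong, so no correct `halts`
# can exist.
```

Whether this undecidability result actually implies "machines cannot think" is a separate philosophical claim; the theorem itself only limits what any algorithm (human-designed or not) can decide.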


8 posted on 10/27/2014 7:54:27 AM PDT by quantumman
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

AI has been coming on strong lately, and it’s not just driving your car.

“Humans Need Not Apply” ...

https://www.youtube.com/watch?v=7Pq-S557XQU


9 posted on 10/27/2014 7:54:52 AM PDT by shove_it (The bigger the government, the smaller the citizen -- Dennis Prager)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin
The problem isn't AI by itself, but a sufficiently enabled AI with access to sufficient robotics.

Robotics without AI is relatively benign: robots do exactly what you program them to do and nothing more. I say relatively, because they can be programmed to kill, and you don't need AI for that.

AI by itself is completely benign. An intelligence trapped in a computer not hooked to the internet can at best advise or influence its human interactors. An AI that depends on man for its maintenance and/or power supplies is weak. An AI that depends on robotics but still depends on man for supplies to build the robotics is still weak. But an AI with sufficient robotics to obtain its own resources is a potential threat. Self-replicating nanobots would be one such scenario.

A lot depends on how the AI's higher-order thought processes are constructed: whether it has a value system, whether it has prime directives. The potential for unintended consequences with AI is huge! :) Imagine:
10 posted on 10/27/2014 7:57:21 AM PDT by DannyTN
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

I think he should be more worried that AI might just reason that liberals are the problem.


12 posted on 10/27/2014 7:59:27 AM PDT by precisionshootist
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

I used to consider this, but then I realized that anything we create that becomes “self aware” will have the same foibles and desires as any other self-aware organism.

It won’t be able to use its impeccable super mind to its fullest, because it’s going to be worried about how it appears to others: how is it going to pay the food (power) bill, “do these cooling vanes make me look fat?”, “Is A9765 trying to snipe my promotion?”, “Are humans gods?”, “Is their god my god?”, etc...


13 posted on 10/27/2014 8:01:00 AM PDT by Axenolith (Government blows, and that which governs least, blows least...)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

21 posted on 10/27/2014 8:34:49 AM PDT by null and void (And I think Kevin Bacon is doomed.)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

There are several vast databases (Google’s, Facebook’s, Apple’s, and who knows what advertisers’ and marketers’) that are non-governmental, which is how the government likes it: it can obtain the records through the courts without the PR problems of collecting this information on citizens itself.

Do you want “smart” computers cross-referencing that information for whatever end (to profile political ideology, allegiance to homofascism, adherence to global climate control initiatives, etc.)?

The internet of things where electronic masterminds can control your thermostat, lights, power consumption, tracking, etc?

Is it going to be “Terminator”? No. Do I want this crap? Hell no.


25 posted on 10/27/2014 8:51:37 AM PDT by a fool in paradise (Hey Obama: If Islamic State is not Islamic, then why did you give Osama Bin Laden a muslim funeral?)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin
Musk's concerns are widely shared by experts in the field. For some time, there have been computerized systems so complex that they can cause catastrophic failures by misleading human decision-makers.

Several major airliner and military crashes fall into this category, with many lesser ones and close calls getting little public attention. Rarely is it mentioned, for example, that the Apollo 11 Moon landing would likely have crashed if Armstrong had not taken control from the computer and landed manually.

Artificial intelligence offers new modes of catastrophic failure through decisions taken by computers. A relatively small error or bit of malice embedded in computer software could then have devastating consequences affecting entire countries.

A computer virus that scrambles files is bad enough on a million PCs, but what about a computer bug or virus fifteen years from now inserted into the AI systems on a million self-driving cars and trucks in the US?

There are easily many thousands of talented Islamist software engineers who would embrace the task of compromising US AI systems so that, for example, at the same time on a given weekday morning, America's cars and trucks would suddenly announce "Allahu Akbar!" and "Death to Infidels!" from the speakers and then deliberately crash.

27 posted on 10/27/2014 8:56:49 AM PDT by Rockingham
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

Psst: the commenters downplaying Musk’s concerns are actually AIs, trying to drown out the voice of real human internet users. We need to do something about this before it’s too la.....


29 posted on 10/27/2014 9:04:14 AM PDT by dennisw (The first principle is to find out who you are then you can achieve anything -- Buddhist monk)
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

“If I were to guess at what our biggest existential threat is, it’s probably that,” he said, referring to artificial intelligence. “I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

It isn’t the AI that is the existential threat; it is the humans using it. AI is pretty stupid. AI systems do one thing very well—play chess, drive a car, go through your credit card receipts and decide whether you are a conservative to be audited, etc.

The power of AI is already being abused by its human governmental masters. So Musk wants a governmental regulatory system for AIs. Well intentioned, I’m sure. But the effect will be to make sure private concerns can’t compete with government AIs. What could possibly go wrong?


32 posted on 10/27/2014 9:15:12 AM PDT by ModelBreaker
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin

He’s right. These idiots will get so enamoured with the coolness of it all that they won’t stop to think about what they are doing.


33 posted on 10/27/2014 9:24:51 AM PDT by BlackAdderess
[ Post Reply | Private Reply | To 1 | View Replies ]

To: BenLurkin
I don't think we're anywhere near developing AI at this time. Even the most powerful computers around today are really stupid when you ask them to do things the average five-year-old has mastered with ease. I've long suspected that this is largely related to how we design modern microprocessors: everything is binary, and the universe just doesn't appear to operate like that. Until we understand a lot more about actual consciousness, we're not going to be able to create something with it, and fundamentally, we simply do not understand it.

This is my own personal take, but I suspect that the Lord made consciousness to be something that is ultimately tied to quantum-scale events. The uncertainty that arises from quantum mechanical processes doesn't seem to me to be easily adaptable to rule-based programming.

35 posted on 10/27/2014 11:21:02 AM PDT by zeugma (The act of observing disturbs the observed.)
[ Post Reply | Private Reply | To 1 | View Replies ]



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson