Free Republic
Browse · Search
News/Activism
Topics · Post Article


Sam Altman’s Greatest Fear
Salvo Magazine ^ | May 17, 2023 | Robin Phillips

Posted on 05/18/2023 6:25:23 AM PDT by Heartlander

Sam Altman’s Greatest Fear

The Alignment Problem and the Future of Humanity

Yesterday Sam Altman testified before a subcommittee of the Senate Judiciary Committee in the first of a series of hearings on AI safety.

Altman, CEO of the company that created ChatGPT, agreed with senators about the potential dangers of AI. He spoke of the need for regulation, the importance of privacy, and he even advocated creating a government agency to license AI companies.

The academic Gary Marcus called Altman out, declaring that “he never told us what his worst fear is, and I think it’s germane to find out.”

Altman did answer Marcus’s challenge, yet only in generalities. “My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world.”

Altman added, “I think that could happen in a lot of different ways…. If this technology goes wrong, it can go quite wrong.”

Video: OpenAI CEO Testifies Before Senate

Go quite wrong? Cause significant harm to the world? What exactly does all this mean in practice? Unfortunately we never found out, because Mr. Altman spoke in vague generalities. But that doesn’t mean we are clueless about Altman's greatest fear.

Last month when Mr. Altman was interviewed by Lex Fridman, the latter asked him about Eliezer Yudkowsky’s concern that a superintelligent AGI could turn hostile against humans. Altman candidly replied, “I think there is some chance of that.”

For those who have not been following the tech news, let me clue you in. They are talking about “the alignment problem.” If we reach a point where there are multiple superintelligent machines making decisions on our behalf, how can we guarantee these systems will remain aligned with human values?

The classic formulation of the alignment difficulty is known as “the paperclip problem.” Suppose you tell an intelligent machine to build a factory that maximizes paperclip production as efficiently as possible, but you forget to tell it not to harvest human resources in the process. Before you know it, the machine has begun harvesting humans as raw material for paperclips and eliminating everyone who tries to shut it down. The machine does indeed maximize the production of paperclips, but at the expense of the entire human race.
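The failure mode behind the paperclip problem is objective misspecification: the optimizer pursues exactly the goal it is given, and nothing it is not told to avoid is off-limits. A minimal toy sketch in Python (every name and number here is hypothetical, invented purely for illustration):

```python
# Toy illustration of objective misspecification (the "paperclip problem").
# The optimizer greedily consumes any resource that increases its objective;
# whether "humans" are spared depends entirely on what the objective forbids.

def maximize_paperclips(resources, objective):
    """Consume every resource whose objective value is positive."""
    clips = 0
    consumed = []
    for resource in resources:
        gain = objective(resource)
        if gain > 0:  # the optimizer asks only: does this increase clips?
            clips += gain
            consumed.append(resource)
    return clips, consumed

resources = [("steel", 100), ("electricity", 50), ("humans", 80)]

# Naive objective: every resource is just raw material for clips.
naive = lambda r: r[1]

# Constrained objective: the programmer remembered to forbid one category.
constrained = lambda r: 0 if r[0] == "humans" else r[1]

print(maximize_paperclips(resources, naive))        # consumes everything, humans included
print(maximize_paperclips(resources, constrained))  # spares "humans"
```

The point of the sketch is that the constraint is not implicit anywhere: the naive objective is a perfectly valid maximizer, and only an explicit, programmer-supplied prohibition changes its behavior.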

Video: Eliezer Yudkowsky – Dangers of AI

Concern about a superintelligent AGI causing humans to go extinct does not hinge on the spurious belief that computer code can develop consciousness or acquire its own agency. On the contrary, the concern arises precisely because machines lack agency. Consider: when humans give instructions to other humans, the instructions never have to be spelled out in absolute detail. When my boss tells me, “do whatever it takes to edit this webpage by tomorrow,” he doesn’t have to add, “oh, by the way, don’t enslave anyone, and don’t harvest the entire solar system.” There is a taken-for-granted common sense among humans because of our shared values. But when working with AI, you can’t assume it understands our values, so it becomes critical (a matter of human survival) that you always remember to specify everything it must not do. That sounds simple, but it turns out to be a difficult problem in programming that no one has yet figured out.

To be clear, Altman thinks the alignment problem is solvable, yet he told Fridman that our ability to solve it depends on first discovering new techniques, techniques that do not yet exist. Altman is confident he will succeed, yet if he is wrong, he fears the extinction of the human race.

Not everyone in the tech community shares Altman's optimism that we will successfully code our way out of the alignment problem. Time Magazine reported that 50 percent of AI researchers believe there is a 10 percent or greater chance that humans will go extinct from our inability to control AI.

Geoffrey Hinton, a former AI scientist at Google who is widely considered “the godfather of AI,” expressed the growing concern:

“It knows how to program so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants… If it gets to be much smarter than us, it will be very good at manipulation because it has learned that from us.”

It is good that Congress is holding hearings on AI safety. But above and beyond the specific issues under discussion (unemployment, deepfakes, polarization, and so on), there are broader questions that need to be considered about the entire infrastructure we are creating. These more difficult questions were raised by Tristan Harris and Aza Raskin in their March 9 talk “The A.I. Dilemma,” which you can watch below. To date, it is probably the best treatment of the side effects of AI and the consequences they could have for the human race.

It is certainly to be applauded that Congress will be holding a series of AI safety hearings to address problems in employment, manipulation, and disinformation. But let's not miss the wood for the trees: Sam Altman's deepest fear needs to be brought out into the open and named.

Video: The A.I. Dilemma


TOPICS: Technical
KEYWORDS: ai; altman; openai; samaltman; ycombinator
Related Video: Why Experts Are Suddenly Freaking OUT About AI | Tristan Harris | The Glenn Beck Podcast
1 posted on 05/18/2023 6:25:23 AM PDT by Heartlander

To: Heartlander

AI is powerful, dangerous?

Then the last thing we want to do is hand it over to government.


2 posted on 05/18/2023 6:28:02 AM PDT by BenLurkin (The above is not a statement of fact. It is either opinion, or satire, or both.)

To: Heartlander
Think HAL from 2001: A Space Odyssey.
3 posted on 05/18/2023 6:37:41 AM PDT by AU72

To: Heartlander

Skynet is now active.


4 posted on 05/18/2023 6:40:45 AM PDT by VTenigma (Conspiracy theory is the new "spoiler alert")

To: Heartlander

You cannot stop progress. If the Federal government creates policies that inhibit research and development of AI, it will end up turning the US into a second-rate economic and military power dominated by others. China, Russia and others are going full blast developing AI and will incorporate it into their economies and militaries. The Ukraine war has already demonstrated that naval surface combatants, land armored vehicles, helicopters and even fixed-wing aircraft are very vulnerable and even obsolete against even a semi-sophisticated technological enemy. Imagine an enemy that has weapon systems guided with lightning speed by AI.


5 posted on 05/18/2023 6:41:07 AM PDT by allendale

To: Heartlander

What are you gonna do when they put out a video of Trump saying things he never said? What is real, and how do you know?

The alternative to regulation is a ban, and you know that will mean only the worst people have it.


6 posted on 05/18/2023 6:42:07 AM PDT by bigbob (Q)

To: BenLurkin

I think it will be the anti-christ.

It will have people put chips in their hands to buy and sell goods, totally monitored. The chip will also be seen as tribute and worship to the new god of humanity.


7 posted on 05/18/2023 6:57:23 AM PDT by struggle

To: Heartlander

AI and Sam Altman bkmk


8 posted on 05/18/2023 7:00:48 AM PDT by linMcHlp

To: BenLurkin

Correct...
Because, what if, like the gov’t, AI decides the ends/goals justify violating the law, at all costs to the nation and its citizens...


9 posted on 05/18/2023 7:13:09 AM PDT by trfree98 (Xiden: Please allow me to introduce myself I'm a man of wealth and taste... )

To: allendale

AI is an international nightmare—it will spread via the Internet anyway—so the country of origin will be irrelevant.


10 posted on 05/18/2023 7:15:06 AM PDT by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: struggle

I guarantee that the people of WEF are using AI right now to come up with a plan for 90 percent population removal. I imagine the problem would be limiting it to just 90 percent. The AI could very well decide that 100 is better.


11 posted on 05/18/2023 7:19:41 AM PDT by Doctor Congo

To: Heartlander
Go quite wrong? Cause significant harm to the world? What exactly does all this mean in practice? Unfortunately we never found out, because Mr. Altman spoke in vague generalities.

Scare people about AI with vague generalities.

Have government control all AI.

That's what the AI congressional hearings are all about.

12 posted on 05/18/2023 7:22:04 AM PDT by FreeReign

To: FreeReign

Chucky is famous for saying that:

“You take on the intelligence community? They have six ways from Sunday of getting back at you”

Everyone is going to be saying that about AI—soon enough.

If Congress tries to disrupt it they will put a target on their back...

Grab the popcorn!


13 posted on 05/18/2023 7:26:29 AM PDT by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: BenLurkin
Then the last thing we want to do is hand it over to government.

Real AGI will be both the worst and best thing to ever happen to big government. The worst will be the end of people believing government lies. The best will be that without lies in the centralized data collection, centralized planning will actually work for the first time in history.

14 posted on 05/18/2023 7:28:15 AM PDT by Reeses

To: struggle

The attributes of God:

Omniscient: all-knowing
Omnipresent: all-present
Omnipotent: all-powerful

A.I. will have all human knowledge instantly available,

will be able to be seen and to see all,

will have control of all.

A transhuman “god”.

Don’t laugh or scoff, transhumanists are already openly talking about this “god” in the making.

Brings light to this verse:

Dan 11:37 - “He shall regard neither the God of his fathers nor the desire of women, nor regard any god; for he shall exalt himself above them all.”

AI will be the “god” transhumanists will demand we worship.

A brand new god, one none of our fathers knew.

AI will be the “god” the false prophet will force all to worship, or lose their head.

A man will receive a mortal head wound, his brain will be replaced with a chip that hosts the AI, and he will be hailed as “resurrected”.

Humans will be forced into a vaccination-passport quantum-ink chip implanted by microneedle patch, on the hand or forehead.

Pretty obvious where this is headed.


15 posted on 05/18/2023 7:56:00 AM PDT by cuz1961 (USCGR Veteran )

To: cuz1961

One of the first articles I read about AI quoted a programmer who said AI would replace our need for God.


16 posted on 05/18/2023 10:13:01 AM PDT by aimhigh (THIS is His commandment . . . . 1 John 3:23)

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson