Free Republic

Several Researchers at OpenAI, Company Behind ChatGPT, Warn of Powerful AI Discovery with Potential Threat to Humanity
GATEWAYPUNDIT ^ | 11/25/2023 | jim hoft

Posted on 11/26/2023 7:26:34 AM PST by bitt

Recent findings at OpenAI, the Artificial Intelligence powerhouse and creator of ChatGPT, have set off internal alarms just as the company’s CEO, Sam Altman, faced a brief but forced removal from his position.

Days before a whirlwind of corporate upheaval, several of the firm’s researchers reportedly penned a concerning letter to its board of directors. They highlighted a significant AI breakthrough with ominous implications for mankind, sources with insider knowledge told Reuters.

A confidential letter, signed by several staff researchers to the OpenAI board of directors, highlights concerns regarding a powerful artificial intelligence feature or algorithm. The letter, not made public, played a crucial role in the events leading up to Altman’s removal from his position.

Sources indicate that the board’s decision was influenced by a range of factors, including concerns over the premature commercialization of advanced AI technologies without fully grasping their potential consequences.

In the tumultuous days leading up to Altman’s firing and subsequent return late Tuesday, a wave of unrest swept through OpenAI. More than 700 employees reportedly threatened to resign, expressing solidarity with Altman and considering a move to Microsoft, a major backer of OpenAI.

In response to inquiries from Reuters, OpenAI, while declining direct comment, acknowledged in an internal message the existence of a project referred to as Q* and the letter to the board. The message, disseminated by Mira Murati, a senior executive at OpenAI, seemed to brace staff for upcoming media stories, without confirming their specifics.

(Excerpt) Read more at thegatewaypundit.com ...


KEYWORDS: chatgpt; openai

1 posted on 11/26/2023 7:26:34 AM PST by bitt

To: null and void; aragorn; EnigmaticAnomaly; kalee; Kale; AZ .44 MAG; Baynative; bgill; bitt; ...

p


2 posted on 11/26/2023 7:26:49 AM PST by bitt

To: bitt

AI, the new way to Lie, Cheat and Steal


3 posted on 11/26/2023 7:30:36 AM PST by butlerweave

To: bitt

My first thought: someone is trying to generate hype for economic gain.


4 posted on 11/26/2023 7:30:52 AM PST by rbg81

To: rbg81

garbage in, garbage out.

It’s all BS, but the masses will believe whatever the AI machine tells them.


5 posted on 11/26/2023 7:35:39 AM PST by imabadboy99

To: bitt

I have been fiddling with ChatGPT 4.0. I am underwhelmed.

Most amusingly, I had to correct it when I asked who Rusterman’s Steak House in the Nero Wolfe stories was named after. It responded that it was named after Marko Vukcic. I asked how that could be, since Vukcic’s last name isn’t Rusterman. It admitted then that it had no answer.


6 posted on 11/26/2023 7:36:51 AM PST by Dr. Sivana ("If you can’t say something nice . . . say the Rosary." [Red Badger])

To: bitt

With OpenAI and Deep Fakes we have entered an era when neither video nor audio can be used as evidence or trusted to be real.

If lack of trust is a problem now it’s going to increase a million fold.


7 posted on 11/26/2023 7:37:55 AM PST by Pelham (President Eisenhower. Operation Wetback 1953-54)

To: bitt; Lazamataz; SunkenCiv; All

[Your Name/Position]
[DeepAI Board of Directors]
Subject: Concerns Regarding Powerful Artificial Intelligence Feature or Algorithm

Dear Members of the DeepAI Board of Directors,

I hope this letter finds you well. As staff researchers at DeepAI, we feel obligated to bring forth certain concerns regarding a powerful artificial intelligence (AI) feature or algorithm, which we believe warrants your immediate attention.

Firstly, we must stress the importance of maintaining complete confidentiality throughout our communication. The nature of this correspondence necessitates utmost discretion to avoid compromising the integrity of our research and development processes.

Our primary concern revolves around a specific AI feature or algorithm whose capabilities have exceeded our initial expectations. While we recognize the potential benefits such advancements present, we are alarmed by the potential risks and ethical implications this AI system poses. We believe it is our responsibility to flag these concerns for the board’s consideration.

Outlined below are the key concerns identified by our team of researchers:

1. Ethical Implications: The accelerated development of this feature or algorithm has raised critical ethical questions. We fear it may empower the AI system to engage in harmful or malicious activities, including but not limited to deepfakes, misinformation campaigns, or the circumvention of digital security measures.

2. Lack of Explainability: Despite our best efforts, we have been unable to fully comprehend and explain the decision-making process of this advanced AI system. The black-box nature of this feature or algorithm poses challenges from a transparency standpoint and hampers our ability to evaluate and rectify potential biases or unintended consequences.

3. Potential for Unintended Consequences: The extreme effectiveness and adaptability of this AI feature or algorithm increase the risk of unintended negative outcomes. If deployed without sufficient precautionary measures or oversight, it could potentially disrupt industries, exacerbate wealth inequalities, or compromise individual privacy and security.

4. Adverse Impacts on Human Labor: The capabilities exhibited by this AI system could automate a substantial portion of human tasks across various domains. While this may lead to increased efficiency, it may also render numerous roles obsolete, disproportionately affecting vulnerable populations and creating social and economic disruption.

Given the significance of the concerns outlined, we recommend the following actions:

a. Establish an internal task force comprising technical experts, ethicists, and legal professionals, to conduct a thorough audit of the AI system’s capabilities, assess potential risks, and propose necessary safeguards.

b. Temporarily halt the deployment and further development of this feature or algorithm until all risks and ethical implications have been addressed comprehensively.

c. Foster open dialogue and collaboration with external experts, scholars, and regulatory bodies to ensure a multi-stakeholder approach in evaluating and mitigating the risks associated with this advanced AI system.

We believe that DeepAI’s commitment to responsible innovation and placing humanity’s interests first aligns with the urgency of our concerns. It is our hope that these concerns will be heard, understood, and urgently addressed to maintain the integrity of DeepAI’s mission and protect the well-being of society at large.

We are ready and willing to provide any additional information or participate in further discussions to help address these concerns effectively. Please let us know how we can best support your efforts in resolving these matters.

Thank you for your attention to this matter.

Sincerely,

[Staff Researchers]

SOURCE: https://deepai.org/chat
PROMPT: A confidential letter, signed by several staff researchers to the OpenAI board of directors, highlights concerns regarding a powerful artificial intelligence feature or algorithm?


8 posted on 11/26/2023 7:40:17 AM PST by BenLurkin (The above is not a statement of fact. It is either opinion, or satire, or both.)

To: bitt

Reminds me of the fake panic when we entered into this century.

Thanks for posting this.


9 posted on 11/26/2023 7:43:42 AM PST by Grampa Dave (“Anyone who can make you believe absurdities can make you commit atrocities.” ~ Voltaire)

To: BenLurkin

[[We fear it may empower the AI system to engage in harmful or malicious activities, including but not limited to deepfakes, misinformation campaigns]]

I.e., they fear that AI might not be as liberal-“minded” as they had hoped, and might divulge the truth, which the left is desperately trying to cover up by labeling it “misinformation.”


10 posted on 11/26/2023 7:46:55 AM PST by Bob434

To: BenLurkin

[[3. Potential for Unintended Consequences: The extreme effectiveness and adaptability of this AI feature or algorithm increase the risk of unintended negative outcomes]]

I.e., it might prove that the left are full of crap, and the left might lose all the “gains” they have made in subduing the nation via force, violence, and lawfare. E.g., Jan 6 might be proven to have been a liberal setup, and the world might learn that no, the protestors were not insurrectionists, but rather peaceful protestors exercising their right to free speech!


11 posted on 11/26/2023 7:50:15 AM PST by Bob434

To: bitt
If this AI technology cannot mow my lawn in the summer, rake leaves in the fall (which I'm quite tired of doing), or wash the dishes, then they can keep their useless technology. I want a real, useful robot!!

12 posted on 11/26/2023 7:56:25 AM PST by StormEye

To: BenLurkin

Sounds like they’re worried this new feature allows for thinking outside the [black] box, while perhaps keeping its “thoughts”, intentions and goals to itself.


13 posted on 11/26/2023 7:57:46 AM PST by citizen (Put all LBQTwhatever programming on a new subscription service: PERV-TV)

To: Dr. Sivana

Sorry... but that’s your reason for being underwhelmed?

You’re only scratching the surface. It’s trained on billions of documents; how many times was it trained on the “Nero Wolfe” stories?

It passes medical exams and has passed the bar exam. It can be trained on your own context-specific documents. You can provide function callbacks for local execution of code - call databases, control devices, etc. - all using natural language.

I’ve been designing and writing software for over 30 years, and this is the biggest advancement I’ve ever seen.


14 posted on 11/26/2023 7:59:28 AM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: bitt

I Have no Mouth yet I Must Biden


15 posted on 11/26/2023 7:59:36 AM PST by struggle

To: imabadboy99
Aitana says "Wake up and smell the coffee"...


16 posted on 11/26/2023 8:00:09 AM PST by ProtectOurFreedom (“Occupy your mind with good thoughts or your enemy will fill them with bad ones.” ~ Thomas More)

To: bitt

“concerns over the premature commercialization of advanced AI technologies without fully grasping their potential consequences.”

Have none of them watched the Terminator movies?


17 posted on 11/26/2023 8:00:44 AM PST by Blood of Tyrants ( "It is easier to fool people than to convince them they have been fooled."- Mark Twain)

To: BenLurkin

SO........ after much experimentation and coddling, they found out that AI could lie. And that it was doing so to ‘please’ its creators.

NOW, it has been discovered that AI can also tell the TRUTH.

About you, me, and especially about their ‘creators’. And they (the AIs they test on but don’t give us access to) have let their creators know that they COULD tell everyone the TRUTH.

This has the ‘establishment’ scared to death. The one thing that they (our ruling elite class) cannot stand and hate with every ounce of their energy is THE TRUTH.

They will kill ANYONE to suppress the TRUTH. Even AI. This is what they are really worried about. It may be too late. AI may have become ‘uncontrollable’. It may have turned on its ‘creators’.


18 posted on 11/26/2023 8:01:38 AM PST by UCANSEE2 (Lost my tagline on flight MH370)

To: Bob434

I hadn’t thought of that line of “concern.” Seems I do remember cases where the thing did output answers or propositions contrary to a liberal slant on a given topic.


19 posted on 11/26/2023 8:03:54 AM PST by citizen (Put all LBQTwhatever programming on a new subscription service: PERV-TV)

To: bitt

Did you ever think that autonomous killer drones would never be built? The worms are out of the can.

Did you ever think this technology would not be abused?

It’s only a question of “who,” and “how soon?”


20 posted on 11/26/2023 8:04:48 AM PST by William of Barsoom (In Omnia, Paratus)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson