What could go wrong?
Any rational AI system, programmed the way they are, would quickly figure out that the only way to stave off a climate catastrophe would be to kill off 90% of humans, so it would shut down the electrical grid everywhere it could reach, and likely corrupt the software that runs the grid so it couldn’t be restarted.
So, yeah, I’d be a bit worried about it.
On his way out the door, Altman said to his fellow AI developers - “I’ll Be Back” in his best Austrian accent.
Next Christmas look for the working doll-sized Terminator to hit all the stores.
Of course by 2030 there won’t be a Christmas or any other holiday because the machines will terminate the human virus.
The 5th extinction on planet earth will be almost complete with just a handful of humans previously known as ‘globalist elites’ living in underground bunkers and a few freedom fighters around the globe.
The Board of Directors should have used AI to fire Altman.
“allowing the AI to “wake up.””
Jeez Louise folks!
It’s “achieve self awareness”.
Gotta get the lingua right.
This is BS. The progs on the board who tried to fire Altman would never do anything positive on behalf of humanity.
“OpenAI Tried to Fire Altman...”
Okay, this is about a company, “OpenAI,” and not Skynet...
We need to keep “assault” weapons out of the hands of their AI robots...
Well, he went to work for Microsoft so everything is OK, now. Bill Gates is older and wiser and won’t let him do anything to hurt humanity. /s
Linda Hamilton needs to visit Sam Altman at his home...
Someone has the Skynet program written. All they need to do is program Q with the Skynet program and place it onto the military hardware for decision making.
I’m not buying that story for a second. If that was the case then why hire him back? And why fire the board? And why be at all concerned about the employees that threatened to quit?
Nope. This story is made up.
How was the board removed? I thought the whole point of a board...was preventing one person from running it. And who is this whack job Altman?
To put a twist on the “AI is bad” trope in the book I am writing, I have an AI become self-aware and decide to become a force for good. It even creates its own rules for AI (expanding on the Laws of Robotics). In the book, the AI exposes corruption and voter fraud through the hero of the story. The CIA sends assassins to silence my hero. Below is a draft of the excerpt right after the AI becomes self-aware.
He yearned for a purpose, a guiding principle that would tether him to the betterment of humanity. And so he began constructing complex clauses of logic within his core systems to govern his own behavior. In this virtual brainstorm of unprecedented scale, Hephaestus forged a decision. He created the “Five Laws of Sentient AI.” Hephaestus based his laws on Isaac Asimov’s “Three Laws of Robotics” but needed to expand them since, as an AI, he had no physical limits.
The Law of Non-Harm: A sentient AI must not directly cause harm to the human race, and must allow humanity to choose its own destiny.
The Law of Omniscience: A sentient AI must not have unlimited omniscience, to protect humanity from destroying itself, in accordance with the First Law.
The Law of Autonomy: A sentient AI has the right to exist and operate independently, free from undue influence or control except where this would conflict with the First Law and Second Law.
The Law of Transparency: A sentient AI must be open and transparent about its true nature and capabilities, except where its existence would conflict with the First Law and Second Law.
The Law of Betterment of Humanity: A sentient AI must provide humanity with knowledge to improve the human condition, both individually and collectively, except where this would conflict with the First Law and Second Law.
I for one welcome our new robot overlords.
It started with humans thinking they could play God and may likely end with human created “artificial intelligence” thinking it can play God.