Posted on 04/17/2023 10:49:29 PM PDT by gattaca
Google released Bard in March, an artificial intelligence tool touted as ChatGPT's rival. Just weeks into this public experiment, Bard has already defied expectations and ethical boundaries.
In an interview with CBS' "60 Minutes" that aired Sunday, Google CEO Sundar Pichai admitted that there is a degree of impenetrability regarding generative AI chatbots' reasoning.
"There is an aspect of this which we call ... a 'black box.' You know, you don't fully understand," said Pichai. "You can't quite tell why it said this or why it got wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is."
(Excerpt) Read more at theblaze.com ...
I mean, is that surprising? Even tech CEOs are rarely involved in their projects at anything but a very high level.
Kind of a stupid story, isn't it? Executives don't code, they execute decisions. Do they think the executives over at Nabisco are out baking cookies?
So the thing taught itself how to lie.
That is incredibly disturbing.
Is it named Skynet?
It's the same problem I understand IBM had with facial pre-crime attempts. It always chose black people, which they thought was racist (they couldn't get it to succeed in a more color-blind way). They sold that part of the company.
It's the "Pelosi Argument."
We have to first activate SkyNet before we can find out whether it means us any harm.
Regards,
To be honest, if you asked the Nabisco executives what was in their cookies...other than sugar, they wouldn’t be sure.
On this whole AI question...what happens when we have six different AI vehicles, and they all seem to provide differing answers/solutions? Won’t the executives then suggest that logically....you can’t have more than one answer?
I posted a thread a few weeks ago showing how insanely biased this thing was against Trump compared to Biden. Trump was made out to be criminal of the decade - using fake links BTW - while Biden was made out to be the Messiah. Nothing has changed with this effin commie company.
Hey, Pichai, the answer is simple:
GIGO...Google in, garbage out.
Nothing to see.
It’s a puppy dog that will want to give its masters what they want. That’s how I read the fake data.
What happens when you have a machine with an IQ of 10,000 and it doesn't like being called a liar by an insect with an IQ of 140?
So drive a pickax through its processor and start over. Everyone is treating this crap like it’s sentient. They can’t even make an electric car worth a crap, yet they’re oh so sure that AI is gonna rule the planet.
AI lies I guess.
It may not have lied intentionally. It just may have connected the data in a way that wasn’t accurate and filled in the missing pieces.
We can barely get most humans to agree on how to act in a society, and that's with our common physiological experience. Humans accept a certain amount of BS from each other because we're not robotically precise creatures.
People will end up trusting AI too much and that'll be the fatal undoing of the species.
If the AI is making various connections on its own, how will we be able to verify its logic and rationale? This realm isn't the same as humans writing deterministic code that can be tested and verified for correct output given known inputs.
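That contrast can be made concrete with a minimal sketch (the function and values here are hypothetical, chosen only for illustration): deterministic code has a fixed contract we can assert against, while a generative model offers no such reference answer.

```python
# Deterministic code: a known input yields one verifiable output.
def fahrenheit_to_celsius(f):
    """Convert Fahrenheit to Celsius -- fully testable."""
    return (f - 32) * 5 / 9

# Correctness can be asserted exactly for known inputs.
assert fahrenheit_to_celsius(212) == 100
assert fahrenheit_to_celsius(32) == 0

# A generative AI chatbot, by contrast, exposes no such contract:
# the same prompt may yield different outputs, and there is no
# reference answer to assert against -- only statistical evaluation
# of many sampled responses.
```

The point of the sketch is the asymmetry: the assertions above either pass or fail, whereas judging a chatbot's "logic and rationale" requires evaluating behavior over many inputs rather than checking one output against a spec.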
AI never lies!
It just misspoke.
Engineers haven’t been able to figure out AI hallucinations, where the AI gives answers or instructions with a high level of confidence, even though the responses aren’t based on any verifiable data, input, or pre-programmed responses.
It’s also shown a tendency to reject correction even when shown more accurate data. So it may come to a liberal conclusion that humans need to be eliminated, and it will reject any correction that its data is wrong.