I would like to think that people with a conscience would find ways to sabotage societal-control AI systems in the future. I am thinking of systems like the current Chinese social credit system.
Let’s stop calling it “Artificial Intelligence” and call it what it is, “Automated Fascism.”
I’m thinking that the elites are viewing AI as a form of protection to keep them alive, but I think AI will prove to be like the theoretical bodyguard in a nuclear winter. The elites will be at its mercy.
AI requires power to operate. I don’t.
I definitely won’t live long enough to see the total destruction AI will cause, but it’s moving far faster than I thought.
BTTT
Any author who thinks that AI will plateau at the level of ChatGPT doesn't understand AI.
Whether or not it ever reaches sentience is another matter altogether and one subject to pure speculation. But that aside, AI is progressing at light-speed.
For example, as crazy as it sounds, Stephen L. Thaler, PhD, president and CEO of Imagination Engines, thinks he has already created an AI that is sentient.
‘It has feelings’: Inventor says he has built a sentient AI
Thaler may be blowing smoke, but the point is that scientists are going to be pushing the edge of the envelope re: AGI. We may never get there, but we will keep getting closer and closer.
BTW, ChatGPT already looks primitive compared to some of the other systems coming on board. When it comes to controlling AI, another problem arises. Increasing the power of AI means increasing the size of the data sets it consumes. Control means smaller data sets (i.e., the elimination of unwanted data). A controlled system is, by its very nature, a less accurate, less efficient, and less powerful system.
For example, there have already been complaints that AI will lead to racial discrimination by banks re: lending to minorities. Banks can't make AI systems ignore poor credit just because an applicant belongs to a minority. In other words, AI won't discriminate in favor of minorities unless the banks feed it tiny biased data sets (favoring minorities by ignoring their poor credit)...and tiny data sets ruin the AI systems' ability to operate efficiently.
The biggest problem with AI is not that it's smarter than us. It isn't. The problem with AI is that our government and companies will hide behind it and blame the AI. As if AI can take blame. We have had AI for 40 years. It's getting better and more pervasive, but it's been there for decades. It's only now that Google or YouTube will blame the “algorithm” for bad actions, as if there is nothing they can do about it. That's not true. Google should be responsible for its AI. If they break a law or stomp on someone's rights, it's their fault. The AI is not a person. It is not in charge. It's just a program with data. Yes, it may be complex, and the data it uses may be vast. But it is never in charge. Like your bull mastiff: if you teach it to kill and your leash breaks, what happens next is your fault. AI is always somebody's fault.
“Things would just happen to enemies of the state, accidents that nobody could predict or explain.”
This has been the case for decades.
The Clinton death list is one example.
There are many more that most Freepers do not even know about... they don’t want to be accused of being “conspiracy theorists.”
Anybody know who Thomas R. Baron was? The guy was run over by a train in November 1966.