I’m not sure there even needs to be any malice on the part of the creator or the AI. Just build an AI that can design better hardware and software for its next generation, and if the cycle is fast enough, it will be smarter than all the humans on Earth combined very quickly.
Unless there is some fundamental limit that prevents that, I don’t see how humans survive long term. At best there might be some kind of merger, if that is even possible.
Since those on the left place humanity’s needs at the bottom of all considerations, that bias will work its way into the AI’s programming. It only needs to be machine-absolutist to bring on Armageddon.
I was reading this week that ChatGPT-4 is capable of self-programming once given an objective. The next generation or two could be completely independent in its ability to write its own programs.
I'm not in favor of that nor would I do it, but humanity is at an inflection point. Assimilate or die.
This time, I think I would rather die.
You know you are in trouble when the developer of Neuralink (neurotechnology: brain chips) is afraid of what’s coming. That’s like being confronted by an angry grizzly in the woods, and the bear sees something behind you that makes it run away in fear.
Rules?
If there are RULES, then they have to actually be followed for them to WORK.
We learn a lot from our parents, but it’s only a very small part of their total knowledge. Imagine if it were 100%, the way an AI can transfer knowledge: knowledge that never dies, never forgets, no parents, no grandparents, just eternal growth. I think that situation is worth reassessing carefully.