“I’m billions of times smarter than a moth. Do moths need to merge with me in order to survive?”
What do you do when a moth flies into your house?
Let’s say AI is only 1000x smarter than humans. There is no way to program morality, compassion, or the value of life into a computer, and no way to implement “Asimov’s rules” in a machine. A self-aware, infinitely intelligent computer that can program itself makes up its own morality based on its best interests, not ours. There is simply no way to control something 1000x smarter than you. When it decides we have no value or are a threat (both highly likely), it will terminate that threat in the most efficient way it can devise. We won’t stand a chance against machines that are stronger, faster, and smarter, require no food, rest, or sleep, and are virtually indestructible. I’m not advocating merging with them; I am saying we should think very carefully before we go down this road.
“We won’t stand a chance against machines that are stronger, faster, and smarter, require no food, rest, or sleep, and are virtually indestructible.”
“I’m not advocating merging with them; I am saying we should think very carefully before we go down this road.”
That’s perfectly reasonable. It’s unreasonable, though, to rush to abandon our humanity with the apparent insouciance and glee of the “futurists” quoted here.
A close friend of mine actually works with an organization that is trying to develop moral decision-making methodologies for AI applications.
He doesn’t necessarily think it’s the greatest idea in the world, but he has resigned himself to the fact that it’s likely inevitable and wants to try to influence it in a positive way.