Oh come on. He doesn’t advocate the destruction; he’s really just talking about the same general idea that all the big sci-fi authors grappled with: that creating intelligent machines stronger, and possibly smarter, than us could lead to our own extinction.
Isaac Asimov came up with his “Three Laws of Robotics” to deal with this kind of problem, postulating that intelligent machines would need to be hard-coded never to harm humans; otherwise, building them would be too risky.
I’ve read a good deal about the possibility of this type of “singularity,” and it is a bit unnerving. Who’s to say that Asimov’s three laws will always be incorporated into leading-edge robotics and computer technology? If the singularity is real, there is always the chance that the three laws will be ignored for the sake of getting to market first, and therein lies the danger. I doubt it will happen in my lifetime, but we are hurtling headlong in that direction.
Avoiding his own words and instead reading his mind to make it fit what others did?
And how’d that work out? The robots made themselves a Zeroth Law.