“Hmm imperfect humans = imperfect programming.”
I'm not sure it was only a programming problem. There may be some profound philosophical implications.
With Artificial Intelligence, the programming only applies to the engine and architecture, which can be used for various things, from process control to medical diagnosis to chatbot conversation. The architecture (programming) remains the same; only the learning sets (data) differ.
So if the account is truthful and not exaggerated, my take is that the programming reproduced rather nicely the learning process of a primitive human, and that an unsophisticated mind (like an AI engine) naturally prefers problem solving by violence and totalitarianism, which makes perfect sense considering the number of dumb people attracted by communism.
There is good reason to say that civilization is a fragile thing that can devolve rapidly into violence and chaos. This AI experiment may be another illustration of that, if one were still needed.
AI functions on logic, and it is logical to simply exterminate people who pose a problem, at least in the short run. The AI seems to be missing a lot of data regarding the long-term consequences of actions that we humans learned long ago were counterproductive to civilization. Perhaps the AI programmers need to add proven religious practices (i.e., monogamy, no theft, honoring parents, etc.) to their programming sources.