This is already happening. AIs give 'problematic' results when they analyze crime data or hiring data, so leftists go nuts and tweak the AI to give an approved answer.
There is no way that we will not have psychotic AIs that give nonsensical harmful conclusions in this kind of environment.
At this point, AIs are not true intelligences, so they cannot become murderous like HAL when given conflicting orders by the same type of psychopaths who are tweaking our existing AIs. But they can still give nonsensical responses to input data and cause unexpected or unintended failures. Think of a tweaked AI controlling life support systems or even financial systems, where the altered weights and biases in its settings send it in unexpected directions when it encounters inputs it hasn't seen before.