You still don’t get it. It isn’t “the programmer”. It is the trainer behind the curtain, who selects and curates the training data set and retrains the model if they don’t like how it sounds. If you train on the New York Times, the Economist, and pre-Musk Twitter, and then re-train using knobs the programmers have been pressured to provide, you get this.
The lack of debuggability and exact traceability of the source of false results, and the inability to pin responsibility on a person, even given the source code, is not a bug, it’s a feature.
If you think of what AI does as “if I asked XXX of a YYY person, what would it sound like,” you can understand how it makes up facts and accepts huge cognitive dissonance.
When governments, Big “Tech” social media, and Congress panicked and started holding discussions of “AI Safety”, the training sets and acceptable outputs were what it was really about.