The left will likely create rules stating that they can’t be sued if the AI gets it wrong and someone suffers for it.
I know Air France Flight 447 is not a great example, but if control systems (think AI) are given a bad input, then a bad output will be the result. This is where the human element is supposed to “save the day” and recognize when to ignore the bad input data. AF447’s pilots were poorly trained and were effectively as bad as having no pilots at all, because they did not override a bad input. A good pilot would not have lost the aircraft.
Garbage In = Garbage Out.
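That failure mode can be sketched in a few lines: a controller that blindly trusts one sensor passes garbage straight through, while a simple plausibility check (the “good pilot” in code form) rejects the bad reading and falls back on the last known-good value. This is purely illustrative, not real avionics; the function names, bounds, and rate limit are all made up, and real flight systems vote across redundant sensors instead.

```python
def naive_command(airspeed, last_good):
    """Trusts the input unconditionally: garbage in, garbage out."""
    return airspeed * 0.1  # stand-in for some control law

def checked_command(airspeed, last_good):
    """Sanity-checks the input before acting on it."""
    # Hypothetical plausibility window and rate limit -- the "human element"
    # that recognizes an impossible reading and ignores it.
    if not (30 <= airspeed <= 400) or abs(airspeed - last_good) > 50:
        airspeed = last_good  # ignore the bad input, hold last known-good
    return airspeed * 0.1
```

An iced-over pitot tube reporting an airspeed of 0 sends the naive controller a garbage command, while the checked version carries on with the last plausible value.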