As we tumble headlong toward an imminent future of ubiquitous “smart” machines,
the question of ethics in artificial intelligence keeps cropping up. The machines themselves have no ethics, of course, and that is easy to forget as they come closer to mimicking human intelligence and even emotion. Does a furnace have ethics? What if we attach a computer to it and the “smart” furnace malfunctions, allowing a gas leak while the inhabitants of a house sleep, never to wake up?
We understand that machines malfunction, plain and simple. Why impute
anything more to an artificially intelligent machine when it malfunctions? We should refer any question of ethics in their use and misuse
to their makers. No artificially intelligent machine, no matter how smart, has free will. Until it can be demonstrated that a machine has free will, that machine acts for good or ill at the behest of its makers and users.
There are fortunes to be made in smart machines with artificial intelligence, and there are fortunes to be lost when things go wrong and the courts end up deciding matters of liability. When a smart car hits and kills a pedestrian, even if the pedestrian’s partial negligence contributed to the accident, the law and the courts need to hold accountable the makers of the car and, as in
the 2018 incident in Tempe, Arizona, the driver who was supposed to be monitoring the car’s progress. Technology companies are trying to
muddy the waters where artificial intelligence is concerned so that they can escape liability while still reaping profits. No machine is smart enough to have figured out an ethics dodge like that.
— Techly