A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout.
The big idea here is to stop mass shootings and other unethical uses of firearms through the development of an AI that can recognize intent, judge whether a use is ethical, and ultimately render a firearm inert if a user tries to ready it for improper fire.
That sounds like a lofty goal; in fact, the researchers themselves refer to it as a "blue sky" idea. But the technology to make it possible is already here.
According to the team's research:
Predictably, some will object as follows: "The concept you introduce is attractive. But unfortunately it's nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?" We answer in the affirmative, confidently.
The research goes on to explain how recent breakthroughs involving long-term studies have led to the development of various AI-powered reasoning systems that could make it trivial to implement a fairly simple ethical judgment system for firearms.
It's difficult to take this seriously, but even setting skepticism aside, the proposal lies beyond the realm of practicality and carries the danger of becoming a 'one-size-fits-all' system. One has to wonder how any artificial 'intelligence' could ever reliably differentiate between a defensive use of a firearm and a criminal one. In extremis, you might as well forget about relying on your weapon to save your life.