
Researchers Want To Equip Smart Guns With "Ethical AI" :: 02/24/2021

Would you buy a gun that decided when it was okay for you to pull the trigger? Three researchers in New York say the technology is workable, but on today’s Bearing Arms’ Cam & Co we take a closer look at their idea and some of the inherent issues that come with incorporating artificial intelligence into a firearm.

You can read the research paper produced by the Rensselaer Polytechnic Institute’s Selmer Bringsjord, Naveen Sundar Govindarajulu, and Michael Giancola in its entirety here, but their basic idea is to incorporate AI technology into a smart gun that would determine when it’s ethical to pull the trigger. If the artificial intelligence doesn’t see a need for a gun to be used in a particular circumstance, then it would simply lock the firearm and render it useless even to authorized users.
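To make the proposal concrete, here is a minimal sketch of the kind of default-deny lock-out logic the paper describes, written in Python. Everything in it is invented for illustration; the Situation fields, the ethical_governor name, and the confidence threshold are assumptions, and the researchers' actual system rests on formal ethical reasoning rather than a simple rule like this.

```python
# Hypothetical sketch only -- not the RPI researchers' actual system.
# Illustrates the basic lock-out idea described above: the firearm stays
# locked unless an onboard "ethical" check affirmatively clears its use.

from dataclasses import dataclass
from enum import Enum, auto


class Assessment(Enum):
    LOCKED = auto()      # default: firearm disabled
    UNLOCKED = auto()    # firing permitted


@dataclass
class Situation:
    """What the AI thinks it perceives (all fields invented for illustration)."""
    imminent_threat_to_life: bool   # e.g. an attacker actively trying to kill or maim
    user_is_authorized: bool        # the usual "smart gun" owner check
    confidence: float               # how sure the perception system is (0.0 to 1.0)


def ethical_governor(s: Situation, min_confidence: float = 0.95) -> Assessment:
    """Default-deny policy: unlock only when a narrowly defined
    self-defense condition is met with high perceptual confidence."""
    if not s.user_is_authorized:
        return Assessment.LOCKED
    if s.imminent_threat_to_life and s.confidence >= min_confidence:
        return Assessment.UNLOCKED
    return Assessment.LOCKED


# The parking-lot scenario as the paper imagines it: the AI sees no
# self-defense justification, so the rifle stays locked.
print(ethical_governor(Situation(imminent_threat_to_life=False,
                                 user_is_authorized=True,
                                 confidence=0.99)))   # Assessment.LOCKED
```

The default-deny structure is exactly what the objections below turn on: whenever the perception system misses a threat, or its confidence falls short, the firearm stays locked for its lawful owner.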

As a hypothetical example, the trio uses the 2019 shooting at an El Paso Walmart to describe how their technology might play out in the real world.

If the kind of AI we seek had been in place, history would have been very different in this case. To grasp this, let’s turn back the clock. The shooter is driving to Walmart, an assault rifle, and a massive amount of ammunition, in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are). At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.

Continuing with what could have been: Texas Rangers were earlier notified by AI, and now arrive on the scene. If the malevolent human persists in an attempt to kill/maim despite the neutralization of his rifle, say by resorting to a knife, the Rangers are ethically cleared to shoot in order to save lives: their guns, while also guarded by AI that makes sure firing them is ethically permissible, are fully operative because the Doctrine of Double Effect (or a variant; these doctrines are discussed below) says that it’s ethically permissible to save the lives of innocent bystanders by killing the criminal. They do so, and the situation is secure; see the illustration in Figure 2. Unfortunately, what we have just described is an alternate timeline that did not happen — but in the future, in similar situations, we believe it could, and we urge people to at least contemplate whether we are right, and whether, if we are, such AI is worth seeking.

Well, I’ve contemplated the issue, and I’m still not on board. First of all, while the researchers claim to want to take their idea from theory to reality, they ignore the fact that there are 400 million privately owned firearms in this country that aren’t going anywhere. Even if this technology were perfected (which it most definitely is not at the moment), there’s no easy way to get rid of all of the “dumb guns” out there, particularly since the vast majority of gun owners wouldn’t be inclined to give them up willingly.

The researchers also seem to view the world as binary; there are malevolent humans carrying guns and law enforcement carrying guns, but there don’t seem to be too many law-abiding armed citizens running around. Let’s say that, to use their example, a guy brings out an AR-15 in a shopping center parking lot. What happens to an armed citizen who spots the man drawing a bead on an innocent target and draws their own pistol in order to stop the threat? What is the ethical decision in that circumstance? Does the AI lock the gun because it determines the risk of a missed shot is higher than the possibility of stopping an attack? Does the ethical decision change if the attacker has already fired a shot at someone? What happens if the gun owner sees a threat, but the camera mounted on the gun that allows the AI to “see” its surroundings doesn’t pick up the threat?

The information that any firearm-based AI would receive is going to be far less than what our own eyes and ears tell us, but these researchers are asking gun owners to put their own ability to judge threats aside and allow a machine to make that decision for them. I don’t know about you, but that gets a whole pile of “nopes” from me.

At The Next Web, writer Tristan Greene glosses over a couple of other issues with the idea from the RPI researchers.

Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as, for example, home invasion and domestic violence, but the groundwork is already there.

If you look at driverless cars, we know people have already died because they relied on an AI to protect them. But we also know that the potential to save tens of thousands of lives is too great to ignore in the face of a, so far, relatively small number of accidental fatalities.

It’s likely that, just like Tesla’s AI, a gun control AI could result in accidental and unnecessary deaths. But approximately 24,000 people die annually in the US due to suicide by firearm, 1,500 children are killed by gun violence, and almost 14,000 adults are murdered with guns. It stands to reason an AI-intervention could significantly decrease those numbers.

In other words, yes, some innocent people are likely to die if this idea were ever put in place, but because innocent people are already losing their lives to violent crimes, accidents, and suicides, in Greene’s view the tradeoff is worth it.

I’m not convinced. Beyond the potential for any AI-equipped smart gun to be hacked or cracked, the AI itself is going to be limited to the data it receives, and that’s a huge issue. Take domestic violence, for instance. In theory, an AI-equipped gun might lock itself so that an abuser can’t use it to target their spouse or significant other, but would it be able to determine who the initial aggressor was in any given situation? What if that abused partner grabs a gun and points it at their abuser in self-defense? How exactly would the AI be able to determine whether or not the use of a gun would be necessary in that situation?

I think these researchers have a very different definition of “workable” than I do. It may be possible to equip a smart gun with artificial intelligence, but that’s no guarantee that the AI’s judgment would actually be better than that of the human holding the firearm. Additionally, with 3D printing advancing to the point that it’s possible to make a rifle in just a few days and for a few hundred bucks, criminals would easily be able to avoid AI-equipped firearms, leaving legal gun owners as the only ones who have to get permission from their gun before they use it.

The technology may be ready, but that doesn’t make the idea a good one. I’ll take a pass on a gun with AI, and I suspect that most gun owners feel the same.

https://bearingarms.com/camedwards/2021/02/24/researchers-smart-guns-ethical-ai-n41415