The UN Is Afraid of Killer Robots. Here’s Why
Thanks to rapid advances in AI, autonomous weapons
systems, or killer robots in colloquial terms, might soon become a
reality. In response, the UN has adopted a resolution aimed at reining
these systems in. Such weapons can acquire targets without any human
involvement whatsoever, which makes them an especially dangerous
outcome of the current AI race.
Harvard law lecturer Bonnie Docherty recently spoke out about this
issue. She described autonomous weapon systems as weapons that rely on
sensor inputs rather than human input to determine their targets. Such
systems have in fact already been used multiple times, though in forms
far less sophisticated than what they may eventually become.
Systems used during the ethnic conflict in the Nagorno-Karabakh region
were able to identify targets on their own. The same can be said of
the systems deployed in the Libya conflict, some of which are referred
to as loitering munitions. These weapons can hover over the battlefield
and deploy their payloads as soon as an enemy target is detected, even
if no human ordered the strike.
Needless to say, autonomous
weapon systems come with a whole host of ethical concerns. They reduce
the taking of human life to a matter of numbers and data, which many
consider to be crossing a line.
Algorithmic bias is also
essential to consider, since these systems could end up discriminating
against people based on their ethnicity, gender, and other
characteristics. Even disabled individuals could end up being targeted,
with AI-based targeting systems unable to apply human rights
protections where circumstances call for them.
Apart from ethical considerations, legal concerns have also arisen.
Machines might not be able to differentiate between military combatants
and humans who are present on the battlefield in a civilian capacity.
Human judgement is essential here, because weighing civilian casualties
against military outcomes is not something a machine can be trusted to do.
This
involves something called the proportionality test, in which a
commander determines whether the expected loss of civilian life is
justified by the anticipated military advantage. For all of its
advancement, AI can’t yet be programmed to display that kind of human
judgement.
This raises another important
question. If an AI can’t exercise judgement, how can it be held
accountable for any potential atrocities or crimes against humanity? At
the same time, the operator of the system can’t easily be held
accountable either, since they aren’t technically the one who ordered
the attack.
So far, attempts to ban autonomous weapon systems
have met stiff resistance from countries like Russia. Even the US and
the UK have proposed non-binding resolutions in order to leave the door
open for future use of these systems should the need arise. Indeed, a
number of countries prefer non-binding measures, and each of them,
coincidentally, is developing autonomous weapon systems of its own.
As
it currently stands, the UN is gathering views on the matter from
member states and civil society. 164 member states voted in favor of
the resolution, and it will be interesting to see where things go from
here. According to the UN Secretary General, a new treaty might be
coming as early as 2026. If it fails to attract the required number of
votes, the potential loss of life could be staggering. Unlike landmines
and other munitions, these aren’t tried and tested weapons yet, which
might make securing a vote harder in the long run.