Russian Weapons Manufacturer Builds Fully-Automated “Killer Robot”


Back in 1997, a movie that delved into a very controversial subject raised the question of whether new technologies that could pinpoint an unborn baby’s genetic tendencies, and predict its potential path of success or failure, would be morally reprehensible and therefore illegal.  The movie was called “Gattaca,” and it followed a story set in a near-future United States where scientists could literally identify specific medical problems and give parents the chance to “fix” a child before deciding whether they wanted it.  The technology evolved to the point where they could tell what the child would look like and what gender it would be, and eventually where such information could be manipulated so that a child’s inadequacies could be “erased,” thereby creating superior children, perfect in every way.

This concept was so disturbing that in 2008 Congress was moved to ban discrimination based on genetic information, passing the Genetic Information Nondiscrimination Act (GINA).

Just as genetics has been viewed as a moral issue, so too has another technology that has been emerging as a viable alternative to human-involved warfare: lethal robotic weaponry that acts on its own programming, without the “interference” of human control.

Kalashnikov, the maker of the iconic AK-47, is one of the manufacturers bringing lethal automation and robotics into the present day, as it is currently building a range of products based on “neural networks.”

Defense One is reporting:

The maker of the famous AK-47 rifle is building “a range of products based on neural networks,” including a “fully automated combat module” that can identify and shoot at its targets. That’s what Kalashnikov spokeswoman Sofiya Ivanova told TASS, a Russian government information agency last week. It’s the latest illustration of how the U.S. and Russia differ as they develop artificial intelligence and robotics for warfare.

The Kalashnikov “combat module” will consist of a gun connected to a console that constantly crunches image data “to identify targets and make decisions,” Ivanova told TASS. A Kalashnikov photo that ran with the TASS piece showed a turret-mounted weapon that appeared to fire rounds of 25mm…”

First off, props to whoever at Kalashnikov decided to hire a spokeswoman for robotic lethal weaponry named Sofiya Ivanova…sorry, but that is a name right out of a sci-fi spy novel.  That aside, I assume this so-called “combat module” will resemble a tank or Tactical Operations Center (TOC) unit, with mounted guns linked to a neural network that allows it to “make decisions” based on its database library and its visually observed data, all free of human control.  In other words, a killer robot programmed to destroy the enemy.

Defense One points out that in 2012, then-Deputy Defense Secretary Ash Carter signed a directive forbidding the U.S. military from allowing any robot or machine to take lethal action without the supervision of a human operator.

Then in 2015, then-Deputy Defense Secretary Bob Work said fully automated killing machines were un-American.

“I will make a hypothesis: that authoritarian regimes who believe people are weaknesses,” Work said, “that they cannot be trusted, they will naturally gravitate toward totally automated solutions. Why do I know that? Because that is exactly the way the Soviets conceived of their reconnaissance strike complex. It was going to be completely automated. We believe that the advantage we have as we start this competition is our people.”

According to Sergey Denisentsev, a visiting fellow at the Center for Strategic and International Studies, Russian weapons makers see robotics and the artificial intelligence driving them as key to future sales to war makers.

“There is a need to look for new market niches such as electronic warfare systems, small submarines, and robots, but that will require strong promotional effort because a new technology sometimes finds it hard to find a buyer and to convince the buyer that he really needs it,” Denisentsev said earlier this year.

As you can see, there is a vast difference in the views of American and Russian officials insofar as robotic and AI weaponry is concerned.  At least, that’s the way it appears.  While American officials publicly and aggressively oppose robotic weaponry, the chances that this is the majority opinion in the military-industrial complex are quite remote.

For decades, Hollywood has enjoyed depicting American military commanders as bellicose warmongers who view robotic weaponry (and anything else that can cause mass extinction with minimal effort) as the apex of modern-day warfare.  To achieve abilities such as these, previously only dreamt about in sci-fi novels, would be the equivalent of a Leftist’s dream of American capitalism being overthrown by socialism.  And it is quite evident from many different sources that American military officials would welcome weaponry of this sort.

Recently there has been some debate at the U.N. about “killer robots,” with prominent scientists, researchers, and human rights organizations all warning that this type of technology – lethal tech that removes the need for human control – could cause a slew of unintended consequences to the detriment of humanity.

A study conducted by the University of British Columbia shows that this type of Terminator-like weaponry isn’t sitting well with the general public: an overwhelming majority of people, regardless of country or culture, want a complete ban placed upon any further development of these autonomous systems of war.

This is all well and good, this talk of what many people don’t want to see or support in the realm of robotic weaponry, but the simple fact of the matter is that much of this is already underway. There’s no doubt that quite a few of the industrialized nations with substantial military budgets are delving into this arm of military hardware, and America is, no doubt, among the top few pouring millions if not billions into this field of military science. To say that we wouldn’t, to paraphrase Dr. Ian Malcolm once again, is to assume that these officials “took the time to find out if we shouldn’t.”

Chances are, they didn’t.

Source: ZeroHedge


