‘It Is Skynet’: Pentagon Envisions Robot Armies in a Decade

Posted: 23rd April 2023

The Pentagon’s quest for an AI-dominated battlefield is becoming a reality

WASHINGTON—Robotic killing machines prowl the land, the skies, and the seas. They are fully automated, seeking out and engaging with adversarial robots across every domain of war. Their human handlers are relegated to the rearguard, overseeing the action at a distance while conflicts are fought and won by machines.

Far from science fiction, this is the vision of Joint Chiefs of Staff Chairman Gen. Mark Milley.

The United States, according to Milley, is in the throes of one of the myriad revolutions in military affairs that have spanned history.

Such revolutions have spanned from the invention of the stirrup to the adoption of the firearm to the deployment of mechanized maneuver warfare and, now, to the mass fielding of robotics and artificial intelligence (AI).

It is a shift in the character of war, Milley believes, greater than any to have come before.

“Today we are in … probably the biggest change in military history,” Milley said during a March 31 discussion with Defense One.

“We’re at a pivotal moment in history from a military standpoint. We’re at what amounts to a fundamental change in the very character of war.”


Robotic Armies in 10 Years

Many would no doubt be more comfortable with the idea of robots battling for control of Earth if it were in a science-fiction novel or on a movie screen rather than on the list of priorities of the military's highest-ranking officer.

Milley believes, however, that the world’s most powerful armies will be predominantly robotic within the next decade, and he means for the United States to be the first across that cybernetic Rubicon.

“Over the next ten to fifteen years, you’ll see large portions of advanced countries’ militaries become robotic,” Milley said. “If you add robotics with artificial intelligence and precision munitions and the ability to see at range, you’ve got the mix of a real fundamental change.”

“That’s coming. Those changes, that technology … we are looking at inside of 10 years.”

That means that the United States has “five to seven years to make some fundamental modifications to our military,” Milley says, because the nation’s adversaries are seeking to deploy robotics and AI in the same manner, but with Americans in their sights.

The nation that gets there first, that deploys robotics and AI together in a cohesive way, he says, will dominate the next war.

“I would submit that the country, the nation-state, that takes those technologies and adapts them most effectively and optimizes them for military operations, that country is probably going to have a decisive advantage at the beginning of the next conflict,” Milley said.

The global consequences of such a shift in the character of war are difficult to overstate.

Milley compared the ongoing struggle to form a new way of war to the competition that occurred between the world wars.

In that era, Milley says, all the nations of Europe had access to new technologies ranging from mechanized vehicles to radio to chemical weapons. All of them could have developed the unified concept of maneuver warfare that replaced the attrition warfare which had defined World War I.

But only one, he said, first integrated their use into a bona fide new way of war.

“That country, Nazi Germany, overran Europe in a very, very short period of time … because they were able to take those technologies and put them together in a doctrine which we now know as Blitzkrieg,” he said.


Blitzkrieg 2040

Milley, and the Pentagon with him, hopes to do the same now by bringing together emergent capabilities like robotics, AI, cyber and space platforms, and precision munitions into a cohesive doctrine of war.

By being the first to integrate these technologies into a new concept, Milley says, the United States can rule the future battlefield.

To that end, the Pentagon is experimenting with new unmanned aerial, ground, and undersea vehicles, as well as seeking to exploit the pervasiveness of non-military smart technologies from watches to fitness trackers.

Though the effort is just gaining traction, Milley has in fact claimed since 2016 that the U.S. military would field substantial robotic ground forces and AI capabilities by 2030.

Just weeks from now, that effort will take a significant step forward, when invitations from the Defense Department (DoD) go out to leaders across the defense, tech, and academic spheres for the Pentagon's first-ever conference on building "trusted AI and autonomy" for future wars.

The Pentagon is on a corresponding hiring spree, seeking to pay six figures annually for experts willing and able to develop and integrate technologies including "augmented reality, artificial intelligence, human state monitoring, and autonomous unmanned systems."

Likewise, the U.S. Army Futures Command, created in 2018, maintains as a critical goal the designing of what it calls “Army 2040.” In other words, the AI-dependent, robotic military of the future.

Though slightly further out than Milley’s assumption of 10 to 15 years, Futures Command deputy commanding general Lt. Gen. Ross Coffman believes that 2040 will mark the United States’ true entry into an age characterized by artificially intelligent killing machines.

Speaking at a March 28 summit of DoD leaders and technology experts, Coffman described the partnership between man and machine that he envisions for the future, relating it to the relationship between a dog and its master.

Rather than having AI help soldiers get into the fight, however, Coffman believes humans will be helping machines to the battlefield.

“I think we’re going to see a flip in 2040,” Coffman said, “where humans are doing those functions that allow the machine to get into a position of relative advantage, not the machine getting humans into a position of relative advantage.”


‘Everything Spins Out of Control’

Remaking the American military and forming a new, cohesive way of war is a tall order. It is nevertheless one that the Pentagon appears prepared to pay for.

The DoD is requesting a record $1.8 billion in funding for AI projects for the next year alone. That amount will exceed the estimated $1.6 billion in AI investments being made by China’s military.

Much of it is also earmarked for initiatives to improve the decision-making of autonomous weapons systems.

The effort appears at the very least to be a real start toward Milley’s vision of fielding autonomous systems en masse. It also raises deep concerns about what the next war could look like, and whether the very much human DoD leadership is adequately prepared for managing its autonomous creations.

John Mills, former director of cybersecurity policy, strategy, and international affairs at the Office of the U.S. Secretary of Defense, believes that this path is rife with the potential for unintended consequences.

“It is Skynet,” Mills told the Epoch Times, referencing the fictional AI that conquers the world in “The Terminator” movie franchise. “It is the realization of a Skynet-like environment.”

“The question is, what could possibly go wrong with this situation? Well, a lot.”

Mills doesn’t believe AI deserves all the mystique it’s been given in popular culture, but he is concerned about the apparent trend in military decision-making toward building systems with real autonomy. That is, systems capable of making the decision to kill without first obtaining human approval.

“[AI] sounds dark and mysterious, but it’s really big data, the ability to ingest and analyze that data with big analytics, and the key thing now is to action that data, often without human interaction,” Mills said.

The loss of this “man-in-the-loop” in many proposed future technologies is thus a cause for concern.

Training human beings to correctly distinguish between friend and foe before engaging in kinetic action is complicated enough, Mills believes. Much more so with machines.

“What’s different now is the ability to action these incredible data sets autonomously and without human interaction,” Mills said.

“The integration of AI with autonomous vehicles, and letting them action independently without human decision-making, that’s where everything spins out of control.”

To that end, Mills expressed concern about what a future conflict might look like between the United States and its allies, and China in the Indo-Pacific.

Imagine, he said, an undersea battlespace in which autonomous submarines and other weapons systems littered the seas.

Fielded by Chinese, American, Korean, Australian, Indian, and Japanese forces, the resulting chaos would likely end with autonomous systems engaging in war throughout the region, while manned vessels held back and sought to best launch the next group of robotic war machines. Anything else would risk putting real lives in the way of the automated killers.

“How do you plan for engagement scenarios with autonomous undersea vehicles?” Mills said.

“This is going to be absolute chaos in subsurface warfare.”


Automated Killing

To be sure, preventing the automated killing of combatants by artificially intelligent systems is something the Pentagon has thought about for a long time.

The 2018 Artificial Intelligence Strategy, for example, sought to accelerate AI adoption across the DoD while seeking ethical approaches to “reduce unintentional harm.”

The 2020 Ethical Principles for Artificial Intelligence likewise sought to ensure that only “trustworthy” and “governable” AI technologies were adopted by the military.

The 2022 Responsible Artificial Intelligence Strategy and Implementation Pathway (pdf), meanwhile, outlined a plan to mitigate the unintended consequences that could result from the deployment of AI in military systems.

None of these efforts, however, will actually prevent the adoption of fully autonomous killing machines. Indeed, they were never intended to.

That’s because all such documents were crafted under the guidance of DoD Directive 3000.09 (pdf), the Pentagon’s guiding document for the development of autonomous weapons systems.

“That’s foundational,” Mills said of the document. “It’s very important because it drives development.”

Originally issued in 2012, the document just received a major overhaul in January, meant to prepare the Pentagon for what DoD Director of Emerging Capabilities Policy Michael Horowitz said at the time was a “dramatic, expanded vision for the role of artificial intelligence in the future of the American military.”

There is just one caveat to that ethical, trustworthy, governable deployment of lethal AI systems: The Pentagon does not have any hard and fast rules to prohibit autonomous systems from killing.

Indeed, while 3000.09 is often referenced by proponents of man-in-the-loop technologies, the document does not actually promote such technologies, nor does it prohibit the use of fully automated lethal systems.

Instead, the document outlines a series of rigorous reviews that proposed autonomous systems must go through. And, while no independent AI weapon systems have made it through that process yet, the future is likely to see many such systems.

This is due in no small part to the fact that China’s communist regime is rapidly working to field its own automated killing machines, and the DoD will have to prepare to meet that threat head-on, all the while attempting to retain American values.

“[China is] trying to address these hard problems also, of allowing [AI] to engage without human intervention,” Mills said.

“I think their proclivity is to allow it even if they accidentally kill their own people.”

To that end, the next war may well be one fought primarily between artificially intelligent robots, with human handlers standing at the sidelines, trying their best to direct the action.

Whether the United States can manage that without losing control of its creations remains to be seen.

Mills is hopeful that, if anyone can do it, it is the United States. After all, he says, we have the best human talent.

“I think we still have enough guardrails where it will be iterative, so that we can become smarter and learn to build into the algorithms precautions and control measures,” Mills said.

“I think we have good teams and people in place.”

The Epoch Times has reached out to the Pentagon for comment.