Armed Robotic Systems A.K.A. “Killer Robots” [sic]

[Mark Safranski/ “zen”]

Dr. Robert Bunker emailed to alert me that the Strategic Studies Institute has released his monograph Armed Robotic Systems Emergence: Weapons Systems Life Cycles Analysis and New Strategic Realities. From the synopsis:

Armed robotic systems—drones and droids—now emerging on the battlefield portend new strategic realities not only for U.S. forces but also for our allies and future potential belligerents. Numerous questions of immediate warfighting importance come to mind with the fielding of these drones and droids that are viewed as still being in their experimental and entrepreneurial stage of development. By drawing upon historical weapons systems life cycles case studies, focusing on the early 9th through the mid-16th-century knight, the mid-19th through the later 20th-century battleship, and the early 20th through the early 21st-century tank, the monograph provides military historical context related to their emergence, and better allows both for questions related to warfighting to be addressed, and policy recommendations related to them to be initially provided.

Bunker correctly explains the degree to which this topic has already been overhyped, and that AI that could operate even at the level of “a trained animal” is at best a prospect for the near-term future. To use an aerial analogy, autonomous combat droids today are not even in the era of the fragile WWI biplane but closer to Orville and Wilbur Wright’s bicycle shop before Kitty Hawk. Bunker’s use of a historical, evolutionary framework for armed robotics is apt.

Nevertheless, the subject continues to captivate the media and our think tanks. Here, for purposes of comparison, is the 2014 CNAS report Prepare for Robotic Warfare by Robert Work, later Deputy Secretary of Defense under Presidents Obama and Trump, and CNAS VP Shawn Brimley. There are other similar studies to be found online. Driving this is the logical inevitability (which the technology is far from catching up to) that robotic warfare systems, if produced at economies of scale, would be effective force multipliers, especially for smaller powers or for deep-pocketed private entities and insurgent groups.

Someday.

  1. Andy:

    Zen,

    Have you seen the slaughterbots video yet? It’s a production from the Future of Life Institute.

    https://youtu.be/HipTO_7mUOw

  2. zen:

    That was good. What would reality look like if that were common? Changed architecture, netting, electronic jamming zones, quasi-subterranean complexes.

  3. Jim Gant:

    Zen,

    Hope all is well… I read ‘Superintelligence’ by Nick Bostrom not long ago. There would obviously be a direct tie-in between armed robotic systems, drones, satellites, direct access to intelligence (both military intelligence and intelligence from the CIA and others), lethal weapons systems, and artificial intelligence. That makes for a pretty unstable platform… but one that is coming for sure. Someday.

    Throw in a highly trained person with little or no command-and-control restraints, coupled with the right approvals and authorities, and you will have that ‘Super Empowered Individual’ we have talked about before. Someday.

    Always a joy to read your stuff!

    Take care,

    Jim

  4. zen:

    Hi Jim,
    On your rec I gave this PDF a quick read. It seems to be the nucleus piece from which Mr. Bostrom developed the book.
    https://nickbostrom.com/views/superintelligence.pdf
    I’m not versed in this subject, so I tend to listen carefully to guys like Adam Elkus, who is doing PhD work on AI-related theories, but here’s a thought that may alter any forecast:
    The best use of a next-generation, order-of-magnitude qualitative improvement in computing, whether AI or quantum computing, would be to immediately put the prototypes into an array to work on currently intractable problems, including AI structure/capacity/architecture problems. Having leapfrogged any competitor with the prototype, the first state or entity with the computing breakthrough would then try to parlay it into broad strategic advantages by accumulating further breakthroughs in a string of successes. While this would likely happen in a variety of unrelated fields, the breakthrough after the breakthrough in AI is probably not easily predictable (much in the same way that long-term tech predictions over 15 or so years are notorious for being incorrect, oblivious, or overoptimistic). Things may go in an entirely unforeseen and unforeseeable direction, akin to the leap from Newtonian physics to Relativity and Quantum Mechanics at the turn of the 20th century. And this is something that could leave the U.S. screwed if other powers make the leap first. It might not be an easily recoverable loss.

  5. zen:

    Jim,
    Perhaps the implications of superintelligence will follow this established pattern:
    http://necsi.edu/projects/yaneer/Civilization.html