zenpundit.com » theory

Archive for the ‘theory’ Category

Coercion and Social Cohesion

Thursday, December 26th, 2013

(by Adam Elkus)

Reader PRBeckman left an excellent comment on my “Legibility at War” post, placing the WWI draft effort in perspective:

The federal government wanted to conscript millions of eligible men, but it had no information about those men and it lacked the institutions and money to gather that information so it depended upon private, voluntary organizations to fill the gaps. This is where the culture of voluntary associations reveals its dark side. The army’s estimate suggested that perhaps 3 million men never registered at all. This illegibility was a great dilemma and that’s where voluntary associations came in. Americans of this era are famous for their prolific creations of associations of every kind. You would think that would be a good thing except that they too often veered into vigilantism. These organizations were populated by people who weren’t themselves eligible for the draft, but they saw it as their duty to ensure that those who were eligible weren’t shirking. Organizations were formed all over the country, the most prominent being the American Protective League which counted 250,000 members. In 1917 and 1918 the APL and these other organizations, in collaboration with federal, state & local gov’ts, ran “slacker raids” to try to find those men who were eligible but who hadn’t registered. The accounts of these raids are frightening. The raids varied in size but they culminated in a massive operation in New York City on September 3-5, 1918:

“The APL later estimated that somewhere between twenty thousand and thirty thousand men participated: city police, government agents from the Department of Justice, more than two thousand soldiers and one thousand sailors, and thousands of American Protective League operatives. For three days they scoured the city’s streets and public places interrogating somewhere between 300,000 and 500,000 men. A man who lacked a draft registration or classification card found himself escorted by these self-appointed authorities to the nearest police station.”

They surrounded the “exits and entrances of every train, ferry, subway” station, “cordoning off whole blocks and interrogating men on the street. Later they raided theaters, saloons, billiard parlors, and boarding houses. Sailors wandered through the city’s restaurants moving from table to table inspecting the cards of diners.”

All the consequence of trying to achieve ‘legibility’. And it would have an impact on concepts of citizenship, changing how citizens interacted with their government. The WW1 period was the transition era from the “illegible,” “wild and unruly forest”-era of citizenship to one that has taken on “a more legible shape.”

It’s worth pondering this when we hear endless appeals from pundits about how, if our politicians and partisans were only forced to abandon their substantive political differences and get together, if our populace were regimented by a peacetime draft unconnected to urgent military danger for the purpose of social cohesion, we would somehow be a more perfect union. John Schindler rightly dispenses with these ideas:

A Swiss-style mass reserve force would make a great deal of sense if the United States worried about actual invasion from Canada or Mexico, something which even Sheriff Joe Arpaio doesn’t think is a realistic threat. Otherwise, not so much.

Moreover, what would the U.S. military do with all those people? Since, unless you want to replicate the worst features of the pre-1973 draft, when flimsy exemptions abounded that privileged the privileged, the Selective Service system would have to direct millions of young men (and women too? how, in gender-equal 21st century America, could they be excluded?) into the forces. Even allowing that a high percentage of young people would be kept out on grounds of rising obesity and general idiocy that are spreading in wildfire fashion among American youths – many place that number at seventy-five percent unfit for military service these days – the Pentagon would need to find lots of make-work work for many big battalions of teenagers.

I don’t hear anyone suggesting a draft period of two years, as it was before 1973, so we’d be talking about a one year – twelve months – service period at most (Austria is down to six months coerced service, as a reference point, which has limited functional utility for the active forces.) Which would mean the U.S. military would have to invest in a vast training system resulting in lots of units filled with half-trained troops plus many others counting the days until they get out. It’s not difficult to see why you hardly ever meet career military types, of any rank, with any enthusiasm for restoring peacetime conscription.

Schindler ascribes this to the utopian dreams of pundits who never had to endure military discipline themselves but want someone else’s sons and daughters to do so. Even this, however, is too charitable. I wrote and scrapped a column for War on the Rocks that analyzed this at length (it was getting too dense for a typical op-ed format), and I came to the conclusion that there is a strong element of authoritarianism in this.

The idea is that, in essence, with a regimented body of Americans we have cohesion again — cohesion, however, defined by the pundit’s own views about what politics America ought to have. What Dana Milbank’s column (which Schindler’s column rebuts) amounts to is the idea that a regimented America is one that will be more likely to agree with his own subjective political beliefs. The key is his sentence at the end that the ultimate goal of this would be to undo the damage of “self-interested leaders,” and the fact that the shutdown was the impetus for his column:

It’s no coincidence that this same period has seen the gradual collapse of our ability to govern ourselves: a loss of control over the nation’s debt, legislative stalemate and a disabling partisanship. It’s no coincidence, either, that Americans’ approval of Congress has dropped to just 9 percent, the lowest since Gallup began asking the question 39 years ago.

If partisanship is what regimentation seeks to cure, then the unspoken assumption is that a drafted public is one more likely to share Dana Milbank’s view of American governance. Let us be direct: his view of governance is one that conflates ideological disagreement (combined with the particularities of the US system) with pettiness and flaws of character. And the implication is that regimentation, authority, and discipline will reduce disorder and make American politics legible to him and other observers — like the China-fetishizing New York Times columnist Thomas Friedman.

Don’t get me wrong, I found the shutdown disturbing too. I dislike partisanship as well. I think that the shutdown was also a failure of American governance. But it had complex structural causes, not some sudden decline in the character of Americans raised on butter instead of guns. Structure, particularly when combined with ideology, matters. We should be very careful when an intellectual avoids existing structural analysis, warns of societal decadence, offers a mono-causal explanation for a complex set of social problems, and declares that we must regiment ourselves and quash disagreement to save the polity.

In any event, I’d rather have vigorous partisanship and democracy (even if it results in gridlock) than the kind of America Milbank seemingly wants to build. And knowledge of history should make us very cautious about the recurring figure of the intellectual proposing coercion for the sake of order, cohesion, and discipline in society. Diversity builds robustness and strength, and centralization and regimentation can have substantial costs.

Legibility at War

Monday, December 23rd, 2013

(by Adam Elkus)

Apropos of a conversation I had with infosec provocateur The Grugq last night and a previous conversation with Nick Prime, a short comment on this piece on US covert aid to Colombia (mostly quotations from others):

Only then would Colombian ground forces arrive to round up prisoners, collecting the dead, as well as cellphones, computers and hard drives. The CIA also spent three years training Colombian close air support teams on using lasers to clandestinely guide pilots and laser-guided smart bombs to their targets. Most every operation relied heavily on NSA signal intercepts, which fed intelligence to troops on the ground or pilots before and during an operation. “Intercepts … were a game changer,” said Scoggins, of U.S. Southern Command. The round-the-clock nature of the NSA’s work was captured in a secret State Department cable released by WikiLeaks.

In the spring of 2009, the target was drug trafficker Daniel Rendon Herrera, known as Don Mario, then Colombia’s most wanted man and responsible for 3,000 assassinations over an 18-month period. “For seven days, using signal and human intelligence,” NSA assets “worked day and night” to reposition 250 U.S.-trained and equipped airborne commandos near Herrera as he tried to flee, according to an April 2009 cable and a senior government official who confirmed the NSA’s role in the mission.

The piece mainly focuses on the use of intelligence, precision weapons, and targeting to kill off key FARC leaders, even if the quoted paragraph talks about a drug lord. I included it because it was the most succinct summary of the methodologies used. What the piece really shows is the exporting of “industrial” counterterrorism and counter-guerrilla targeting methods pioneered in Iraq by special operations forces. These methods differ from older ones in Vietnam in their speed and technological sophistication, and they differ from the “Killing Pablo” mission in the sheer scale of the problem (a resilient insurgent group, not just a drug kingpin). And it’s based in large part on metadata, as Jack McDonald argued:

For me, the importance of Prism, and like efforts, isn’t the question of government invasions of privacy, but rather the ability of the government to use violence against a population. Regardless of the strategic end, analysis of metadata allowed the American government to pull apart Baghdad bomb networks in a way that would have been far more difficult without it, if not impossible. If a couple of thousand special forces soldiers could do that in a foreign country, think what the same capability could do in a domestic context. This capability, I think, is what re-writes the social contract in favour of the government. The reason for this is that it alters the latent balance of violence between the state and the population. I think, however, that this takes place alongside another changing relationship, which is the balance of violence between individuals and the population. In the security/liberty debate we tend to focus on the former, sometimes forgetting the latter. We don’t like big states because they can oppress us, but at the same time, these days, individuals can do that, too.

The NSA caper is one outgrowth of the increasing legibility of social systems (in one respect) that the rise in graph analysis technologies, databases, and improved intelligence collection techniques brings. Here’s a bit on legibility, with Venkatesh Rao riffing off James C. Scott’s Seeing Like A State: 

The book is about the 2-3 century long process by which modern states reorganized the societies they governed, to make them more legible to the apparatus of governance. The state is not actually interested in the rich functional structure and complex behavior of the very organic entities that it governs (and indeed, is part of, rather than “above”). It merely views them as resources that must be organized in order to yield optimal returns according to a centralized, narrow, and strictly utilitarian logic. The attempt to maximize returns need not arise from the grasping greed of a predatory state. In fact, the dynamic is most often driven by a genuine desire to improve the lot of the people, on the part of governments with a popular, left-of-center mandate. Hence the subtitle (don’t jump to the conclusion that this is a simplistic anti-big-government conservative/libertarian view though; this failure mode is ideology-neutral, since it arises from a flawed pattern of reasoning rather than values).

The book begins with an early example, “scientific” forestry (illustrated in the picture above). The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor’s eyes, and this gave birth to “scientific” forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes — better recognized these days as the problems of monoculture — were inevitable.

The picture is not an exception, and the word “legibility” is not a metaphor; the actual visual/textual sense of the word (as in “readability”) is what is meant. The book is full of thought-provoking pictures like this: farmland neatly divided up into squares versus farmland that is confusing to the eye, but conforms to the constraints of local topography, soil quality, and hydrological patterns; rational and unlivable grid-cities like Brasilia, versus chaotic and alive cities like Sao Paolo. This might explain, by the way, why I resonated so strongly with the book.  The name “ribbonfarm” is inspired by the history of the geography of Detroit and its roots in “ribbon farms” (see my About page and the historic picture of Detroit ribbon farms below).

 

Metaheuristics of War

Monday, December 23rd, 2013

(by Adam Elkus)

I have been thinking about the problem of the “principles of war,” and various military authors’ differing takes on the viability of the concept. This is perhaps the best way to respond to the thought-provoking post Lynn Rees fashioned out of fragments of our gchat conversations.

Principles of war remain part of military manuals the world over, despite the fact that historical work has exposed substantial variance in their content. Principles of war evolve over time. John Alger’s work is particularly interesting on this question. The basic pattern, as one reviewer of Alger’s book argued, was a canonization of Napoleonic principles followed by a grafting of midcentury combined arms warfare onto that canon. However, this element of relative consensus proved short-lived.

There has been widespread debate over whether “principles of war” are still valid for the so-called information age or irregular warfare. Military theorist William F. Owen’s praise for Robert Leonhard’s late-1990s information-age update of the principles caused me to read it in college, and I found it very enlightening, if overly optimistic about Transformation-era technologies. The principles of war are also perpetually re-defined in countless books, articles, military college student monographs, and PowerPoint slides.

The way principles of war became proxies for principles of Napoleonic warfare leads us to question whether there can be principles of war that generalize. If we take Clausewitz’s injunction about politics seriously, we realize that while war may have an underlying logic, everything else will vary. Hence the problem with Basil Liddell-Hart’s book Strategy: it tortures every historical example until it yields support for the indirect approach. A recent criticism of John Boyd elucidated this point as well. Boyd indulges in the conceptual equivalent of German attrition strategy at Verdun to force military history to conform to his PowerPoint magnum opus. Not surprisingly, Boyd inflicts grave losses on his opponent but is unable to extract much strategic advantage relative to his own costs.

To seek time-invariant principles of war risks indulging in a John Yoo approach to military history. Indeed, books like Liddell-Hart’s own Great Captains Unveiled waterboard great military personages like Subotai and De Saxe until they cry “I’ll talk! I’ll talk! I won because I used the indirect approach! Just make the pain stop!” Torture is immoral and ineffectual in public policy, so why apply it to military history?

So what to do? One solution is to try to boil principles of war down to pithy nubs stripped of unnecessary detail that express timeless truths about the “best practices” of warfighting — and build a doctrinal scaffolding around them. It would prune even highly abstract principles of war seen in doctrine down to more defensible levels of abstraction. But this idea suffers from several problems.

First, we dramatically overstate our current ability to tell what is “timeless.” That is the core of Rees’ recent entry – we are far more confused than we believe. And if the current, aphoristic principles of war were enough, would we see such a frenzy to re-define the terminology? It strikes me that what defense professionals often seek is a way to take principles down from the 747 jet flight level to the granular world of practice. As a result, they often turn to vulgar novelty over tradition when they are really searching for a process that might help them navigate the mismatch between supposed timeless principles and the actual problems they face.

Traditionalists (often correctly) believe this desire for novelty stems from fads, pressure to conform to political or bureaucratic directives, and personal empire-building. But in the last 12 years there has been a sincere outpouring of angst from soldiers, intelligence analysts, and civilian policy analysts in the government sector who find that principle of war aphorisms are not enough. One might not agree with Emile Simpson’s contentious take on war and politics, but he wrote the book because so-called timeless truths obviously did not help Simpson do his military job in Afghanistan. And I have often seen Mark Safranski argue here over the years that the concept of Fourth Generation Warfare was necessary as a forcing mechanism to get the US military to adapt to challenges it faced in Iraq and Afghanistan.

It is tempting to respond to this by saying “they need to read ___ old strategy master I like and study military history in the subjective way I like until they can understand strategy.” But this is a recipe for indoctrination, since “understanding” = agreeing with the old strategy master + the aforementioned fuzzy and didactic approach to extracting timeless or eternal ideas from military history. Instead, we might introduce metaheuristics of war as a complementary concept to the principles of war:

Metaheuristics is a rather unfortunate term often used to describe a major subfield, indeed the primary subfield, of stochastic optimization. Stochastic optimization is the general class of algorithms and techniques which employ some degree of randomness to find optimal (or as optimal as possible) solutions to hard problems. Metaheuristics are the most general of these kinds of algorithms, and are applied to a very wide range of problems.

What kinds of problems? In Jacobellis v. Ohio (1964, regarding obscenity), the United States Supreme Court Justice Potter Stewart famously wrote,

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

Metaheuristics are applied to I know it when I see it problems. They’re algorithms used to find answers to problems when you have very little to help you: you don’t know what the optimal solution looks like, you don’t know how to go about finding it in a principled way, you have very little heuristic information to go on, and brute-force search is out of the question because the space is too large. But if you’re given a candidate solution to your problem, you can test it and assess how good it is. That is, you know a good one when you see it.

Sean Luke, Essentials of Metaheuristics (self-published lecture notes, 2013), 7.

Metaheuristics are not heuristics of heuristics, as Luke notes in a parenthetical comment. Rather, they are algorithms that select useful solutions for problems under the difficult conditions Luke specifies in the above quote. Let’s see an example:

For example: imagine if you’re trying to find an optimal set of robot behaviors for a soccer goalie robot. You have a simulator for the robot and can test any given robot behavior set and assign it a quality (you know a good one when you see it). And you’ve come up with a definition for what robot behavior sets look like in general. But you have no idea what the optimal behavior set is, nor even how to go about finding it.

The simplest thing you could do in this situation is Random Search: just try random behavior sets as long as you have time, and return the best one you discovered. But before you give up and start doing random search, consider the following alternative, known as Hill-Climbing. Start with a random behavior set. Then make a small, random modification to it and try the new version. If the new version is better, throw the old one away. Else throw the new version away. Now make another small, random modification to your current version (which ever one you didn’t throw away). If this newest version is better, throw away your current version, else throw away the newest version. Repeat as long as you can.

Hill-climbing is a simple metaheuristic algorithm. It exploits a heuristic belief about your space of candidate solutions which is usually true for many problems: that similar solutions tend to behave similarly (and tend to have similar quality), so small modifications will generally result in small, well-behaved changes in quality, allowing us to “climb the hill” of quality up to good solutions. This heuristic belief is one of the central defining features of metaheuristics: indeed, nearly all metaheuristics are essentially elaborate combinations of hill-climbing and random search.
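
Luke’s two baseline algorithms are simple enough to sketch in a few lines of Python. The toy problem below (the hidden target vector, the bounds, and the mutation step size are my own illustrative choices, not anything from Luke’s text) is an “I know it when I see it” problem in miniature: we can score any candidate but have no principled way to derive the optimum directly:

```python
import random

def hill_climb(quality, random_solution, mutate, iterations=1000):
    """Luke's basic hill-climber: keep a candidate, try a small random
    tweak, and keep whichever of the two scores higher."""
    best = random_solution()
    for _ in range(iterations):
        candidate = mutate(best)
        if quality(candidate) >= quality(best):
            best = candidate
    return best

def random_search(quality, random_solution, iterations=1000):
    """Baseline: sample the space blindly and keep the best seen."""
    best = random_solution()
    for _ in range(iterations):
        candidate = random_solution()
        if quality(candidate) > quality(best):
            best = candidate
    return best

# Toy problem: score a candidate by its (negated) squared distance to a
# hidden target vector. We can grade solutions without knowing how to
# construct the best one.
TARGET = [0.3, -1.2, 4.0, 0.7]

def quality(v):
    return -sum((a - b) ** 2 for a, b in zip(v, TARGET))

def random_solution():
    return [random.uniform(-5, 5) for _ in range(4)]

def mutate(v):
    # small, local modification: similar solutions behave similarly
    return [x + random.gauss(0, 0.1) for x in v]

random.seed(0)
hc = hill_climb(quality, random_solution, mutate)
rs = random_search(quality, random_solution)
print(quality(hc), quality(rs))
```

On a smooth landscape like this one, hill-climbing typically ends far closer to the optimum than blind sampling; on a deceptive landscape it can stall at a local peak, which is exactly why, as Luke says, real metaheuristics blend the two.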

One must use caution. When Clausewitz uses a metaphor, he does so because it helps us understand some dimension of the problem being discussed — not because a Center of Gravity in war maps exactly onto the meaning of the Center of Gravity in physics. Boyd does not make this distinction, and thus is vulnerable to criticisms from those who accurately point out that his interpretations of scientific concepts do not match their original usage. The level of abstraction I am discussing in this post must be qualified in this respect, as I hope to avoid repeating Boyd’s mistake.

However, the following aspects of metaheuristics are still appealing in abstract. In many real-world problems, we do not know what an optimal solution looks like. We don’t know how to find it. We have a nub of information we can use, but not much more. Most importantly, the space of possible solutions is too large for us to just use brute force search for an answer:

A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard, and, for each arrangement, check whether each (queen) piece can attack any other.
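
As a concrete sketch of the scale involved (my own illustrative Python, not from the quoted source): the unrestricted brute force described above must examine C(64, 8), roughly 4.4 billion placements, whereas restricting the search to one queen per column and per row shrinks it to 8! = 40,320 permutations that a laptop checks instantly:

```python
from itertools import permutations

def is_solution(perm):
    """perm[col] = row of the queen in that column. Rows and columns are
    distinct by construction, so only diagonal attacks need checking."""
    return all(
        abs(perm[a] - perm[b]) != abs(a - b)
        for a in range(8) for b in range(a + 1, 8)
    )

# Enumerate the pruned space rather than all C(64, 8) raw placements.
solutions = [p for p in permutations(range(8)) if is_solution(p)]
print(len(solutions))  # the eight queens puzzle has 92 solutions
```

Eight queens is small enough that structure rescues brute force; the point of metaheuristics is that most interesting solution spaces have no such pruning available.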

While hill-climbing and random search are inherent in most metaheuristics, there are different types of metaheuristic algorithms for different problems, with varying performance in climbing the “hill of quality.” The approach is thus customizable and recognizes variation in the performance of methods. Some methods will perform well on some problems but will get stuck at a local optimum instead of the global peak when faced with others.

One gigantic caveat: the idea of peaks and valleys in the solution space is derived from the assumption of a static, not dynamically evolving, landscape of candidate solutions. A perfect example is the application of the Ant Colony Optimization method to the notoriously hard Traveling Salesman Problem or the use of genetic algorithms to optimize the Starcraft tech tree’s build orders. When the solution space you are searching and climbing evolves in time, algorithms that assume a static landscape run into problems.

However, this is also why (in more mathematically dense language) nailing down principles of war is so perilous. A solution that you might have used a principle of war to reach is fine at time T. But it loses validity as we shift to T+1 and tick upwards towards T+k. And should you use a principle that better fits war’s grammar in 1830 than 2013, then you are even more screwed.
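
The time-shift hazard is easy to demonstrate with a toy sketch (entirely illustrative; the drifting one-dimensional landscape is my own stand-in, not a model of any real conflict). A hill-climber optimizes against the landscape frozen at time T, and the resulting solution then scores badly once the landscape has moved to T+k:

```python
import random

def fitness(x, t):
    """A one-dimensional landscape whose peak drifts with time t: a toy
    stand-in for war's grammar shifting between 1830 and 2013."""
    peak = 2.0 + 0.5 * t  # the optimum moves as t ticks upward
    return -(x - peak) ** 2

random.seed(1)
x = random.uniform(-5, 5)
for _ in range(2000):  # hill-climb against the landscape frozen at T = 0
    cand = x + random.gauss(0, 0.05)
    if fitness(cand, 0) >= fitness(x, 0):
        x = cand

print(fitness(x, 0))   # near zero: the solution fit the problem at T
print(fitness(x, 10))  # far worse: the same solution, landscape at T + k
```

Nothing about the climber changed between the two evaluations; only the problem moved, which is the whole difficulty with treating yesterday’s solutions as principles.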

The advantage of metaheuristics of war compared to principles of war is that, while both consider solutions to problems with discrete (not continuously shifting) solution landscapes, metaheuristics are about how you find solutions. Hill-climbing is an (oversimplified) method of moving through solutions that exploits heuristic information, and random search is an (also oversimplified) “try it and see what happens.” The process of a metaheuristic involves a combination of both.

In contrast, principles of war are not really a process as much as a set of general guidelines designed to dramatically and a priori shrink the possible space of solutions to be considered, in ways far more sweeping than hill-climbing. They imply a very, very restricted set of solutions while still being too vague to help a practitioner think about how the solutions fit the problem. Principles of war generally say to the practitioner, “generally, you do ___ but how you apply this is up to your specific situation and needs.” They offer a broad set of do’s and don’ts that — by definition — foreclose consideration of possible solutions when they conflict with a given principle.

Yes, they are suggestions not guidelines, but the burden of proof is on the principle-violating solution, not the principle of war.  It may be that many problems will require flagrantly violating a given principle. The Royal Navy’s idea about distributing its forces to deal with the strategic problem posed by early 20th century imperial geopolitics potentially runs afoul of several principles of war, but it still worked. Finally, many principles of war as incorporated in military instruction are shaped more by cultural bias than timeless warfighting ideas.

As noted previously, metaheuristic algorithms are flexible. Different metaheuristics can be specified for differing problems. Additionally, when we consider past military problems (which the didactic teaching of principles of war concerns), metaheuristics can serve as an alternative method for thinking about canonical historical military problems. Algorithms are measured against benchmark problems. One can consider abstract “benchmark” military problems and more specific classes of problems. By doing so, we may shed light on the conditions affecting the usefulness of various principles of war on various problems of interest.

I will stress again that the loose notion of metaheuristics of war and the principles of war should be complementary, not an either-or. And they can be combined with methods that are more interpretive and frame-based, since you will not be able to use a metaheuristic without having the “I know it when I see it” understanding Luke referenced in the beginning of his quoted text. On a similar note, I’ll also stress that an algorithm makes up only one part of a software program’s design. A strategy or strategic concept is a larger architecture (e.g. a “strategy bridge”) that cannot simply be reduced to some narrow subcomponent — which is how the principles of war have always been understood within the context of strategic thought.

That being said… what about war in real time, the dynamic and nonlinear contest of wills that Clausewitz describes? Note the distinction between principles of war that reasonably explain a past collection of military problems or offer guidance on reasonably well-known military problems, and the conceptual ability to understand the underlying dynamics of a specific present or future military contest.

The principle of objective, unity of command, or mass will not tell you much about the context of the strategic dilemma Robert E. Lee faced as a Confederate commander, because geography, technology, ideology, state policy, the choices of neutral states, and more all structured his decisions. They are much better when applied to the general class of problem that Lee’s dilemma could be abstracted into.

This is the difference between Clausewitz’s “ideal” and “real” war. Ideal war lacks the constraints and context of real war, and real war is something more than the sum of its parts. For example, maneuverists often argued that the US should implement a German-style elastic defense to defeat the Soviets in Central Europe. But such a solution, while perhaps valid in the abstract, would not have been tolerated by European coalition partners who sought to avoid another spate of WWII-like demolition of their homelands.

Principles of war tell us very little about the Trinity’s notion of passion, reason, and chance, or the very political, economic, geographic, and technological conditions that might allow us to understand how Clausewitz’s two interacting “duel” partners move from the start of the match to conclusion. We need to think about how the duel plays out in time. And for reasons already explained metaheuristics also have some big limitations of their own with respect to dynamically evolving solution spaces.

An entirely different set of conceptual tools is needed, but that’s a problem for another post. For now, I leave you with a NetLogo implementation of Particle Swarm Optimization. Look at those NetLogo turtles go!!
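
For readers who prefer code to turtles, the NetLogo demo can be approximated with a minimal particle swarm in Python. Everything here (the inertia weight w, the pull strengths c1/c2, the swarm size, and the quadratic test landscape with its peak at (1, -2)) is an illustrative default of mine, not anything taken from the linked model:

```python
import random

def pso(quality, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm: each particle keeps a velocity and is pulled
    toward both its own best-seen position and the swarm's best-seen one."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # each particle's personal best
    pbest_q = [quality(p) for p in pos]
    gi = max(range(n), key=lambda i: pbest_q[i])
    gbest, gbest_q = pbest[gi][:], pbest_q[gi]  # the swarm's global best

    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            q = quality(pos[i])
            if q > pbest_q[i]:
                pbest[i], pbest_q[i] = pos[i][:], q
                if q > gbest_q:
                    gbest, gbest_q = pos[i][:], q
    return gbest, gbest_q

random.seed(2)
# Maximize a smooth landscape whose single peak sits at (1, -2).
best, best_q = pso(lambda p: -((p[0] - 1) ** 2 + (p[1] + 2) ** 2))
print(best, best_q)
```

Note that the swarm is just hill-climbing and random search again, socialized: the random r1/r2 terms supply exploration, while the pulls toward personal and global bests exploit the “similar solutions behave similarly” heuristic.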

Master and (Drone) Commander?

Monday, December 16th, 2013

(by Adam Elkus)

How to think about the shape of future, high-end conventional conflict? Military robotics seems to be a point of recent focus. Take Tom Ricks’ latest on the American military:

By and large, the United States still has an Industrial Age military in an Information Age world. With some exceptions, the focus is more on producing mass strength than achieving precision. Land forces, in particular, need to think less about relying on big bases and more about being able to survive in an era of persistent global surveillance. For example, what will happen when the technological advances of the past decade, such as armed drones controlled from the far side of the planet, are turned against us? A drone is little more than a flying improvised explosive device. What if terrorists find ways to send them to Washington addresses they obtain from the Internet?

Imagine a world where, in a few decades, Google (having acquired Palantir) is the world’s largest defense contractor. Would we want generals who think more like George Patton or Steve Jobs — or who offer a bit of both? How do we get them? These are the sorts of questions the Pentagon should begin addressing. If it does not, we should find leaders — civilian and in uniform — who will.

I quote (as I often do) from John Robb’s excellent analysis of drone swarms because Robb has produced one of the few classics in the emerging military literature on the future of drone warfare. Here, Robb rhapsodizes about the future drone swarm commander and his unlikely origins in the civilian (and South Korea-dominated) Starcraft game series:

Here are some of the characteristics we’ll see in the near future:

  • Swarms.  The cost and size of drones will shrink.  Nearly everyone will have access to drone tech (autopilots already cost less than $30).  Further, the software to enable drones to employ swarm behavior will improve.  So, don’t think in terms of a single drone. Think in terms of a single person controlling hundreds and thousands.
  • Intelligence.  Drones will be smarter than they are today.  The average commercial chip passed the level of insect intelligence a little less than a decade ago (which “magically” resulted in an explosion of drone/bot tech).  Chips will cross rat intelligence in 2018 or so.  Think in terms of each drone being smart enough to follow tactical instructions.
  • Dynamism.  The combination of massive swarms with individual elements being highly intelligent puts combat on an entirely new level.  It requires a warrior that can provide tactical guidance and situational awareness in real time at a level that is beyond current training paradigms.

Training Drone Bonjwas

Based on the above requirements, the best training for drones (in the air and on land) isn’t real world training, it’s tactical games (not first person shooters).  Think in terms of the classic military scifi book, “Ender’s Game” by Orson Scott Card. Of the games currently on the market, the best example of the type of expertise required is Blizzard’s StarCraft, a scifi tactical management game that has amazing multiplayer tactical balance/dynamism.  The game is extremely popular worldwide, but in South Korea, it has reached cult status.  The best players, called Bonjwas, are treated like rock stars, and for good reason:

  • Training of hand/eye/mind.  Speeds of up to 400 keyboard mouse (macro/micro) tactical commands per minute have been attained.  Think about that for a second.  That’s nearly 7 commands a second.
  • Fight multi-player combat simulations  for 10-12 hours a day.  They live the game for a decade and then burn out.   Mind vs. mind competition continuously.
  • To become a bonjwa, you have to defeat millions of opponents to reach the tournament rank, and then dominate the tournament rank for many years.  The ranking system/ladder that farms new talent is global (Korea, China, SEA, North America, and Europe), huge (millions of players), and continuous (24x7x365).

That’s the tactics—but what about the strategy? Robb calls it a “tactical management game,” which is correct. We can discern a bare shell of the “strategy” we normally discuss in the higher-level decisions concerning the composition and deployment of the force. And here we also see a different kind of strategic control at play, one that has much more to do with the Cold War science of operations research.

One important cognitive aspect of Starcraft that has been automated is advancement up the tech tree. The tech tree the player must climb in order to produce needed units, accessories, and tactics is deterministic, perhaps reflecting the real-world convergence toward a “modern” style of high-end conventional tactics. Starcraft as a game represents the purely tactical considerations of warfare as an elaborate game of rock-paper-scissors, in keeping with Clausewitz’s statement that tactics can be considered closer to science than other aspects of warfare.

It is a reflection of Starcraft‘s deterministic structure that the tech tree “build orders”, the most crucial element of Starcraft‘s mode of war, can be automated. A genetic algorithm was famously used to optimize build orders. But this is only possible because the build orders themselves optimize a very small piece of the overall problem, one made possible by the determinism baked into the game.
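For readers curious what such an optimizer looks like under the hood, here is a toy sketch in Python of a genetic algorithm evolving a “build order.” All the unit names, costs, build times, and the fitness model are invented for illustration; the real Starcraft build-order optimizer worked against the actual game’s data, not this cartoon economy:

```python
import random

# Toy data: (mineral cost, build time) per unit -- invented, not Starcraft's.
UNITS = {"worker": (50, 17), "soldier": (100, 25), "tank": (150, 30)}

def fitness(order, budget=1000, horizon=300):
    """Score a build order: total army value produced within time/resource limits."""
    time_used, spent, army = 0, 0, 0
    for unit in order:
        cost, t = UNITS[unit]
        if spent + cost > budget or time_used + t > horizon:
            break                                  # ran out of money or time
        spent += cost
        time_used += t
        army += 0 if unit == "worker" else cost    # only combat units count
        budget += 20 if unit == "worker" else 0    # workers add future income
    return army

def evolve(pop_size=50, length=12, gens=100, mut_rate=0.1):
    """Evolve build orders via truncation selection, crossover, and mutation."""
    names = list(UNITS)
    pop = [[random.choice(names) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):                # point mutation
                if random.random() < mut_rate:
                    child[i] = random.choice(names)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_order = evolve()
```

Because the toy economy is deterministic, the algorithm can exhaustively exploit it—which is precisely the point about Starcraft’s build orders.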

The use of genetic algorithms to produce build orders also, interestingly enough, mirrors the overall social, economic, and organizational structure that produces a champion Starcraft player. In the 1980s, Robert Axelrod created an algorithm tournament designed to find a best-performing strategy for the canonical “Prisoner’s Dilemma” of game theory. Using the tournament-selection mode of genetic algorithms, Axelrod iteratively weeded out “unfit” strategies until a dominant strategy was found. Perhaps the process Robb describes is quite literally a “tournament selection” that produces an optimal Starcraft player type.
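Axelrod’s setup is easy to reconstruct in miniature. The sketch below runs a round-robin iterated Prisoner’s Dilemma tournament among three textbook strategies. The payoff matrix is the standard 3/0/5/1 one; the particular strategies and the 200-round match length are my simplifications, not Axelrod’s actual entrants:

```python
# Payoffs for (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play_match(strat_a, strat_b, rounds=200):
    """Run one iterated Prisoner's Dilemma match; return both totals."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def round_robin(strategies):
    """Axelrod-style tournament: every strategy plays every other once."""
    totals = {s.__name__: 0 for s in strategies}
    for i, a in enumerate(strategies):
        for b in strategies[i + 1:]:
            sa, sb = play_match(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
    return totals

scores = round_robin([always_defect, always_cooperate, tit_for_tat])
```

With only these three entrants, unconditional defection actually tops the table; Axelrod’s famous result—tit-for-tat winning—emerged from a much larger, “nicer” population, which is itself an illustration of how the selection environment shapes the champion.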

The most important element of strategy — translating organized violence into political payoff — is mostly absent. Starcraft demands the intricate steps needed to prepare the weapon itself (build order optimization) and immaculate skill at firing it (in-game command), but not the problem of ensuring that the violence makes political sense. There is no security dilemma caused by the threat of Zergling rushes. 🙂

Because it is a videogame, Starcraft as experienced by the player is nothing close to the overall difficulty, uncertainty, and complexity implied by the in-game universe of factions, technologies, and personalities. The level of cognitive difficulty is kept on the order of something a single player can reason through. Of course, even in the “closed” world of real Cold War military science (which Starcraft eerily resembles), this has been the stuff of military staffs, RAND- and Hudson-like research groups, systems analysts, and supercomputers.

What about uncertainty and complexity? Depending on the game, the most important political-military decisions may not be up to the player. The transformative in-game decision to rebel against Arcturus Mengsk and create Raynor’s Raiders is made not by the player but by the grieving Jim Raynor. In Starcraft: Brood War and Starcraft II, player choice becomes important in structuring the flow of action. When attacking Char in Starcraft II, the player must choose to attack either the enemy’s air support or its ground elements. Both choices are presented as potentially valid depending on player preference. Many other individual choices lead to important distinctions in the shape of events. But the “basins of attraction” built into the game structure pull the player toward the same broad outcome regardless. That’s because the game universe and the creators’ demands are the overarching political-military context that determines the path of the war.

When it comes to multiplayer matches, online games in general make combat a sport. That is why we dub the Korean Starcraft aces champions. They compete in a ritualized game with clear rules and all-powerful human gamemasters who create the game itself and instantiate their ideas of an ideal combat sport in computer code. Starcraft has much more in common with Roman coliseum battles than with the Roman army on campaign in some harsh European or Middle Eastern land. Of course, all online environments have weak points that can be exploited for advantage, but Starcraft‘s limited range of behavior makes it easier for game-masters to secure than the sprawling World of Warcraft or EVE Online.

Though I have some serious misgivings about the ethical context of Ender’s Game as a novel, it also remarkably approximates the experience of game-playing in many real-time “strategy” games like Starcraft. Ender himself, whom Robb analogizes, is a virtual virtuoso who spends most of Ender’s Game unaware that the “training” simulations he is playing are actually the war he is supposedly training to fight. Hence one comes to wonder whether the real genius is entirely Ender, who supplies the cognitive firepower necessary to dominate Clausewitz’s “play of chance” on the battlefield. What about the men and women who organized and equipped the fleet? And what of the politicians and generals who decided the overall shape of the strategy that Ender executes, and infamously authorized the genocide of the “Bug” aliens Ender exterminates with weapons of mass destruction?

This isn’t a strike against Robb’s idea that Starcraft is a metaphor for one part of future warfare. Robb himself states that Starcraft is tactical management, and it is as good a vision to contend with as any other. Changes in warfare that begin at the level of tactics have strategic implications. We already know that the tactical virtuosity so essential to victory in a closed environment with well-formulated rules is often counterbalanced by the problem of making those skills serve strategic effectiveness outside that environment. What kinds of problems might arise for the hypothetical Starcraft-ish military bot commander?

The first problem to be surmounted is collective action. Multi-agent systems face coordination problems similar to those seen in human relationships. The interdisciplinary field of algorithmic game theory has arisen to study mechanism designs for solving many of these issues. Another problem lies in the conflict between the speed of tactical execution and the slower-moving demands of strategy. The Cold War stories of commanders who chose to risk annihilation rather than launch nuclear forces on faulty signals tell us that many strategic problems concern not the most efficient employment of violence but the control of military power. This question has in fact dominated most discussion about autonomous weapons.
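A canonical illustration of what mechanism design offers such coordination problems is the Vickrey (second-price) auction, where the pricing rule itself makes honesty the dominant strategy for each self-interested agent. A minimal sketch, with made-up bidder names and values:

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins but pays the second-highest bid.

    Because a bidder's payment does not depend on its own bid, truthful
    bidding is a dominant strategy -- the mechanism aligns self-interest
    with honest revelation, the core trick of mechanism design.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # pay the runner-up's bid, not your own
    return winner, price

winner, price = second_price_auction({"a": 10, "b": 7, "c": 3})
# 'a' wins but pays only 7, the second-highest bid.
```

Whether rules like this can be baked into a swarm’s task-allocation protocol fast enough to keep up with tactical tempo is exactly the speed-versus-control tension noted above.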

Lastly, the most important insight Robb’s piece gives us is that Starcraft is a social environment that produces novel behavior. It is the online wargaming medium itself, its speed and its essentially social complexity, that produces the Starcraft champion’s unique characteristics. Similarly, a certain Corsican arose from the cauldron of the “multi-player interaction” of an era caught between the emerging crest of “modern” warfare and the 18th-century military system. Dubbed the “God of War,” he became the template for every 19th-century commander to copy. The most important strategic problem implied by Robb’s post is conceptualizing the range of behaviors produced by the unique military system he sketches with Starcraft as inspiration.

Theory and Practice:

Friday, November 29th, 2013

[ by Charles Cameron — how theory works out in practice, and vice versa ]
.

Here’s the first of a pair of “patterns” of interest…

Theory contradicts practice:

— with thanks to my friend Peter van der Werff!

**

And here’s the second, a delicious serpent-bites-tail tweet re Egypt:

Practice contradicts theory:

**

Okay, there are two more items for the “pattern recognition” / “pattern language” archives…

And here’s wishing you all a Happy Black Friday — if that’s even conceivable!
