zenpundit.com » ideas

Archive for the ‘ideas’ Category

Adding to the Bookpile

Sunday, February 9th, 2014

[by Mark Safranski, a.k.a. “zen“]
  

Cultures of War: Pearl Harbor / Hiroshima / 9-11 / Iraq by John Dower 

Berlin Diary: The Journal of a Foreign Correspondent, 1934-1941 by William Shirer

Moral Combat: Good and Evil in World War II by Michael Burleigh 

Picked up a few more books for the antilibrary.

Dower is best known for his prizewinning Embracing Defeat: Japan in the Wake of World War II, which, unfortunately, I have never read. Berlin Diary I have previously skimmed through for research purposes, but I did not own a copy. Shirer’s The Rise and Fall of the Third Reich: A History of Nazi Germany was an immense bestseller which nearly everyone interested in WWII reads at some point. I would also put in a good word for Shirer’s lesser-known The Collapse of the Third Republic: An Inquiry into the Fall of France in 1940. It was a very readable introduction to the deep political schisms of France during the interwar and Vichy years, which (as I am not focused on French history) later made reading Ian Ousby’s Occupation: The Ordeal of France 1940-1944 more profitable.

I am a fan of the vigorous prose of British historian Michael Burleigh, having previously reviewed Blood and Rage: A Cultural History of Terrorism here, and can give a strong recommendation for his The Third Reich: A New History. Here Burleigh is tackling moral choices in war, and also conflict at what Colonel John Boyd termed “the moral level of war,” in a scenario containing the greatest moral extremes in human history, the Second World War.

The more I try to read, the further behind I fall!

R2P Debate Rising ( Part I.)

Friday, February 7th, 2014

I thought I would call the attention of the readership to a debate that has been ricocheting around different social media platforms on R2P (“Responsibility to Protect”). I have dealt with the topic several times in the past in relation to the ideas of Anne-Marie Slaughter, but not much recently, until Victor Allen, over at The Bridge, put up an enthusiastic post:

Strong State, Weak State: The New Sovereignty and the Responsibility to Protect

The Responsibility to Protect doctrine represents a leap forward in accountability for states and does not infringe upon their sovereignty, as states are no longer held to be completely self-contained entities with absolute power over their populations. Rather, there is a strictly defined corpus of actions that begin the R2P process – a process that has different levels of corrective action undertaken by the international community in order to persuade, cajole and finally coerce states into actively taking steps to prevent atrocities from occurring within their boundaries. That R2P does not violate sovereignty stems from the evolution of sovereignty from its Westphalian form in the mid 17th century to the “sovereignty as responsibility” concept advanced by Deng, et al. Modern sovereignty can no longer be held to give states carte blanche in their internal affairs regardless of the level of suffering going on within their borders. This does not diminish state agency for internal affairs, but rather holds them responsible and accountable for their action and inaction regarding the welfare of their populations…

Victor’s post deserves to be read in full.

I did not agree with Victor’s framing of the legal character of state sovereignty, to put it mildly, nor with his normative assessment of R2P. Mr. Allen also described R2P somewhat differently than I have seen from other advocates, but I was less concerned by that, as the concept does not seem to be presented with consistency by the community of R2P advocates and theorists. Having seen similar angels-dancing-on-pins theoretical debates over the years about 4GW, constructivism, EBO, Network-centric Warfare, OODA, Clausewitz’s remarkable trinity, nuclear deterrence, preemptive war, COIN, neoconservatism, free market economics, the agrarian origin of capitalism in England, Marxist theory, etc., I am not too worried if Victor’s interpretation in its specifics is not ideologically perfect. It is representative enough.

I responded to Allen’s post somewhat crankily and with too much brevity:

R2P: Asserting Theory is not = Law 

….As far as premises go, the first point is highly debatable; the second is formally disputed by *many* states, including Russia and China, great powers which are permanent members of the UN Security Council; and the third bears no relation to whether a military intervention is a violation of sovereignty or not. I am not a self-contained entity either; that does not mean you get to forcibly enter my house.

That R2P does not violate sovereignty stems from the evolution of sovereignty from its Westphalian form in the mid 17th century to the “sovereignty as responsibility” concept advanced by Deng, et al. Modern sovereignty can no longer be held to give states carte blanche in their internal affairs regardless of the level of suffering going on within their borders.

Academic theorists do not have the authority to override sovereign powers (!) constituted as legitimized, recognized states and to write their theories into international law – as if an international covenant like the Geneva Convention had just been contracted. Even persuading red-haired activist cronies of the American president and State Department bureaucrats to recite your arguments at White House press conferences does not make them “international law” either – it makes them “policy” – and that only of a particular administration.

This riff set off something of a reaction on Facebook in private groups and on Twitter, as Mr. Allen, who I am sure is a fine gent, has a large set of colleagues in common with me, some of whom are Boydians and all of whom are sharp strategic thinkers. Consequently, Victor’s post(s), as well as mine and a later follow-up by a “Leonidas Musashi” (great nom de guerre), made it into a high-caliber defense forum as well as other sites online. My spleen-venting provoked the following rebuttal at The Bridge:

R2P: A Spectrum of Responses 

….Safranski’s final point about sovereignty as carte blanche seems to be a stealth argument for the principles of R2P:

States always could and did take military action in self-defense when disorders in neighboring states threatened their security or spilled over their border outright. R2P seeks to minimize harm caused by disorder through early action taken prior to conflicts spilling over borders that can potentially cause larger conflagrations, but more importantly, it recognizes that atrocities can happen entirely within the confines of a state, and that the international community will not allow them to continue unchecked. This recognition is easily seen in the rhetoric and discussions regarding rebels in both Libya and Syria. Libya is admittedly a flawed example of the use of R2P, with second-order effects seen in the Russian and Chinese opposition to UN-sanctioned stabilization operations in Syria, but that concern for the population first and the state second were common facets to both bear mentioning in the debate and illustrate the shifting nature of intervention and sovereignty. This shift is exemplified in the contrast between discussions in the UN General Assembly regarding Kosovo/East Timor and Syria: “most of the 118 states that mentioned Syria at the UN General Assembly in 2012 expressed concern about the population, up from less than a third who invoked Kosovo and East Timor in 1999… It is clear that a fundamental shift has taken place regarding humanitarian intervention and that more and more states embrace the broad values expressed by R2P.” (“Democracy, Human Rights, and the Emerging Global Order: Workshop Summary,” Brookings Institution, 2012)

Again, I encourage reading the post in full.

Here in this rebuttal Victor doubled down, which I admire because that is interesting, but with which I agree even less, because he seems to be far removed from how the world really works in terms of international relations, not merely in practice but in theory as well. That said, his response deserves a much more serious reply than my first post evinced. I have been fiddling with one (I seem to be moving slowly these days), but another voice – “Leonidas Musashi” – has entered the debate at The Bridge with a sharp retort against Allen’s conception of R2P:

Responsibility to Protect: Rhetoric and Reality 

….My main observation, however, is that the discussion thus far has been focused more on a “right” to protect than a “responsibility” to do so. The arguments indicate that a state has a responsibility to protect its people but takes for granted that third parties somehow inherit this responsibility when the state cannot fulfill it. There is a missing explanation here. The need to justify such efforts may seem callous, but a nation’s highest moral order is to serve its own citizens first. Such an explanation would certainly be a legitimate demand for a mother that loses a son who volunteered to defend his nation, or for a government entrusted by its people to use their resources to their own benefit. While it is often stated that the international community “should” intervene, explanation of where this imperative comes from is not addressed other than by vague references to modern states being interconnected. But this implies, as previously stated, a right based on the self-interest of states, firmly grounded in realistic security concerns, rather than any inherent humanitarian responsibility to intervene. Instability and potential spillover may very well make it within a nation’s vital interests to intervene in another country and pursuing humanitarian and human rights goals within the borders of another state may well be in a nation’s secondary interests. But if this is the case, the calculus of the political leadership will determine if pursuing this goal is worth the cost/potential costs – as has been done in such cases as North Korea, Iran, Zimbabwe, Tibet and Syria. In either case, the decision is determined by what is in the nation’s interests, a reality that makes R2P not a mandate, but merely a post hoc justification for interventions that do occur.

Leonidas makes many good points, in my view, but the most important is the intellectual fungibility of R2P as a concept – its elastic and ever-evolving capacity to serve as a pretext for any situation at hand – because that is what makes it potentially most destabilizing and threatening to the other great powers with which the United States has to share the globe. In short, with great responsibilities come greater costs.

In Part II, I will lay out a more methodical case against the intellectual phantom that is R2P.

“For the Soldiers of the Future”

Thursday, December 26th, 2013

(by Adam Elkus)

One of my favorite television shows when I was younger was the Japanese sci-fi anime Gundam Wing. The characterization was awful, the giant robots were kind of lame, and the fights often were not all that suspenseful. However, it had a very interesting social and political universe that was far more sophisticated than your average Toonami fare. I remember one episode in particular, now that discussion has turned to the ever-topical future of war and technology.

At an Earth Sphere Alliance military base on Corsica, a special operations officer named Walker greets Gundam‘s antagonist Zechs. Zechs has come to inspect an old prototype mobile suit that Zechs and Walker both believe holds the key to understanding the terrifying new and poorly understood Gundam mobile suits that have been annihilating Alliance bases left and right. The base’s foolish commander, having been forced to cease production of mobile suits due to a terrorist attack on the facility, stages a large display of force with base units to demonstrate that he is in control. The implied purpose is to grandstand to the special operations group that Zechs and Walker belong to, demonstrating that the regular army can hold the base without the help of the “specials.”

At one point, Walker asks Zechs to take the prototype suit from the base with him. Zechs, knowing that the Gundam will likely attack, asks Walker if he is going to die for him. Walker responds that he is following Zechs’ example and fighting for the soldiers of the future. Sure enough, a Gundam does arrive and Walker and his special operations unit suicidally fight to allow the base commander and Zechs to escape. Walker, in commanding his men to fight on despite the certainty of destruction, quite literally casts it as a struggle for the soldiers of the future. The combat data that the fight will produce will help the military fight the Gundams later. And Walker also wants Zechs and the prototype to escape for similar reasons. Zechs himself sorrowfully departs, knowing that he has effectively doomed Walker.

When thinking about World War I, I often see a lot of Walkers. Many of the era’s military theorists, soldiers, and technologists could see nearly all of the challenges of future warfare stemming from C3I, logistics, campaign design, and tactics. Walker most reminds me of Ardant du Picq, both in his interest in the future of war and in his untimely end. The problem all of the prewar era’s military theorists faced was that they were caught between something very old and familiar and something new and terrifying — much like the juxtaposition of the proto-Gundam Zechs inspects and the actual Gundam that kills Walker and his team (thus generating combat data). The familiar is tangible; the future is patchy and a black box. Still, that isn’t exactly why WWI was such a slaughterhouse.

An interesting contrast to Gundam is seen in another anime I watched recently, Night Raid 1931. Set in the 1930s, the anime features as its antagonist a supernaturally empowered Imperial Japanese Army officer who foresees World War II and the use of the atomic bomb. Prophecy is a very big theme throughout Night Raid — an oracle-like woman is used by the innermost echelons of the Japanese government and military to make decisions about war and peace. There is something fitting about the idea that the prime source of information for decision is an esoteric and religiously based strategic forecaster.

The antagonist, afraid of the consequences of world warfare, attempts to enlist the peoples of Southeast Asia in revolt against both Japan and the colonial powers to produce a new order. He takes drastic measures to create his own prototype atomic weapon — which he plans to utilize on Shanghai in order to force the world powers (all of whom have settlements there) to take actions that will demonstrate the deterrent power of his new weapon. He is foiled, but the protagonists all understand that they have only postponed the inevitable.

The perspective in Night Raid is one in which the future is deterministic — even if it cannot be predicted completely. The initial conditions are clear — some sequence of events is on the horizon, ending in the usage of the atomic bomb. The antagonist only can glimpse a very hazy outline of this vision, and he tries and fails to prevent it. Undoubtedly the fact that he tried and failed influenced the outcome somewhat — but the anime implies WWII happens anyway (and the bomb presumably does as well).

The deterministic perspective in Night Raid is contrasted with Gundam 00, in which a Hari Seldon-like figure creates an organization to carry out a 200-year plan designed to result in a desired future, along with a massively powerful biological artificial intelligence agent to help plan and direct the process through the centuries. However, after he puts himself into suspended animation to wait out the future, the components of his organization begin to develop different ideas about it. Factions develop and feud, and 200 years later the desired future is very much in doubt.

Though the good guys win in the end (it’s TV), it is by no means implied that the initial conditions were sufficient to produce a deterministic outcome. The end outcome is an emergent product of contingent decisions by all of the anime’s political, military, and economic entities, as well as of the specific decisions and personalities of the main characters. In fact, there are many points in the anime at which complete derailment of the desired future is very plausible. The fact that the end leads to the heroes’ triumph doesn’t necessarily say much about the probability of its occurring. The story tries to present it as such, but this can be dismissed as a narrative contrivance designed to impose a comfortable sense of signal on the noise.

The question of what the future holds for war depends in part on how you view the nature of social systems. The key idea of Night Raid is a teleological climb to some higher mountain. Exactly how high no one really knows, but by the end of the anime they are sure that there is some peak much higher that they will ascend to. In contrast, Gundam 00 seems to imply that there are micro interactions that produce fleeting intermediate structures. Furthermore, the interaction between micro and intermediate levels produces a macroscopic outcome that then affects the micro level again.

The challenge is always to avoid the Black Swan problem. It is easy to impose a spurious coherence on past events that you believe gives them teleological order. Much of what Lynn Rees talks about is the problem of imposing such coherence with fuzzy and value-laden ideas about strategy. But as some commenters have noted in the legibility thread, legibility is at heart any process that we use to try to force the world to fit our own mental models. Every time we write history, we inherently distort reality into a soda straw view because no history can capture the complexity of the world as it once was. It is often ironic to see humanities thinkers make this very criticism about mathematical modeling and statistics, when if anything the process of imposing conceptual order on the past is far more fraught with peril than building a clearly specified computer model.

With this in mind, we can see another interesting distinction in the various anime series surveyed in this post. In Night Raid 1931, the antagonist attempts to force the future to fit his own mental model, and fails miserably. The deterministic nature of events is implied by his failure to get the anti-colonial groups to trust him and cooperate — something that could only happen after World War II. However, in Gundam 00 the very act of changing the future also imperils that future — the creation of a large organization to carry out the Foundation-esque dream inevitably splits into factions and personalities that try to twist the plan to fit their own ends.

To return to the Walker-WWI parallel from the beginning — what I’m coming to believe about WWI is that the greatest risk is not failing to see the future clearly, or failing to collect the right data. It is that we do not give enough reflective thought to how our anticipations of the future also change it. The preparations of the various powers for a war they knew would require large armies, mobilization networks, and speed famously complicated prewar diplomacy. And preparations for the Cold War turning “hot,” and the scientific and technical spawn they generated, in turn created the roots of American dominance and of today’s profitable technological industries.

Much discussion about future war involves banning or regulating technologies, taking steps to ensure that X or Y capability is preserved or scrapped, etc. But that focus renders invisible the problems involved in trying to force the future to be legible, as well as the interesting lack of reflexivity in combining prediction of the future with attempts to alter it.

Legibility at War

Monday, December 23rd, 2013

(by Adam Elkus)

Apropos of a conversation I had with infosec provocateur The Grugq last night and a previous conversation with Nick Prime, a short comment on this piece on US covert aid to Colombia (mostly quotations from others):

Only then would Colombian ground forces arrive to round up prisoners, collecting the dead, as well as cellphones, computers and hard drives. The CIA also spent three years training Colombian close air support teams on using lasers to clandestinely guide pilots and laser-guided smart bombs to their targets. Most every operation relied heavily on NSA signal intercepts, which fed intelligence to troops on the ground or pilots before and during an operation. “Intercepts … were a game changer,” said Scoggins, of U.S. Southern Command. The round-the-clock nature of the NSA’s work was captured in a secret State Department cable released by WikiLeaks.

In the spring of 2009, the target was drug trafficker Daniel Rendon Herrera, known as Don Mario, then Colombia’s most wanted man and responsible for 3,000 assassinations over an 18-month period. “For seven days, using signal and human intelligence,” NSA assets “worked day and night” to reposition 250 U.S.-trained and equipped airborne commandos near Herrera as he tried to flee, according to an April 2009 cable and a senior government official who confirmed the NSA’s role in the mission.

The piece mainly focuses on the use of intelligence, precision weapons, and targeting to kill off key FARC leaders, even if the quoted paragraph talks about a drug lord. I included it because it was the most succinct summary of the methodologies used. What the piece really shows is the exporting of “industrial” counterterrorism and counter-guerrilla targeting methods pioneered in Iraq by special operations forces. These methods differ from older ones in Vietnam in their speed and technological sophistication, and they differ from the “Killing Pablo” mission in the sheer scale of the problem (a resilient insurgent group, not just a drug kingpin). And it is based in large part on metadata, as Jack McDonald argued:

For me, the importance of Prism, and like efforts, isn’t the question of government invasions of privacy, but rather the ability of the government to use violence against a population. Regardless of the strategic end, analysis of metadata allowed the American government to pull apart Baghdad bomb networks in a way that would have been far more difficult without it, if not impossible. If a couple of thousand special forces soldiers could do that in a foreign country, think what the same capability could do in a domestic context. This capability, I think, is what re-writes the social contract in favour of the government. The reason for this is that it alters the latent balance of violence between the state and the population. I think, however, that this takes place alongside another changing relationship, which is the balance of violence between individuals and the population. In the security/liberty debate we tend to focus on the former, sometimes forgetting the latter. We don’t like big states because they can oppress us, but at the same time, these days, individuals can do that, too.

The NSA caper is one outgrowth of the increasing legibility of social systems (in one respect) that the rise in graph analysis technologies, databases, and improved intelligence collection techniques brings. Here’s a bit on legibility, with Venkatesh Rao riffing off James C. Scott’s Seeing Like A State: 

The book is about the 2-3 century long process by which modern states reorganized the societies they governed, to make them more legible to the apparatus of governance. The state is not actually interested in the rich functional structure and complex behavior of the very organic entities that it governs (and indeed, is part of, rather than “above”). It merely views them as resources that must be organized in order to yield optimal returns according to a centralized, narrow, and strictly utilitarian logic. The attempt to maximize returns need not arise from the grasping greed of a predatory state. In fact, the dynamic is most often driven by a genuine desire to improve the lot of the people, on the part of governments with a popular, left-of-center mandate. Hence the subtitle (don’t jump to the conclusion that this is a simplistic anti-big-government conservative/libertarian view though; this failure mode is ideology-neutral, since it arises from a flawed pattern of reasoning rather than values).

The book begins with an early example, “scientific” forestry (illustrated in the picture above). The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor’s eyes, and this gave birth to “scientific” forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes — better recognized these days as the problems of monoculture — were inevitable.

The picture is not an exception, and the word “legibility” is not a metaphor; the actual visual/textual sense of the word (as in “readability”) is what is meant. The book is full of thought-provoking pictures like this: farmland neatly divided up into squares versus farmland that is confusing to the eye, but conforms to the constraints of local topography, soil quality, and hydrological patterns; rational and unlivable grid-cities like Brasilia, versus chaotic and alive cities like Sao Paolo. This might explain, by the way, why I resonated so strongly with the book.  The name “ribbonfarm” is inspired by the history of the geography of Detroit and its roots in “ribbon farms” (see my About page and the historic picture of Detroit ribbon farms below).

 

Metaheuristics of War

Monday, December 23rd, 2013

(by Adam Elkus)

I have been thinking about the problem of the “principles of war,” and various military authors’ differing takes on the viability of the concept. This is perhaps the best way to respond to the thought-provoking post Lynn Rees fashioned out of fragments of our gchat conversations.

Principles of war remain part of military manuals the world over, despite the fact that historical work has exposed substantial variance in their content. Principles of war evolve over time. John Alger’s work in particular is very interesting on this question. The basic pattern, as one reviewer of Alger’s book argued, was a canonization of Napoleonic principles followed by a grafting of mid-century combined-arms warfare onto that canon. However, this element of relative consensus proved to be short-lived.

There has been widespread debate over whether “principles of war” are still valid for the so-called information age or irregular warfare. Military theorist William F. Owen‘s praise for Robert Leonhard’s late 1990s information-age update of the principles caused me to read it in college, and I found it very enlightening if overly optimistic about Transformation-era technologies. The principles of war are also being perpetually re-defined in countless books, articles, military college student monographs, and PowerPoint slides.

The way principles of war became proxies for principles of Napoleonic warfare leads us to question whether there can be principles of war that generalize. If we take Clausewitz’s injunction about politics seriously, then we realize that while war may have an underlying logic, everything else will vary. Hence the problem with Basil Liddell Hart’s book Strategy – it tortures every historical example so thoroughly that it ends up supporting the indirect approach. A recent criticism of John Boyd elucidated this point as well. Boyd indulges in the conceptual equivalent of German attrition strategy at Verdun to force military history to conform to his PowerPoint magnum opus. Not surprisingly, Boyd inflicts grave losses on his opponent but is unable to extract much strategic advantage relative to his own costs.

To seek time-invariant principles of war risks indulging in a John Yoo approach to military history. Indeed, books like Liddell Hart’s own Great Captains Unveiled waterboard great military personages like Subotai and de Saxe until they cry “I’ll talk! I’ll talk! I won because I used the indirect approach! Just make the pain stop!” Torture is immoral and ineffectual in public policy, so why apply it to military history?

So what to do? One solution is to try to boil the principles of war down to pithy nubs, stripped of unnecessary detail, that express timeless truths about the “best practices” of warfighting — and to build a doctrinal scaffolding around them. This would prune even the highly abstract principles of war seen in doctrine down to more defensible levels of abstraction. But the idea suffers from several problems.

First, we dramatically overstate our current ability to tell what is “timeless.” That is the core of Rees’ recent entry – we are far more confused than we believe. And if the current, aphoristic principles of war were enough, would we see such a frenzy to re-define the terminology? It strikes me that what defense professionals often seek is a way to take principles down from the 747 jet flight level to the granular world of practice. As a result, they often turn to vulgar novelty over tradition when they are really searching for a process that might help them navigate the mismatch between supposed timeless principles and the actual problems they face.

Traditionalists (often correctly) believe this desire for novelty stems from fads, pressure to conform to political or bureaucratic directives, and personal empire-building. But in the last 12 years there has been a sincere outpouring of angst from soldiers, intelligence analysts, and civilian policy analysts in the government sector who find that principles-of-war aphorisms are not enough. One might not agree with Emile Simpson’s contentious take on war and politics, but he wrote the book because so-called timeless truths obviously did not help Simpson do his military job in Afghanistan. And I have often seen Mark Safranski argue here over the years that the concept of Fourth Generation Warfare was necessary as a forcing mechanism to get the US military to adapt to the challenges it faced in Iraq and Afghanistan.

It is tempting to respond to this by saying “they need to read ___ old strategy master  I like and study military history in the subjective way I like until they can understand strategy.” But this is a recipe for indoctrination since “understanding” = agreeing with old strategy master + the aforementioned fuzzy and didactic approach to extracting timeless or eternal ideas from military history. Instead, we might introduce metaheuristics of war as a complementary concept to the principles of war:

Metaheuristics is a rather unfortunate term often used to describe a major subfield, indeed the primary subfield, of stochastic optimization. Stochastic optimization is the general class of algorithms and techniques which employ some degree of randomness to find optimal (or as optimal as possible) solutions to hard problems. Metaheuristics are the most general of these kinds of algorithms, and are applied to a very wide range of problems.

What kinds of problems? In Jacobellis v. Ohio (1964, regarding obscenity), the United States Supreme Court Justice Potter Stewart famously wrote,

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

Metaheuristics are applied to I know it when I see it problems. They’re algorithms used to find answers to problems when you have very little to help you: you don’t know what the optimal solution looks like, you don’t know how to go about finding it in a principled way, you have very little heuristic information to go on, and brute-force search is out of the question because the space is too large. But if you’re given a candidate solution to your problem, you can test it and assess how good it is. That is, you know a good one when you see it.

Sean Luke, Essentials of Metaheuristics, (self-published lecture notes), 2013, 7.

Metaheuristics are not heuristics of heuristics, as Luke notes in a parenthetical comment. Rather, they are algorithms that select useful solutions for problems under the difficult conditions Luke specifies in the above quote. Let’s see an example:

For example: imagine if you’re trying to find an optimal set of robot behaviors for a soccer goalie robot. You have a simulator for the robot and can test any given robot behavior set and assign it a quality (you know a good one when you see it). And you’ve come up with a definition for what robot behavior sets look like in general. But you have no idea what the optimal behavior set is, nor even how to go about finding it.

The simplest thing you could do in this situation is Random Search: just try random behavior sets as long as you have time, and return the best one you discovered. But before you give up and start doing random search, consider the following alternative, known as Hill-Climbing. Start with a random behavior set. Then make a small, random modification to it and try the new version. If the new version is better, throw the old one away. Else throw the new version away. Now make another small, random modification to your current version (whichever one you didn’t throw away). If this newest version is better, throw away your current version, else throw away the newest version. Repeat as long as you can.

Hill-climbing is a simple metaheuristic algorithm. It exploits a heuristic belief about your space of candidate solutions which is usually true for many problems: that similar solutions tend to behave similarly (and tend to have similar quality), so small modifications will generally result in small, well-behaved changes in quality, allowing us to “climb the hill” of quality up to good solutions. This heuristic belief is one of the central defining features of metaheuristics: indeed, nearly all metaheuristics are essentially elaborate combinations of hill-climbing and random search.
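As a concrete illustration, here is a minimal Python sketch of the two strategies Luke describes, with a toy one-dimensional quality function standing in for his robot-soccer simulator (the function and all names are my own, purely for illustration):

```python
import random

def quality(x):
    # Toy stand-in for Luke's robot simulator: a single peak at x = 3.
    return -(x - 3.0) ** 2

def random_search(trials=10_000, low=-10.0, high=10.0):
    """Try random candidates as long as you have time; return the best found."""
    best = random.uniform(low, high)
    for _ in range(trials):
        candidate = random.uniform(low, high)
        if quality(candidate) > quality(best):
            best = candidate
    return best

def hill_climb(start, steps=10_000, step_size=0.1):
    """Make small random modifications, keeping whichever version is better."""
    current = start
    for _ in range(steps):
        candidate = current + random.uniform(-step_size, step_size)
        if quality(candidate) > quality(current):
            current = candidate  # throw the old version away
    return current

random.seed(42)
print(round(random_search(), 2))         # lands near the peak at 3.0
print(round(hill_climb(start=-8.0), 2))  # climbs toward the peak at 3.0
```

On a smooth single-peak landscape like this one, both methods find the answer; the differences only bite on harder landscapes.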

One must use caution. When Clausewitz uses a metaphor, he does so because it helps us understand some dimension of the problem being discussed — not because a Center of Gravity in war maps exactly onto the meaning of the Center of Gravity in physics. Boyd does not make this distinction, and thus is vulnerable to criticisms from those who accurately point out that his interpretations of scientific concepts do not match their original usage. The level of abstraction I am discussing in this post must be qualified in this respect, as I hope to avoid repeating Boyd’s mistake.

However, the following aspects of metaheuristics are still appealing in abstract. In many real-world problems, we do not know what an optimal solution looks like. We don’t know how to find it. We have a nub of information we can use, but not much more. Most importantly, the space of possible solutions is too large for us to just use brute force search for an answer:

A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard, and, for each arrangement, check whether each (queen) piece can attack any other.
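The scale of that brute-force space is easy to quantify with standard-library combinatorics (the counts below are standard eight-queens figures, not from the quoted source):

```python
from math import comb, factorial

# All ways to place 8 identical queens on 64 squares, per the naive approach:
naive = comb(64, 8)
print(f"{naive:,}")  # 4,426,165,368 arrangements to check

# Restricting to one queen per column shrinks the space dramatically,
# and one per column *and* per row shrinks it to 8! candidates:
per_column = 8 ** 8          # 16,777,216
permutations = factorial(8)  # 40,320
print(per_column, permutations)
```

Even the naive four-billion-board space is tractable for a modern computer; the point is that real problem spaces grow far faster than this, which is what pushes us toward metaheuristics.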

While hill-climbing and random search are inherent in most metaheuristics, there are different types of metaheuristic algorithms for different problems, with varying performance in climbing the “hill of quality.” Hence the approach is customizable and recognizes that methods vary in performance. Some methods will perform well on some problems, but will get stuck at a local optimum instead of the global peak when faced with others.
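The stuck-at-a-local-optimum failure mode shows up even on a toy two-peak landscape (the function and numbers are my own illustration): a plain hill-climber that starts on the lower slope never crosses the valley to the taller peak.

```python
import math
import random

def quality(x):
    # Two peaks: a local one near x = 0 and a taller global one near x = 5.
    return math.exp(-x ** 2) + 2 * math.exp(-(x - 5) ** 2)

def hill_climb(start, steps=20_000, step_size=0.05):
    """Accept only improving moves, so the climber can never descend into a valley."""
    current = start
    for _ in range(steps):
        candidate = current + random.uniform(-step_size, step_size)
        if quality(candidate) > quality(current):
            current = candidate
    return current

random.seed(0)
print(round(hill_climb(-1.0), 1))  # stuck near the local peak at 0
print(round(hill_climb(4.0), 1))   # reaches the global peak near 5
```

Where you start determines which peak you find — which is why many metaheuristics mix in restarts or occasional random jumps.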

One gigantic caveat: the idea of peaks and valleys in the solution space derives from the assumption of a static, not dynamically evolving, landscape of candidate solutions. Perfect examples of this static assumption are the application of the Ant Colony Optimization method to the notoriously hard Traveling Salesman Problem and the use of genetic algorithms to optimize the Starcraft tech tree’s build orders. When the solution space you are searching and climbing evolves in time, algorithms that assume a static landscape run into problems.

However, this is also why (in more mathematically dense language) nailing down principles of war is so perilous. A solution that you might have used a principle of war to reach is fine at time T. But it loses validity as we shift to T+1 and tick upwards toward T+k. And should you use a principle that better fits war’s grammar in 1830 than 2013, then you are even more screwed.
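This time-decay can be sketched in a few lines (the drifting-peak function is my own illustration): a solution that is optimal at time T bleeds quality as the landscape shifts toward T+k.

```python
def quality(x, t):
    # A peak that drifts with time: at time t, the optimal solution is x = t.
    return -(x - t) ** 2

solution = 0.0  # optimal at t = 0, i.e. time T
for t in range(6):
    # Quality falls as 0, -1, -4, -9, ... while the fixed solution goes stale.
    print(t, quality(solution, t))
```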

The advantage of metaheuristics of war compared to principles of war is that, while both consider solutions to problems with discrete (not continuously shifting) solution landscapes, metaheuristics are about how you find solutions. Hill-climbing is an (oversimplified) method of moving through solutions that exploits heuristic information, and random search is (also oversimplified) “try and see what happens.” The process of a metaheuristic involves a combination of both.

In contrast, principles of war are not really a process as much as a set of general guidelines designed to dramatically and a priori shrink the possible space of solutions to be considered in ways far more sweeping than hill-climbing. They imply a very, very restricted set of solutions while still being too vague to help a practitioner think about how the solutions fit the problem. Principles of war generally say to the practitioner, “generally, you do ___ but how you apply this is up to your specific situation and needs.” They offer a broad set of do’s and don’ts that — by definition — foreclose consideration of possible solutions when those solutions conflict with a given principle.

Yes, they are suggestions rather than hard rules, but the burden of proof falls on the principle-violating solution, not on the principle of war. It may be that many problems require flagrantly violating a given principle. The Royal Navy’s distribution of its forces to deal with the strategic problem posed by early 20th-century imperial geopolitics potentially ran afoul of several principles of war, but it still worked. Finally, many principles of war as incorporated in military instruction are shaped more by cultural bias than by timeless warfighting ideas.

As noted previously, metaheuristic algorithms are flexible. Different metaheuristics can be specified for different problems. Additionally, when we consider past military problems (which the didactic teaching of principles of war concerns), metaheuristics can serve as an alternative method for thinking about canonical historical military problems. Algorithms are measured against benchmark problems; one can likewise consider abstract “benchmark” military problems and more specific classes of problems. By doing so, we may shed light on the conditions that determine how useful various principles of war are on various problems of interest.

I will stress again that the loose notion of metaheuristics of war and the principles of war should be complementary, not an either-or. And they can be combined with methods that are more interpretive and frame-based, since you will not be able to use a metaheuristic without having the “I know it when I see it” understanding Luke referenced at the beginning of his quoted text. On a similar note, I’ll also stress that an algorithm makes up only one part of a software program’s design pattern. A strategy or strategic concept is a larger architecture (e.g. a “strategy bridge“) that cannot simply be reduced to some narrow subcomponent — which is how the principles of war have always been understood within the context of strategic thought.

That being said… what about war in real time, the dynamic and nonlinear contest of wills that Clausewitz describes? Note the distinction between principles of war that reasonably explain a past collection of military problems (or offer guidance for understanding reasonably well-known military problems) and the conceptual ability to understand the underlying dynamics of a specific present or future military contest.

The principles of objective, unity of command, or mass will not tell you much about the context of the strategic dilemma Robert E. Lee faced as a Confederate commander, because geography, technology, ideology, state policy, the choices of neutral states, and more all structured his decisions. They are much better when applied to the general class of problem into which Lee’s dilemma could be abstracted.

This is the difference between Clausewitz’s “ideal” and “real” war. Ideal war lacks the constraints and context of real war, and real war is something more than the sum of its parts. For example, maneuverists often argued that the US should implement a German-style elastic defense to defeat the Soviets in Central Europe. But such a solution, while perhaps valid in the abstract, would not have been tolerated by European coalition partners that sought to avoid another spate of WWII-like demolition of their homelands.

Principles of war tell us very little about the Trinity’s notion of passion, reason, and chance, or the very political, economic, geographic, and technological conditions that might allow us to understand how Clausewitz’s two interacting “duel” partners move from the start of the match to its conclusion. We need to think about how the duel plays out in time. And for reasons already explained, metaheuristics also have some big limitations of their own with respect to dynamically evolving solution spaces.

An entirely different set of conceptual tools is needed, but that’s a problem for another post. For now, I leave you with a NetLogo implementation of Particle Swarm Optimization. Look at those NetLogo turtles go!!

