
Archive for the ‘2013’ Category

A Shakespearean concept? Merry Merry!

Wednesday, December 25th, 2013

[ by Charles Cameron — to all our readers, with season's greetings from all of us at Zenpundit ]
.

The great French director Jean Renoir decided to cast himself as Octave in his film The Rules of the Game, widely considered to be among the greatest works in that medium. The effect was liberating — Renoir himself, playing Octave, has a greater knowledge of the director’s wishes than any other actor in that remarkable film, and this gives him a joyous freedom and spontaneity that delights us, his audience.

Jean Renoir, left, as Octave, in his film Rules of the Game

And all the world’s a stage, eh? with every film a play within the greater Play?

Nativity scene, from Jean Renoir's film, Grand Illusion

Is it too much to suppose that a director of Worlds, having seen Rules of the Game, might decide to try the same trick — setting aside the director’s chair to play the role of a small child, born homeless in some obscure corner of a minor galaxy?

**

Wishing all at Zenpundit bright Zen, decent punditry, and a merry Christmas…

America’s Defense Amnesia

Friday, November 1st, 2013

(by Adam Elkus)

Over at The National Interest, Paul Pillar diagnoses America with an “amnesia” about intelligence. The US, like Guy Pearce’s amnesiac character in Memento, does not perceive that it is caught in a larger oscillating cycle:

Attitudes of the American public and elected officials toward intelligence go in cycles. There is an oscillation between two types of perceived crisis. One type is the “intelligence failure,” in which things happen in the world followed by recriminations about how intelligence agencies should have done a better job of predicting or warning of the happening. The recriminations are customarily accompanied by “reform,” or talk of it, which chiefly means finding ways to do things differently from what was done before—not necessarily better, just different. Usually there also are accusations of malfeasance by individuals, even though there is an inherent tension between attributing failure to unreformed institutions and attributing it to individuals who screwed up. Often the response also involves additional empowerment of institutions, in the form of added resources or added authorities.

The other type of crisis involves seeing institutions as too empowered, with the response being to place additional restrictions on them. For U.S. intelligence agencies one of the most conspicuous examples of this phase of the cycle was in the 1970s, with some of the agencies in question already suspect as the nation came out of the Vietnam and Watergate eras, and with the principal response being to erect Congressional and legal checks that are still in place today. Now we are seeing in a somewhat milder form the corresponding phase of another cycle, as the nation comes out of more than a decade of recovery from the 9/11 terrorist attacks, which stimulated the most recent burst of empowerment. There is new talk about reducing the powers and scope of activity of agencies and adding more checks and restraints.

Pillar goes on to explain that the nature of intelligence offers no easy guidance on how allied intelligence targets fit into the larger geostrategic factors shaping what policymakers want from the intelligence community. It is a great read from a man who is both a veteran of the intelligence world and a consistent critic of US foreign policy and security. However, I'd like to expand Pillar's metaphor of "amnesia" beyond the intelligence world. We really have defense and national security amnesia.

After the 2003 invasion of Iraq, it was common to hear arguments that force-on-force, firepower-centric conventional warfare could not cope with the challenges of a "global counterinsurgency." Indeed, some argued that the previous high-tech military ideas were not only out of step with the nature of the challenge but had almost lost the war altogether. Both manpower-heavy and manpower-light counterinsurgency campaigns were proposed. The Surge is still seen today in many quarters as the closest thing America has to a recent military triumph. As Antulio Echevarria noted, critics of conventional warfare argued that opponents had adapted around America's strategic advantages, but it was less clear that there was any causal relationship.

Circa 2007-2009, however, large-scale occupations in the Muslim world began to go out of style. Critics began to clamor for a light-footprint approach built around counterterrorism strike forces and standoff firepower. A presidential candidate promised to hit al-Qaeda hard with flexible counterterrorism forces that, he and his staff argued, would reduce the terrorist threat steadily growing in safe havens. The zeitgeist began to turn towards a culture of raiding, characterized by some of the very same assumptions about light and lethal forces that had been so widely criticized prior to the counterinsurgency era. Manpower-intensive occupations were out; intensive counterterrorism in the dark was in. Instead of stabilizing failed states, America would use a combination of intelligence, special operations, and statecraft to marginalize and undermine al-Qaeda.

The age of "dirty war" became a lightning rod for criticism. But one of the most trenchant criticisms was that an obsession with tactical counterterrorism intelligence was harming America's intelligence agencies' traditional specialties in strategic intelligence and counterintelligence. The line between military and intelligence was being "blurred." The larger cost? Focusing so much on short-term, tangible, and easily justifiable counterterrorism intel requirements blinded America to the larger picture that it needed to see. As a result, it would be perpetually surprised by events like the Arab Spring.

In light of today's furor over spying on allies, it is worth examining how this line of argument cast the difference between strategic intelligence and strike intelligence as a military-industrial complex analog of the classic dichotomy between basic and applied scientific research. Basic scientific research is often difficult to justify in the short term, and frequently does not result in immediate payoff. But none of today's scientific discoveries would have been possible without it. Hence, as Pillar noted in his essay, in retrospect it is easy to see "failures of intelligence" in areas where ambiguity regarding the purpose of intelligence, targets, and immediate payoff motivated hesitation. Ironically, as Dan Trombly tweeted, most of the intelligence community's "counterterrorism obsession" critics were silent (with the notable exception of Joshua Foust) when evidence accrued that foreign spying was conducted for non-counterterrorism purposes.

Returning to Pillar's opening metaphor, it seems that the American defense and foreign policy community is suffering from a collective case of amnesia. A call for counterterrorism, light footprints, and intelligence leads to an intelligence architecture that supports a raiding posture, and is then promptly and widely criticized for focusing so intensely on counterterrorism. A call for counterinsurgency results in substantial investment in counterinsurgency capabilities, and is then promptly and widely criticized for its time and expense.

My analysis is undeniably unfair in some ways. First, the aggregated commentary of the DC defense commentariat, as presented here, smooths out meaningful differences, nuances, caveats, and variations. It was not as simple as I make it out to be, but the consensus of a community is not easily described in a single paragraph. Second, each idea also produced data that was (fairly or unfairly) evaluated. Counterinsurgency theory looked very appealing to many analysts in 2006 but was pronounced dead by war-weary Americans in 2011. Compared to the Iraqi and Afghan quagmires, drones and special ops seemed compelling. But as the wars drew down and more press attention focused on the ramped-up counterterrorism campaigns, analysts began to have substantial misgivings.

That said, the problem is that while the world certainly changes fast, it has not changed fast enough to justify the kind of analytical mood swings that have frequently occurred since the beginning of the COIN era. Anyone who took the last 12 years of national security commentary as gospel would believe that some seismic, worldview-invalidating event occurred every one to three years, necessitating a wholesale rejection of the policy the previous worldview-invalidating event had spawned. Events have complicated and qualified—but not wholly invalidated—the merits and demerits of COIN, special operations and counterterrorism, and strategic intelligence (which includes spying on allies). While all of the arguments I've summarized here contradict each other, I can't say with confidence that any of them are completely wrong.

The problem with America's defense amnesia is not "be careful what you wish for." No one can know exactly how their policy preference will work out. It is not even "remember what you wish for." Rather, the lesson is to keep in mind that however fast events may move, there are larger, systemic factors and tradeoffs that stimulate day-to-day policy problems. These systemic factors change very slowly and remain fairly consistent across administrations. The reason we cannot comfortably dismiss any of the varying defense memes I've cataloged is that each dealt with a segment of a larger problem.

Being conscious of the unchanging challenges of American national security, from the difficulties of maintaining local outposts of American hegemony to the way America's national position produces incentives for perpetual war, has important intellectual benefits. We can resist calls for dramatic course corrections driven by the hysteria of the moment and keep the longer term in mind. And we gain an appreciation for what has changed and what remains the same. A wider view tells us that war is not more complex than it used to be, that the calculus of strategic intelligence is not simple, and that there are costs to both counterinsurgency and standoff counterterrorism that must be weighed.

Moreover, we gain a greater respect for the policymakers who must deal with the day-to-day manifestations of deeper, systemic problems instead of behaving (as even I sometimes do) as though we have cracked some secret code unavailable to the idiots in Washington. There is some truth behind the disdainful phrase "good enough for government work." But if the national security and foreign policy problems that government tackles were as obvious or linear as today's criticism often implies, would our policy demands oscillate as wildly as Pillar alleges? It seems that unless we start tattooing relevant names, events, and information on our bodies (as Pearce's character in Memento does to help himself remember), we won't remember enough to answer that question. Such is the life of an amnesiac.

The Automatic State?

Tuesday, October 29th, 2013

(by Adam Elkus. I will be guest posting occasionally at Zenpundit. I am a PhD student in Computational Social Science at George Mason University, and a blogger at The Fair Jilt, CTOVision, Analyst One, and my own blog, Rethinking Security. I write a column for War on the Rocks, and I was once a blogger at Abu Muqawama. You can follow me on Twitter here.)

I've been looking at some recurring themes regarding technocracy, control, elites, and governance in debates surrounding the politics of algorithms, drone warfare, the Affordable Care Act, and big data's implications for surveillance and privacy. Strangely enough, I thought of James Burnham.

Paleoconservative writer Burnham's scribblings about the dawn of a "managerial revolution" gave rise to conservative fears about a "managerial state," governed by a technocratic elite that uses bureaucracy for social engineering. In Burnham's original vision (which predicted that capitalism would be replaced not by socialism but by a managerial society), the dominant elites were "managers" who controlled the means of production. But other conservative political thinkers later expanded this concept to refer to an abstract class of technocratic elites who ruled a large, bureaucratic system.

Burnham had a different vision of dystopia than George Orwell, who envisioned a rigid tyranny held together by regimentation, discipline, pervasive surveillance, and propaganda. Rather, the managerial state was an entity that structured choice. The power Burnham and others envisioned issued from control of the most important industrial production mechanisms and from the modern state's bureaucratic capacity to subtly engineer cultural and political outcomes. Building on Burnham and those he influenced, one potential information-age extension of the "managerial" theory is the idea of the "automatic state."

"Automatic state" is a loose term collecting various isolated ideas about a polity in which important regulatory powers are exercised by computational agents of varying intelligence. These ideas set aside the industrial-era horror of a High Modernist apocalypse of regimentation, division of labor, social engineering, and pervasive surveillance. The underlying architecture of the automatic state, though, is a product of specific political and cultural assumptions that influence its design. Though assumed to be neutral, the system automatically, continuously, and pervasively implements regulations and decision rules that seek to shape, guide, and otherwise manipulate social behavior.

Indeed, a recurring theme in some important political and social debates underway is that changes in technology allow a small group of technocrats to control society by structuring choices. The data signatures all individuals generate and the online networks they participate in are a source of power for both the corporate and government worlds. The biases of algorithms are a topic of growing interest. Some explicitly link the unprecedented ability to collect, analyze, and exploit data with enhanced forms of violence. Others argue that the ability to record and track large masses of data will prop up authoritarian governments. Activists regard the drone itself — and the specter of autonomous weapons — as a robotic symbol of imperialism.

While an automatic state may be plausible elsewhere, the top-down implications of Burnham's technocracy do not fit America particularly well. Some of the most prominent users of the relevant automatic-state technologies are corporations. While cognitive delegation to some kind of machine intelligence can be seen in everything from machine learning systems to airplane autopilot functions, it would be a big stretch to say that the powerful algorithms deployed in Silicon Valley and Fort Meade serve a macro-level social regulatory function.

It is certainly clear that mastery of computational intelligence's commercial applications has created a new Californian commercial elite, but that elite is mostly uninterested in governance. The government's faulty deployment of large-scale information technology (as seen in the Obamacare debacle) also does not augur well for an American automatic-state elite. However, some interesting — and troubling — possibilities present themselves at the state, county, and municipal levels of governance.

Cash-strapped state governments, looking for more precise ways of extracting tax revenue for road projects, are seeking to put a mile-tracking black box in every car. Drivers would be charged on a pay-per-mile basis, and government planners hope the system can better incentivize certain driving patterns. Tools like the black box may suggest the dawn of a new age of revenue extraction enabled by cheap, precise, and persistent surveillance. Why not, say, use a black box which (in the manner of a traffic camera) automatically fines the driver for going over the speed limit or violating a traffic regulation?
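To make the mechanism concrete, here is a minimal sketch of the billing logic such a device might run. Everything in it — the per-mile rate, the fine amount, the trip-record format — is invented for illustration; no actual state program is being described:

```python
# Hypothetical sketch of a mileage "black box" billing routine.
# All rates, thresholds, and record formats are invented for illustration.

from dataclasses import dataclass

PER_MILE_RATE = 0.015   # dollars per mile driven (hypothetical rate)
SPEEDING_FINE = 75.00   # flat fine per speeding violation (hypothetical)


@dataclass
class TripRecord:
    miles: float        # distance recorded for the trip
    max_speed: float    # top speed recorded, in mph
    speed_limit: float  # posted limit along the route, in mph


def assess_charges(trips: list) -> float:
    """Total owed for a billing period: per-mile tax plus automatic fines."""
    total = 0.0
    for trip in trips:
        # The tax function: charge for each mile driven.
        total += trip.miles * PER_MILE_RATE
        # The enforcement function imagined above: one extra comparison
        # on data the device is already collecting anyway.
        if trip.max_speed > trip.speed_limit:
            total += SPEEDING_FINE
    return total


if __name__ == "__main__":
    month = [
        TripRecord(miles=12.4, max_speed=61.0, speed_limit=55.0),  # speeding
        TripRecord(miles=230.0, max_speed=70.0, speed_limit=70.0),
    ]
    print(f"Owed this billing period: ${assess_charges(month):.2f}")
```

The point of the sketch is how thin the line is between the tax function and the enforcement function: turning a revenue meter into a traffic cop takes a single extra comparison over data the device already collects.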

In contrast to Burnham's vision of technocratic elites, those who benefit from these technologies are the same unwieldy group of local bureaucrats that Americans must inevitably put up with every time they trudge down to their local DMV. While this may seem innocuous, local government's thirst for new revenue has led to disturbing practices like the drug war habit of asset forfeiture. Though legal, asset forfeiture has stimulated corruption and incentivized constant drug raiding in order to secure more funds.

What technologically enhanced local governments may bring is the specter of automatic and pervasive enforcement of the law. The oft-stated notion that millions of Americans break at least several laws every day suggests why automatic and pervasive enforcement of rules and regulations could be problematic. As hinted in the previous reference to asset forfeiture, this is not merely a question of a rash reaction to the substantial fiscal problems local political elites face.

Politics is a game of incentives, and it is also a question of collective action and cooperation. First, as many people noted in analyses of mayoral corruption in the District of Columbia, local politicians often have little hope of advancing to higher levels of prominence. Thus, they have much less incentive to delay gratification in the hope that a clean image will one day help them become more important. They can either reward themselves while they have power or forfeit the potential gains of public office. Second, the relative autonomy of state and local governments is possible because of the lack of the top-down coordination mechanisms seen in other, more statist political systems. The decision horizon of, say, a county police department is extremely limited, so it can be expected to advocate for itself regardless of the overall effect. These mechanisms are worsened by the fiscal impact of government dysfunction, the decay of infrastructure, privatization, and the increasingly limited resources available to state and local governments.

This mismatch is somewhat understandable given the context of Burnham's original theory. His inspirations were the then-dominant corporatist models seen in 1930s Germany, the Soviet Union, Italy, and other centrally planned industrial giants. He also misunderstood the political importance of the New Deal, claiming it was a sign of America's transformation into a managerial state. As Lynn Dumenil noted in her history of interwar America (and in her own lectures I attended as an undergrad), the New Deal was not a complete break from Herbert Hoover's own conception of political economy. Hoover envisioned a form of corporatist planning in which the biggest corporate interests would harmoniously cooperate on the most important political-economic issues of the day, with the government as facilitator. The technocratic corporatism implied by Hoover's vision was Burnham-like, and the New Deal was a continuation of this model. It differed only in that it made the government the driver of industrial political economy instead of its designer and facilitator.

However, the sustainment of a New Deal-like corporatist model depends on elite agreement, and this was not to last. George Packer, Chris Hayes, and Peter Turchin have all noted that today's American elites lack the cohesion necessary to sustain a technocratic state. Instead, they compete with each other in a zero-sum manner. Silicon Valley entrepreneurs have flirted with the idea of secession. The US government cannot pass a budget that funds the government for more than a few months. A "submerged state" of sub rosa government regulations twists policy towards an affluent few and private interests. The notion that financial regulation was compromised by regulatory capture is not controversial. Finally, a normative conception of elite appropriateness is no longer shared.

What all this suggests is that the impact of an automatic state will be scattered and aggregate. It will be experienced in large part locally, through revenue-extracting technologies that open up hitherto untapped sources of advantage. Political rent-seeking, not social engineering, is the byword. The mechanism of extracting rents, however, is very "managerial" in its operation. In my home state of California, overt attempts to increase revenue have been consistently thwarted by political resistance. The potential for automatic-state technologies to become "political technology" that fixes this problem through much less obvious revenue-extraction mechanisms is understandably very attractive. However, the ability to process a constant stream of data from automatic-state technologies will be contingent on the computational power available, which will vary contextually.

Where the automatic state becomes politically and culturally influential beyond pure rent extraction is also where localism will likely matter more. Computational capabilities for automatic enforcement and the subtle structuring of political choice are difficult to deploy at the national level except in a fairly piecemeal way, due to national political constraints. At the local level, however, where one party or interest may face vastly fewer constraints, it is much more likely that a computational system designed to structure cultural or political choices toward a preferred result could emerge. Even without such partisan considerations, there is always a school district eager to ban student behavior it dislikes, or a prosecutor seeking to ramrod a given result, that would see such technology as a boon.

All of this isn’t to completely dismiss the potential for federal usage of these technologies. But, as seen in the NSA scandal, mass domestic surveillance in an environment where the public is not afraid of a 9/11-esque event occurring may not be politically sustainable in its current form. A patchwork of “Little Brothers” tied to a revenue extraction mission, however, is a far more diffuse and difficult political issue to organize around.

If the automatic state comes, it is not likely to come in the form of a Big Brother-like figure hooked up to a giant machine. Rather, it might very well be a small black box in your car that measures your mileage — and that proves so successful it is soon modified to track your speed and your compliance with traffic regulations.

Rasputin’s Apocalypse

Friday, August 23rd, 2013

[ by Charles Cameron — most likely a “foolish virgin” (Matthew 25) — I had no idea today was the day until today ]
.

.

According to Pravda, which I believe means Truth:

August 23, 2013 is the day, for which the infamous Grigory Rasputin predicted the end of the world in the beginning of the last century. Rasputin predicted a “terrible storm” in which fire would swallow all life on land, and then life would die on the whole planet. He also said that Jesus Christ would come down to Earth to comfort people in distress.

I would be remiss if I did not attempt to warn you…

Of dualities, contradictions and the nonduality II

Tuesday, July 30th, 2013

[ by Charles Cameron — notes towards a pattern language of conflict and conflict resolution: bridging divides in Baghdad 2013, Netherlands 1888 and the Germanies 1961 ]
.

I’ll be collecting examples of “dualities and the non-dual” here, because they give us a chance to consider the pattern that underlies “conflict and conflict resolution” and much else besides. This post picks up on an earlier post on the same topic: I’ll begin with three tweets that came across my bows this last week…

First, a vivid glimpse of sectarianism in today’s Iraq:

Second: sectarianism in the Netherlands, 1888:

And last, unexpected but charming, the divided Berlin of 1961:

It's obvious once you think about it — though we don't always remember, such is the mind's propensity to distinguish, divide, and argue from just one half of the whole — that human nature embraces both conflict and conflict resolution.

