WikiLeaks: Critical Foreign Dependencies
[ by Charles Cameron ]
The most interesting part of the WikiLeaks-posted State Department Request for Information: Critical Foreign Dependencies, it seems to me, is the part that ties in with Zen’s recent post Simplification for Strategic Leverage.
Zen referenced Eric Berlow’s recent TED talk to the effect that sometimes a complex network can be made effectively simple by reducing it to the graph of nodes and links within one, two or three degrees of the node you care about and wish to influence.
“Simplicity often lies on the other side of complexity”, Dr Berlow says, and “The more you step back, embrace complexity, the better chance you have of finding simple answers, and it’s often different than the simple answer that you started with.”
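Berlow’s reduction, keeping only the nodes within a degree or two of the one you care about, is in effect a bounded breadth-first search. Here is a minimal sketch, assuming the network is held as a plain adjacency dict (my illustration, not Berlow’s own code):

```python
from collections import deque

def ego_subgraph(adj, focus, degrees=2):
    """Prune a network to the nodes within `degrees` hops of `focus`.

    adj: dict mapping each node to a set of neighbouring nodes.
    Returns the pruned adjacency dict, links restricted to kept nodes.
    """
    dist = {focus: 0}
    queue = deque([focus])
    while queue:                      # breadth-first search outward
        node = queue.popleft()
        if dist[node] == degrees:     # stop expanding at the horizon
            continue
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    keep = set(dist)
    return {n: adj.get(n, set()) & keep for n in keep}
```

On a five-node chain A–B–C–D–E, an ego map of A at two degrees keeps only A, B and C: the thousands-of-nodes problem becomes a napkin problem.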
*
This resonates neatly with a few things I’ve been thinking and talking about for some time now.
1. There’s the need for visualization tools that don’t operate with as many nodes as there are data points in a database like Starlight — I’ve been wanting to reduce the conceptual “load” that analysts or journos face from thousands, sometimes tens of thousands of nodes, to the five, seven, maybe ten or twelve nodes that the human mind can comfortably work with:
What I’m aiming for is a way of presenting the conflicting human feelings and understandings present in a single individual, or regarding a given topic in a small group, in a conceptual map format, with few enough nodes that the human mind can fairly easily see the major parallelisms and disjunctions, as an alternative to the linear format, always driving to its conclusion, that the white paper represents. Not as big as a book, therefore, let alone as vast as an enormous database that requires complex software like Starlight to graphically represent it, and not solely quantitative… but something you could sketch out on a napkin, showing nodes and connections, in a way that would be easily grasped and get some of the human and contextual side of an issue across.
2. There’s the fact that the cause is typically non-obvious from the effect. In the words of Jay Forrester, the father of stocks and flows modeling:
From all normal personal experience, one learns that cause and effect are closely related in time and space. A difficulty or failure of the simple system is observed at once. The cause is obvious and immediately precedes the consequence. But in complex systems, all of these facts become fallacies. Cause and effect are not related in either time or space… the complex system is far more devious and diabolical than merely being different from the simple systems with which we have experience. Though it is truly different, it appears to be the same. In a situation where coincident symptoms appear to be causes, a person acts to dispel the symptoms. But the underlying causes remain. The treatment is either ineffective or actually detrimental. With a high degree of confidence we can say that the intuitive solutions to the problems of complex social systems will be wrong most of the time.
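Forrester’s point, that in complex systems cause and effect drift apart in time, shows up even in a toy stock-and-flow loop. The sketch below is my own minimal illustration, not Forrester’s model: a single demand shock at t = 5, a four-period shipping delay, and a naive ordering rule that ignores stock already in transit.

```python
def simulate(periods=40, delay=4, target=100):
    """One stock (inventory), one delayed inflow (orders in transit).

    The manager orders enough to close the gap to `target`, but
    forgets what is already on the way -- the classic trap.
    """
    inventory = target
    pipeline = [0] * delay            # orders placed but not yet arrived
    history = []
    for t in range(periods):
        demand = 10 if t == 5 else 0  # a single, obvious "cause"
        inventory += pipeline.pop(0) - demand
        pipeline.append(max(0, target - inventory))
        history.append(inventory)
    return history
```

The dip appears at t = 5, but the system then overshoots to 130 and stays there: an effect displaced in time from its cause and three times its size, exactly the "devious and diabolical" behavior Forrester describes.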
3. There’s the need to map the critical dependencies of the world, which became glaringly obvious to me when we were trying to figure out the likely ripple effects that a major Y2K rollover glitch – or panic – might cause.
Don Beck of the National Values Center / Spiral Dynamics Group captured the possibility nicely when he characterized Y2K as “like a lightning bolt: when it strikes and lights up the sky, we will see the contours of our social systems.” Well, the lightning struck and failed to strike, a team from the Mitre Corporation produced a voluminous report on what the material and social connectivity of the world boded in case of significant Y2K computer failures, we got our first major glimpse of the world weave, and very few of the possible cascading effects actually came to pass.
I still think there was a great deal to be gleaned there — as I’m quoted as saying here, I’m of the opinion that: “a Y2K lessons learned might be a very valuable project, and even more that we could benefit from some sort of grand map of global interdependencies” – and agree with Tom Barnett, who wrote in The Pentagon’s New Map:
Whether Y2K turned out to be nothing or a complete disaster was less important, research-wise, than the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age.
4. That such a mapping will necessarily criss-cross back and forth across the so-called Cartesian divide between body & mind (materiel and morale, wars and rumors of wars, banks and panics):
You will find I favor quotes and anecdotes as nodes in my personal style of mapping — which lacks the benefits of quantitative modeling, the precision with which feedback loops can be tracked, but more than compensates in my view, since it includes emotion, human identification, tone of voice.
The grand map I envision skitters across the so-styled “Cartesian divide” between mind and brain. It is not and cannot be limited to the “external” world, it is not and cannot be limited to the quantifiable, it locates powerful tugs on behavior within imagination and powerful tugs on vision within hard, solid fact.
Doubts in the mind and runs on the market may correlate closely across the divide, and we ignore the impacts of hope, fear, anger and insight at our peril.
*
Getting back to the now celebrated WikiLeak, which even al-Qaida has noticed, here’s the bit — it’s really just an aside — that fascinates me:
Although they are important issues, Department is not/not seeking information at this time on second-order effects (e.g., public morale and confidence, and interdependency effects that might cascade from a disruption).
It seems to me that the complex models which Starlight provides, and Eric Berlow pillories, overshoot on one side of the problem – but avoiding all second-order effects?
One cause, one effect, no unintended consequences?
What was it that Dr Berlow just said? “if you focus only on that link, and then you black box the rest, it’s actually less predictable than if you step back, consider the entire system”…
Avoid all second-order effects?
If you ask me, that’s overshooting on the other side.
December 9th, 2010 at 12:11 pm
I wonder about the best number of nodes, Charles. Perhaps the Dunbar number is a good one to shoot for, since we already are used to that size of human network in our lives.
December 9th, 2010 at 3:44 pm
Hi Bryan:
.
We can apparently "carry" the Dunbar number of entities in memory, but when we’re looking at a map, it’s the Magical Number Seven, Plus or Minus Two we can work with — so for my "analysis-on-a-napkin" purposes, a half dozen nodes simultaneously considered, with a half dozen more that the eye can slide to seems right — and then a Dunbar map with its 150 nodes might be best for the equivalent of a "book length" treatment.
.
Starlight is about right for a reference work, an encyclopedia — but human insight requires some paring down of the dots as well as making the connections between them.
December 9th, 2010 at 4:18 pm
Coincidentally, before reading this, last night I was thinking about the problem of a legal system in which thousands of laws, mostly unknown (at least in their specifics, if not entirely unknown), can be leveraged against members of the society at the whim of prosecutors etc., who have much leeway on application & prosecution.
.
Then, I thought about the Bill of Rights. There were very good arguments against the inclusion of the Bill of Rights with which I still agree; and, some decent arguments for its inclusion. Beyond the "absolute protection of the bare necessities", BOR has a more practical use in providing the populace with a smaller framework/nodes they are able to weigh against any of their own actions and actions committed by government entities and others.
December 9th, 2010 at 5:16 pm
What a neat insight!
.
Now that you mention it, I guess the Seven plus Three Commandments could be seen as serving something of the same function for the 613 mitzvot of the Torah.
December 9th, 2010 at 6:39 pm
Curtis,
"can be leveraged against members of the society at the whim of prosecutors etc. who have much leeway on application & prosecution."
.
I know this is a tangent, but isn’t this what prosecutors are increasingly doing in the U.S. today? We are also seeing this on the other side of the bar as well, with lawyers using increasingly arcane & unknown laws to shake down companies & individuals.
.
Radley Balko has written extensively (specifically about asset forfeiture laws) about the former, and I believe overlawyered.com is a good source on the latter.
.
Regards,
TDL
December 9th, 2010 at 7:16 pm
Charles, another point about cause and effect.
.
It seems to me that these are all overdetermined systems. That is, there are multiple causes that input to a single outcome. There is some winnowing down of causes by decision-makers (decreasing the numbers of nodes), but they also have to balance the various factors toward particular sorts of outcomes.
.
That sounds a little like I’m saying they skew things toward their politics, and that may be part of it, but what I’m saying is broader than that.
.
There are always multiple influences on a single outcome, frequently more than one of which is likely to lead to that outcome. That might be one way of grouping nodes to come up with smaller numbers of them. The problem with that approach is that the nodes will group in different ways for different outcomes/circumstances.
.
I hope I’ve made my point sufficiently obscure.
December 9th, 2010 at 11:23 pm
"I hope I’ve made my point sufficiently obscure" Yes, thanks. It goes with what I have been thinking lately. If I got the numbers right the number 5 has 120 ways of connecting, which is close to the Dunbar number 150, so these connections will be strong instead of weak connections. You may be able to hedge plus or minus 2 so that supports what Charles said about the magic number 7. So 5 is exactly as complicated as 120, and if you turn them into implicit images, or in other words metafors, which it looks like something Starlite is already doing, then what is the problem? I imagine the problem is what Howard Bloom outlines in his book Global Brain. We are really talking about domains, and, as he says, a human being is only able to master 3 domains in his/her life time. So the magic number is too many, it should be the equivalent of 3 in computer years. Of the domains to master, Bloom suggests physics, logic and ethics would be a good start for a philosopher, so we could start there. If you are using three domains to define a puzzle, physics defines it structure, the logic its command and control, and ethic defines its internal forces, which are perpendicular to each other. I think Plato, with the physics of Pythagoras, defined them as gold and silver and the resultant structure force as platinum. This is probably wrong, because I have not read Plato, it is just a guess of an example.
December 11th, 2010 at 3:42 am
Hi Cheryl,
.
"There are always multiple influences on a single outcome, frequently more than one of which is likely to lead to that outcome. That might be one way of grouping nodes to come up with smaller numbers of them. The problem with that approach is that the nodes will group in different ways for different outcomes/circumstances."
.
An IC guy made the same point to me, albeit worded differently, in an email. This approach is like a lens – it reveals data in a certain way that can be informative but not comprehensive.
December 11th, 2010 at 7:41 am
I’m reading David Hackett Fischer’s 1996 book The Great Wave on secular (as in the opposite of cyclical) price trends over the past millennium in Europe. Over that period prices tend to vary between long periods of equilibrium and long periods of inflation. The periods of inflation usually correspond to crises like the Crisis of the Late Middle Ages, the General Crisis of the Seventeenth Century, and the great Crisis of the Twentieth Century. One aspect of these secular waves seems to be the disintegration of the narrative that dominates an equilibrium in the face of change. Following Joseph Tainter, complexity may accumulate during a period of equilibrium that eventually bursts what we might call the general unifying strategic consensus of an equilibrium. One expression of increased complexity is the increasingly tactical nature of everyday existence, manifested as the further division and specialization of labor. This increasingly tactical focus is a major factor in strategic fissures, as humans lose track of the big picture that has predominated in favor of minutiae and trivia. The end result is Tainter’s diminishing returns on complexity, people opt out, and general systemic disaster. Some complex societies collapse, but others find a new big-picture narrative and a new equilibrium. Nassim Nicholas Taleb has identified the central problem of human thought as being the "narrative fallacy". Humans have certain evolved biases and accumulated assumptions that act to compress the world around them in strange and unpredictable ways. In many ways this is how the mind, and society as a whole, stumbles toward your "five, seven, maybe ten or twelve nodes that the human mind can comfortably work with". The historical process seems to be the inflation of a greater and greater number of such nodes until it overwhelms the ability of the prevailing general strategy of a complex society to reduce them to a comprehensible form.
At such points, a complex society will either collapse under the cumulative weight of nodes of information it can no longer collectively or individually process or it will stumble onto a general strategic narrative framework that will reduce the conditions of crisis back to the seven points of information that can be held in the mind at once. If it clears a complexity choke point, a complex society settles back into equilibrium, at least for a time. A strategy of simplifying models is probably a prerequisite for any significantly complex society.
December 11th, 2010 at 8:41 pm
I’ve been having conversations on and off with a bunch of people interested in modeling the political, military, economic, social, information, and infrastructure variables (PMESII) in a society or conflict, using many different approaches, and it has convinced me of just how enormously varied the levels at which people want to be able to model complex structures are, in scale, scope and approach.
.
That’s the background I’m coming from in responding to Cheryl’s post and those following, so I’d like to explain a bit about my own interest here.
.
*
.
So as to be clear – I want to claim a very small corner of that PMESII turf, far from the high tech wizards of database, and closer to the worlds of New Yorker / Atlantic / Harpers style journalism, using anecdote and quote as my means of reducing the number of nodes while endowing them with a richer humanly-appreciable freight of meaning.
.
The "intelligent apparatus" in my own approach is therefore the heart and mind of the analyst, not some piece of software, and that’s why in my "Hipbone Approach" series I’ve been emphasizing the importance of a wide and appreciative reading in the arts and humanities for analysts – and focusing on quotes and anecdotes in parallel (ideas which rhyme, contrapuntal ideas, graphic matches, symmetries, homologies).
.
I’ve been following this line of thought for years. In an online chat session with David Gelernter years ago, I said:
to which he responded:
From my POV, the human mind recognizing a rich correspondence between two rich insights, perhaps even from widely separate domains, is the very essence of creativity — isn’t that what the Taniyama-Shimura conjecture – and thus the eventual proof of Fermat’s last theorem – was all about?
.
*
.
And if that’s the brightest style of thought and understanding our human minds are capable of, shouldn’t we be using that approach in our thinking about complex systems and their dynamics? complex social problems? geopolitics? terrorism?
December 11th, 2010 at 9:04 pm
One of the things that talking with the PMESII folks has convinced me of is how easy it is for us to hold very different conversations while using many of the same terms — perhaps without the participants knowing they are holding different conversations. Or perhaps I should rephrase that, and say it’s only too easy for us to hold a single conversation in which different strands of thought are braided together without anyone noticing that they are still entirely separate.
.
We can only too easily "talk past one another" when our subject is as ill-defined as the proverbial elephant – and thus suffering from the troubles attendant on early exploration from a number of different vantage points.
.
So when Cheryl says:
I’m wondering (a) what sort of level of modeling / mapping she is thinking of, involved with, and perhaps using professionally, (b) what sort of mapping those to whom she would like to present her findings and insights can in practice grasp, (c) whether there’s an inherent need to "prune the nodes" in that situation, and (d) if so whether the pruning is best done by analyst or client – or more broadly, the "intelligence" or "decision" function.
.
If I’m understanding Cheryl, that’s a significant part of the issue her comment is addressing.
.
*
.
For a quick glimpse of the "domain" of the PMESII discussions, see David Hartley’s presentation for the DIME/PMESII Community of Interest.
December 11th, 2010 at 9:18 pm
Larry:
.
I’m not sure that you can conflate numbers of nodes and numbers of edges quite that easily, though there’s a strange sense in which every link (at least from a Hipbone perspective) can be viewed as a node…
.
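For what it’s worth, the arithmetic behind that distinction is easy to check: five nodes admit only C(5,2) = 10 undirected links, while 120 is 5!, the number of orderings of five items, a different quantity altogether. A two-line sketch:

```python
from math import comb, factorial

def link_counts(n):
    """Possible undirected links among n nodes vs. orderings of n items."""
    return comb(n, 2), factorial(n)
```

link_counts(5) gives (10, 120); for the magical number seven it gives (21, 5040).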
If I had to recommend three domains, though, they’d be radiance, duende and finesse – where radiance is the open-hearted, questioning and generous quality of what Zen calls "original face", duende is the inexpressible, spontaneous, almost poetic courage that comes from the raven on one’s shoulder in flamenco, and finesse is the kind of mastery that allows one to play most assiduously – studiosissime ludere, as Ficino’s motto has it.
.
Mastery (finesse) in any domain teaches what mastery is, and it is then more easily found or attained in any other; Fearlessness (duende) allows one to approach any domain without pride or prudishness; and deep generosity (radiance) plays to the healing of wounds, not to the exacerbation of conflict…
December 11th, 2010 at 9:40 pm
M. Fouche introduces a theme from Joseph Tainter, and concludes:
That makes the whole issue all the more urgent for us, I think – and my hope is that we can come up with something better and more complex than the linearity of the sound bite or even the white paper, and yet more richly "human" and less mechanistic than the various forms of purely quantitative technical modeling – while remaining within the grasp of both those who wish to understand, and those to whom they must explain their understandings…
December 11th, 2010 at 9:42 pm
Charles, insightful post; I’ve had a casual interest in the notion of modeling complexity since the Grey Balloons exercise earlier this year. I maintain that there must be a method of gaining insight into the "noise" that is, post-attack, transformed into "sense-making." After reading your post above, I decided to use the traditional (I believe it is traditional) motive, means, opportunity notion as a schema for potential sense-making; but rather than look outward, I turned the words inward and began to explore synonymous relationships, and I believe, on the surface, there are probably patterns… just curious how much sense would result if this web of synonymous relationships filtered/contextualized data. There are, no doubt, reappearing themes and means… patterns.
.
Your post bumps into what I’ve been doing off-and-on all day; deconstructing the notion of "community" and "cultural glue" that makes it possible.
December 12th, 2010 at 1:11 am
Thanks, Scott.
.
I’ve just put up an ironic post illustrating the perils of simplistic network-mapping here on ZP.
December 15th, 2010 at 2:18 am
Taking a leap here, Charles, but consider the patterns which occur in organizations and groups. Groups have established patterns of communications and vocabulary which are unique; from gangs using ebonics, to military guys using acronyms and Spartan phraseology — every group will establish norms for communicating; a Catholic bishop would use a vocabulary different and distinct from a hedge fund manager — even while both speak English. I’m guessing we have enough data to build data patterns with respect to "how" these groups communicate — there must be discernible patterns; the human brain is lazy, in constant search for efficiency (one reason, I’m guessing, that groups develop distinct patterns/vocabularies in their communications), so why wouldn’t some form of probabilistic(?) genetic algorithm work to make sense of the noise and the patterns? Attack the patterns using the three-node schema above as a start — but the pattern(s) will probably reveal nodes/schemas unique to the circumstance that are superior. Does this make sense? While the world is complex and our systems may be complex adaptive and all that, the enemy is human, and communications (particularly verbal) in the long term must follow some sort of pattern — for they must have a schema that conveys meaning…
December 15th, 2010 at 3:45 am
"I’m guessing we have enough data to build data patterns with respect to "how" these groups communicate—there must be discernible patterns; the human brain is lazy, in constant search for efficiency (one reason, I’m guessing that groups develop distinct patterns/vocabularies in their communications), so why wouldn’t some form of probabilistic(?) genetic algorithm work to make sense of the noise and the patterns"
.
Brilliant point, Scott!
.
I would guess that as a loose group coalesces into a "hard" entity, moving from coordinated political action to justification of violence to operational planning, the shared in-group vocabulary escalates by orders of magnitude in complexity, until outsiders cannot really follow the meaning of "insiders" without some kind of initiation into the "secrets". Much like an ancient mystery religion, modern cult or "vanguard" movement.
.
By counting word/phrase proliferation, as well as looking at meanings/context, we can gauge how close/fast a militant group is moving toward self-isolation and violence.
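That gauge could be prototyped very simply: for each time-slice of a group’s output, measure the share of word tokens that never appear in a general baseline corpus (whitespace tokenizing and the raw ratio are my simplifying assumptions here, not anyone’s fielded method):

```python
def insider_ratio(group_slices, baseline_text):
    """Share of tokens in each time-slice that are absent from a
    general baseline corpus -- a crude proxy for accumulated
    in-group vocabulary."""
    baseline = set(baseline_text.lower().split())
    ratios = []
    for text in group_slices:
        tokens = text.lower().split()
        unseen = sum(1 for w in tokens if w not in baseline)
        ratios.append(unseen / len(tokens) if tokens else 0.0)
    return ratios
```

A rising curve over successive slices would be the signal: the group’s language pulling away from the common tongue.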
December 16th, 2010 at 8:33 pm
Hi Scott:
.
I think you have the germ of an extremely neat idea there. I’ve been busy taking care of my boys the last few days while my [ex] wife and [her] more recent spouse were out of town, or I’d have commented sooner.
.
[bracketed comments added later to avoid confusion]
.
1
.
Here’s the thing. As I see it, there are many people on the "tech" side of modern life who are thinking up, developing and applying tech-based approaches to mapping and understanding complex issues in general and terrorism in particular, all of them to some extent quantitative (at least under the hood) — and because each of these tech approaches requires teams and expensive development and deployment costs and such, there are also staff people to interact with DARPA and file RFI responses, etc. And there’s funding available, which means there are interested constituencies, lobbies, etc.
.
That’s the contemporary trend as I see it, and it means that Beltway bandits and boutique shops can explore a dazzling array of tech approaches, on time scales from the speed of a drone operator or pilot’s reflexes to the tactical to the strategic to … or approaching … statesmanlike wisdom in retrospect. And on unit scales that range from the company to the nation, from the lone-wolf or cell to the insurgent franchise, etc.
.
In each of these cases, the human ingenuity and brilliance goes mainly into the insights that build the software — the software then "suggests" relationships for the analyst to ponder — relationships that are as nuanced (and thus "accurate") as the algorithms themselves will allow.
.
2
.
I am (personally) coming at this from a completely different angle.
.
I want to know what happens if we cut out the "middle-machine" (computer intelligence) entirely, and work with human intelligence using the sorts of data that humans process most richly — story, aphorism — and what I take to be humanity’s most powerful means of coming to grips with things — analogy, metaphor, recognition of similarity and pattern…
.
My response to a DARPA RFI that hasn’t even been written, and maybe never will be, would be to suggest putting together a team of say 100 bright, non-linear (i.e., associative) thinkers, drawn 66/33% from the humanities/sciences, widely read, traveled and experienced, so each has a rich personal pattern inventory to draw upon, almost certainly not working under security clearance, drawing on open sources rather than secrets (aware of the limitations and benefits), and suggesting what seem to them to be significant correlations between "quotes and anecdotes" — from white papers, news media, cultural anthropology, depth psychology, the arts and sciences…
.
And I’d use napkin-size node-and-link graphs to help them do it, make a game of it, see it as a playful challenge to intelligence.
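A napkin-size node-and-link graph is simple enough to spell out as a data structure: nodes are quotes or anecdotes, links record why two of them rhyme, and the map refuses to grow past the magical-number-seven range. The cap of nine and the class name below are my own assumptions, a sketch rather than a spec:

```python
class NapkinMap:
    """A concept map small enough to sketch on a napkin."""
    MAX_NODES = 9   # the magical number seven, plus two

    def __init__(self):
        self.nodes = {}   # label -> quote or anecdote
        self.links = []   # (label_a, label_b, why they rhyme)

    def add_node(self, label, text):
        if len(self.nodes) >= self.MAX_NODES:
            raise ValueError("napkin is full -- prune before adding")
        self.nodes[label] = text

    def add_link(self, a, b, rhyme):
        if a in self.nodes and b in self.nodes:
            self.links.append((a, b, rhyme))
```

Doubts in the mind and runs on the market, say, become two nodes joined by one annotated link: the whole game is choosing which nine things deserve the napkin.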
.
Since napkins are cheaper and easier to come by at Starbucks than cappuccinos, though, this project won’t have a senior vice-president at Boeing or MITRE sponsoring it any time soon — but it’s the counter-intuitive option, and it’s the one that lets the superior neuronal fire-power of the alert human brain do the "recognition".
.
3
.
So I applaud your approach, Scott — but my personal predilection is for human wit as the crucial "spark mechanism" that sets more linear minds on track — to develop the implications of insights that their less linear-minded colleagues propose, sort or verify them, explore and expand on them, summarize them, deliver them upwards.
.
Otherwise, I submit, our Intelligence (capital I) is working with only one hemisphere of its (potential) brain…
.
Small print: all points made in this comment have been exaggerated for maximal effect, while also being understated with traditional British reticence.
December 16th, 2010 at 9:07 pm
Scott:
.
You mention genetic algorithms, btw. I’ve a curious cross-disciplinary link for you here…
.
John Holland, widely regarded as the "father" of the genetic algorithm, talks about the Glass Bead Game from Hermann Hesse’s Nobel-winning novel Magister Ludi in his interview in Omni magazine — going so far as to say:
Holland is not alone in using Hermann Hesse’s "game" as a matrix for brilliant original thinking, incidentally — Christopher Alexander’s A Pattern Language and Manfred Eigen’s The Laws of the Game can both be viewed as instantiations of Hesse’s idea.
.
But the neat link here is that my own HipBone Games, from which my entire analytical approach derives, is my attempt to make Hesse’s game playable — on a napkin, solo, or with friends, and preferably over a decent cup of latte…
.
So we may be playing opposite sides of the "Two Cultures" divide, Scott — but Hesse’s game makes a terrific bridge between my napkins and your genetic algorithms.
December 16th, 2010 at 9:15 pm
And hey, Scott — grin — since we’re already talking John Holland here, I just ran across this quote from Holland on the significance of metaphor:
That, in a nutshell, is why I’d use metaphor and analogy as my primary analytic device.