Open Source Boyd
John Robb posted the first part of a working paper that extends John Boyd’s Conceptual Spiral into Open Source environments. I want to draw attention to the third potential solution to catastrophic failure (the result of a mismatch between a rigid, hierarchical bureaucracy and a rapidly evolving, chaotic environment) that Robb offers in his conclusion:
C) Decentralized decision making via a market mechanism or open source framework. This approach is similar to process “B” detailed above, except that a much wider degree of diversity of outlook/orientation within the contributing components is allowed/desired. The end result is a decision making process where multiple groups make contributions (new optimizations and models). As these contributions are tested against the environment, we will find that most of these contributions will fail. Those few that work are then widely copied/replicated within components. The biggest problem (opportunity?) with this approach is that its direction is emergent and it is not directed by a human being (the commander).
Some preliminary research in small worlds network theory indicates that very noisy environments will have emergent rule-sets. Human social systems are less tolerant of extended periods of chaos than other kinds of systems because there are caloric and epidemiological “floors” for humanocentric environments that, if breached, result in massive population die-offs, emigration, and radical social reordering. History’s classic example of this phenomenon was the Black Death, which created a general labor shortage that fatally undermined European feudalism. Because of this, military forces, whether of state orientation or irregular, would be forced to react cooperatively and adaptively, however indirectly, toward a consensus in order to maintain at least the minimal economic flows that permit their military operations to be sustained.
April 4th, 2008 at 5:12 am
Taken to its extreme, what is the difference between Robb’s C) and MS&G?
April 4th, 2008 at 10:44 am
Models/Sims/Games?
April 4th, 2008 at 1:28 pm
Ahhhh…..ppl are shorthanding their convo today when I had to rush out of the house without my coffee. I need some clarification:
.
Moon – what’s "MS&G"?
.
John – could you be more specific? I can think of plenty of historical case studies that could serve, but what direction did you mean by "Models/Sims/Games"?
April 4th, 2008 at 3:00 pm
I will ask a question in a complete sentence, since I am a Victorian person trapped in the 21st century.
What is “small worlds network theory”?
(Maybe this is what Mr. Robb is asking.)
April 4th, 2008 at 3:19 pm
Network theory grew out of complexity and chaos theories, and it is of prime interest to physicists because network structures are far more prevalent in nature than anyone realized before circa 1996-1997. Linked by Barabasi was the first popular book on the subject.
.
"Small worlds" refers to a kind of network where the nodes that are not directly connected can be reached via just a small number of other nodes, making small world models a good way of representing social networks, I have to offer up that I do not have the mathematical background to graph these things. My friend Von does a little bit with small worlds research here and there; or Valdis Krebs, who does this kind of graphing for a living. Either of them could give you a fuller explanation than could I.
April 4th, 2008 at 4:30 pm
Well, we learn something every day. According to Martin Shubik and Garry D. Brewer’s Questionnaire — Models, Computer Machine Simulations, Games and Studies, a mebbe interesting paper from RAND, MSG does indeed mean "models, simulations, or games".
April 4th, 2008 at 4:42 pm
Thank you Charles – I needed that. D’OH! I’m slow on the uptake today. 🙂
April 4th, 2008 at 5:07 pm
My apologies for the acronym; I should not have assumed that Modeling, Simulating, and Gaming was widely known. (I wanted to post at GG (GlobalGuerrillas) but had a problem with my TypeKey account, so posted with Zen in lieu.) Mr. Robb, I follow you in the A), B), C) that you laid out, but I wondered if the decentralized, survival-of-the-fittest novelty might not be more efficiently simulated, rather than blowing (most of an organization’s) resources on discovering the viable contributions in Real Life(tm)?
And to come full circle, Zen, I arrived at this question based upon my reading of the first couple of chapters of Barabasi at the library yesterday. (!) The mathematical purity of the Erdos-Renyi random graph links struck me, and was clearly still with me when I read the GG post. I made a connection between random graph links and random simulation outcomes: so, get at Robb’s C) by conducting stochastic experiments in the domain in question, use the fittest solutions to serve as top-down *suggestions/friendly guidance* in B), and let the commanders vet them in real-time, as well as create and synthesize solutions on their own (even a perfect simulation cannot match the randomness of the real thing, eh?). I might call this Informed Self-Organization if I thought it was much more than a passing fancy on my part.
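To make Moon's suggestion a bit more tangible, here is a toy sketch of the "run stochastic experiments, keep the fittest, pass them up as suggestions" loop; every name in it (the candidate courses of action, the simulate function) is a hypothetical placeholder rather than anything proposed in the thread:

```python
import random

def simulate(candidate):
    """Hypothetical stand-in for one stochastic simulation run;
    in practice this would be a model of the domain in question."""
    return random.random()  # placeholder fitness score

# Generate many candidate solutions, test each in simulation, and keep the
# handful that score best. Those survivors become the top-down
# "suggestions/friendly guidance" handed to commanders for real-time vetting.
candidates = [f"course_of_action_{i}" for i in range(1000)]
ranked = sorted(candidates, key=simulate, reverse=True)
suggestions = ranked[:5]
print(suggestions)
```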
April 5th, 2008 at 3:38 pm
Moon,
.
Once you start framing the issue that way, it begins to sound like a genetic optimization algorithm with a human-in-the-loop to serve as the heuristic and to assist in generating new populations (human synthesis supplements the random mutations and crossovers of the genetic algorithm).
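A bare-bones sketch of the genetic-algorithm-with-a-human-in-the-loop idea, with the human review step stubbed out; the fitness function and the review hook are illustrative assumptions, not anything specified in the discussion above:

```python
import random

def fitness(solution):
    """Hypothetical automated score: higher is better (peak at all 3s)."""
    return -sum((x - 3) ** 2 for x in solution)

def human_review(population):
    """Stand-in for the human-in-the-loop: an operator could prune weak
    candidates, tweak promising ones, or inject novel solutions here."""
    return population

def evolve(pop_size=20, generations=50, length=5):
    population = [[random.uniform(-10, 10) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)           # crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                   # random mutation
                child[random.randrange(length)] += random.gauss(0, 1)
            children.append(child)
        # Human synthesis supplements the random variation each generation.
        population = human_review(parents + children)
    return max(population, key=fitness)

print(evolve())
```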
April 5th, 2008 at 6:56 pm
Wiggins, that’s exactly what it sounded like to me, too. Use human insight to knock the heuristics out of local extrema. (I’ve only tinkered academically with GAs though — just a layman here.) Given my MS&G faux pas, I thought I would avoid the tek talk though: operator non-engineers tend to distrust engineer non-operators. ;-\
April 5th, 2008 at 8:21 pm
[…] example, in a discussion in the comments section over at Zenpundit, Moon and I discussed the applicability of genetic optimization algorithms to the problem, as well […]
April 5th, 2008 at 8:22 pm
Even if one could get past the buy-in challenge, there would be a timing issue. If human intervention is required for each iteration of the algorithm, then the iterations necessary for a GA become prohibitively time-consuming and therefore impractical.
.
BTW – I’ve started a thread over at OSD, so we don’t hijack Mark’s generosity with our geeking out.
http://opposedsystemsdesign.blogsome.com/2008/04/05/decentralized-decision-making-and-heuristics/
April 6th, 2008 at 4:22 am
Thank you Wig. As genetic algorithms far exceed my humble mathematical education, I’ll take in the discussion at OSD 🙂