Is there an Intel Ark for the Coming of the Exaflood?
An intriguing post from the loudly mysterious Kent’s Imperative:
“SIGINT in the exaflood environment”
“There has been a lot of talk recently regarding the implications of the rising rate of data exchange for policy issues such as network neutrality and broadband penetration. The term exaflood – coined by one particular lobbying group – is apt enough, even if one doesn’t necessarily agree with their proposed solution approaches.
….Traditional SIGINT techniques – even within the relatively new realm of digital network intelligence – are the products of an earlier era, in which the target set and its emanations were distinct enough from its environment to be amenable to capture and analysis using a certain degree of discrimination. The kinds of intelligence that will be required against the adversaries of tomorrow will be increasingly less able to rely on the traditional tradecraft which is undergirded by such assumptions.
We do agree with the statement, frequently attributed to former Assistant Director of Central Intelligence for Analysis & Production Mark Lowenthal, to the effect that “there is no such thing as information overload, only poor analytical strategies.” However, the exaflood will challenge both collection and analytical strategies such as never before. Against this backdrop, we look to the continuing infrastructure, language, and human resources challenges faced by those in this section of the community, and greatly wonder if our future community will be adequate to the task.”
Read the rest here.
Hmmm. What does this mean then? Will the digital environment itself be the target, with “the system” set to be stymied by (and thus alert human operators to the existence of) processing of data pattern anomalies? Looking for “non-haystack”, however defined, to stand out from a sea of carefully studied hay? How do we know the exact parameters of a continuously evolving complex system of systems of networks? My head spins.
I am thwarted in my attempt to comprehend by my inherent non-geekiness. My kingdom for a slide rule!
December 13th, 2007 at 12:03 pm
Whether one is an individual or an organization, one’s schemata in long term memory provide memorized routines to solve many problems — but one’s poor working memory (cognitive capacity, CXO-level capacity, etc) make searching long-term memory on the fly a terrible strategy.
What’s needed is the proper orientation, so one can continue operating even as input patterns change, making decisions only on a narrow range of information, and allowing automaticity to take care of the rest.
December 13th, 2007 at 6:21 pm
"What’s needed is the proper orientation, so one can continue operating even as input patterns change, making decisions only on a narrow range of information, and allowing automaticity to take care of the rest."
It would seem to me that what a virus would love most would be automaticity. It would be very simple to get inside your OODA loop by bypassing your Orientation and changing your observations. Once you begin to use automaticity, it really means the end of your orientation because you could never trust your Observations once they are understood, changed and replicated by the virus.
In fact these viruses would be attacking that which you have automated, so change blindness would be hard to control. These viruses would be attacking you in a very 5GW way.
While speed does represent a simplifying process, would speed be worth the vulnerability and the lack of resiliency to stave off an attack that, to me, automaticity represents?
December 13th, 2007 at 9:54 pm
Good catch on automaticity, observation, and 5GW.
The way to defend against this is through an autoimmune system that disrupts microbial pathogens. (Viri are inert, and so incapable of any form of will.)
"While speed does represent a simplifying process, would speed be worth the vulnerability and the lack of resiliency to stave off an attack that, to me, automaticity represents?"
As a practical matter, it’s not really an option… Working memory capacity at any scale is so limited that the alternative is not decision-based control over all processes: the alternative is anxiety (decisive control to the extent that it injures the equilibrium of the system) leading to paralysis.
Re: system resiliency, another good point, and the reason for defensive 5GWs.
December 14th, 2007 at 10:29 pm
"(Viri are inert, and so incapable of any form of will.)"
I guess you mean that Viri would not maliciously attack. If you gave them what they want, in the form of some kind of pathogen, they wouldn’t keep adapting and attacking the automation process.
I think you are right about the malicious part, but wrong that the viri are incapable of any form of will. I think they are capable of an induced will caused by the area of non-random movement of events that automation represents. These areas of non-random movement act like the loops of a transformer and induce a current into the viri as a form of will.
Global Guerrillas also has this note on automation.
"NOTE: Cyberwarfare, although nascent today, will become a major form of warfare in the next decade as computing power increases by 100 fold and computer automation creeps into every nook and cranny of the global economy. "
I guess what I am really asking is that as you write more code to simplify a complex situation, created by too much code, what stops you from just inducing more complexity into the situation, especially after spreading all these pathogens around? While you are kind of dividing the load up with automaticity, the load is actually increasing exponentially, mainly because of this induced current. Or is that the whole point of automaticity: creating resources for those with resources?
December 15th, 2007 at 5:40 am
"I guess you mean that Viri would not maliciously attack. If you gave them what they want, in the form of some kind of pathogen, they wouldn’t keep adapting and attacking the automation process."
Rather, viri only attack, and are incapable of reason. They thus fall into the category of non-human threats (along with volcanoes, asteroids, etc.) and lie outside the analysis of warfare.
"I guess what I am really asking is that as you write more code to simplify a complex situation, created by too much code, what stops you from just inducing more complexity into the situation, especially after spreading all these pathogens around? While you are kind of dividing the load up with automaticity, the load is actually increasing exponentially, mainly because of this induced current. Or is that the whole point of automaticity: creating resources for those with resources?"
Could you rephrase?
December 16th, 2007 at 7:50 pm
Wow! After re-reading this I am not sure.
I guess I was asking a rhetorical question. What worth would a system have that needs to use automaticity?
December 16th, 2007 at 7:54 pm
I would agree with the other comments–that the answer to this is to sharpen analytical capability.
December 16th, 2007 at 11:21 pm
Granted, but why from your orientation? Dan seems to be using automaticity in his own orientation. But then I might just assume that, which could make an ass out of u and me.
December 17th, 2007 at 1:19 am
Here are some thoughts I just noted down, unshorn of poeticisms, unvetted, uncertain as to quality or impact, in response to your recent blog entry re an ark for the exaflood.
Precis: I’d not give my kingdom for a slide rule — I’d give it for a human mind!
There are a great many ventures in the world today which respond to the complexity of life by recourse to the growth of technologies, so that the most sophisticated devices which can be devised eat up the vast majority of our attention, and only those who are able to enter the inner sancta of the great cyclotrons, intelligence services, databases or whatever can even hope to be privy to the best information.
At an almost invisible other end of the scale are a precious few, in my view, for whom our best hope lies not in the farthest extraordinary but in the closest, in the subtle processes of the human mind itself. For us, the most devastating breakthroughs seem likely not where the human learns more and more about less and less until he or she stands at some absolute twig tip of the tree of knowledge and reaches a nano-glimpse farther, but where she or he pushes back down again into the very roots, asking those questions they first asked as children, but more determinedly now, less willing to accept no for an answer, more rigorous in their probing — and comes up with clarifications and nuances beneath the surface of the known and agreed, hearing if you will the siren calls of shadow birds perched on the roots and singing.
For us, all knowledge is within its acorn, and seven, as the man said, plus or minus two, is about the limit of its moving parts.
The combinatorial potential of the human brain is the engine behind the interface, but "I can carry about six thought-units in my head at one time" is the specification and current limitation of the interface itself — and that is the bandwidth at which both comprehension and communication must inevitably be pitched.
In terms of intelligence:
The novelist — John le Carre springs to mind, Graham Greene — the mythographer, the theologian — in these times of fatwa-guided munitions — the poet, the historian… These were the intelligences receiving and massaging intelligence from Walsingham’s day forward, and should be still.
What we require are humans with minds as multi-faceted as a fly’s eye, peering at issues from all angles, querying all assumptions, empathizing, critiquing, probing, proposing, disagreeing, qualifying, synthesizing — conducting thought experiments in a lab built to support the free physics of dreams, the impossibles, implausibles and imperceivables — for those three comprise the only "left field" from which we can be, and ever and again are, blindsided.
Incomplete, admittedly — but it gives you an idea of my own thinking on such matters…
December 17th, 2007 at 5:37 am
I arrive late but I am here.
If you have written anything further in this regard, beyond what you sent me a while back, I’d like to see how your research has progressed.
I’m not sure I understood you correctly either. Could you expound a bit and clarify?
"The combinatorial potential of the human brain is the engine behind the interface, but "I can carry about six thought-units in my head at one time" is the specification and current limitation of the interface itself"
There is also processing speed to consider, but you have basically summed up the constraints, though relatively few people even operate at that limit or recognize that modalities of thinking and paradigmatic worldviews represent tools that can be consciously utilized.
Socially, our readiest tool for mass improvement in cognitive ability is public education, but the current system is so far from inculcating optimal thinking practices (and getting further away under the systemic effect of NCLB) that it’s difficult to even decide where to start rectifying matters. Even our higher education institutions, particularly in the undergraduate humanities and social sciences, have moved away from pure analytical rigor without imparting intuitive pattern recognition, imaginative insights or synthesis.
December 17th, 2007 at 9:57 pm
>>> Even our higher education institutions, particularly in the undergraduate humanities and social sciences, have moved away from pure analytical rigor without imparting intuitive pattern recognition, imaginative insights or synthesis. <<<
I’d like to pick up there and say a little more, perhaps getting somewhat less poetical this time around, but still with an eye on the human and "liberal arts" end of things.
As I understand it, the Tibetans — if I may stray just a tad — put as much effort into learning visualization for their Geshe degree as we do into verbal learning for our PhDs. In a recent post in answer to a "crystal ball" question about the view from 2020 or 2025 on the gamer-academic’s blog Terra Nova, I suggested:
We are going to understand that handling emotions as we pass through events is very like taking a truck loaded with a variety of volatile explosives through a series of high mountain passes, and will have begun to construct sims which allow us to practice the appropriate moves away from reality and its disastrous consequences.
This will, among other things, require us to internalize a set of kinaesthetic – synaesthetic feeling shapes, which will represent complex spaces and webs of tensions, constantly in weave and process, as something akin to what we now know as cat’s-cradles – but present to the mind’s-eye rather than strung between the hands just in front of the body. A marriage could be seen as one such cat’s-cradle, the requirements of different children’s personalities, the impact of in-laws or affairs, as additional fingers causing constant shifts in the tensional pattern.
To be able to express such patterns in communicable form, in two- or three-dimensional representation of shifting n-dimensional tension-webs, will be the preoccupation of our brightest minds, whether working on primitive "string webs" and mudra-like finger-dances for interpersonal expression, or the development of more subtle software tools for mutual visualization.
Or that’s my guess.