Heavy breathing on the line: The words of the prophets are written in the circuit switching

[dots connected by Lynn C. Rees]

Sigh

Sigh

What did Lucius Aemilius Paullus know and when did he know it?

The path inaugurated by that question follows a specific route. A route is lit up by every move you make and every step you take.

At least if online.

In the beginning there was the circuit…

It followed a particular circuitous route. Each stage was preset. Once preset, you crossed your fingers and hoped the route stayed unbroken.
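(A minimal sketch, in Python with invented names, of what “preset” means here: the circuit is a fixed chain of reserved links, and one dead stage kills the whole call.)

    # A toy "circuit": a preset chain of links reserved end to end.
    def circuit_up(links):
        """The call works only if every preset stage along the route is up."""
        return all(up for _, up in links)

    route = [("A-B", True), ("B-C", True), ("C-D", False)]  # one broken stage...
    print(circuit_up(route))  # False -- ...and the whole route is down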

If the path was broken, there’s a chance your fyrd Rohirrim friends with their non-existent cavalry might not get to Hastings the Pelennor Fields in time to defeat Guillaume le Batard Sauron and his Normans Orcs.

That would be a real poke in the eye.

Communications can be time sensitive. Not all of us have time to fight in the shade.

Or sit down to a meal.

Or have enough helots to pick up a new hobby.

Sometimes we need John Boyd restlessly pacing the sidelines heartily cheering “faster! Faster!! FASTER!!!”

And that’s why the NSA records (meta)data on all Americans.

17 Responses to “Heavy breathing on the line: The words of the prophets are written in the circuit switching”

  1. Lynn Wheeler Says:

    a little cross-over on privacy from earlier in this thread (privacy constraints have been significantly relaxed since that time, also some common players between the financial industry and gov).
    https://zenpundit.com/?p=23942#comment-118732
    *
    fast … nearly 20yrs ago we were brought in to a large financial transaction outsourcing processor (something like 500m accounts on behalf of various financial institutions). they wanted to enable banks to offer target marketing services to merchants. the idea was that there would be daily account profile updates (of all electronic transactions in the US that day) and monthly offer profile matching. The problem was that their technique would have resulted in daily updates taking weeks of elapsed time to run and monthly operations taking years of elapsed time. Our task was to reduce daily (everything that happened that day) to an hour or less and monthly (longer term patterns) to 8hrs or less (more than a thousand times faster).
    *
    the information was anonymized and banks could allow merchants to specify a profile for the offers … and then the banks would notify their customers (that matched the profile) of the merchant offers (charging merchants a fee for the service) … with the customers deciding whether to follow up on the offers. there were frequent privacy reviews and audits of the program and implementation by more than a dozen privacy organizations (aka trust but verify). I offered to also implement householding … i.e. recognize collections of accounts and people patterns … but the banks were worried about losing account control (where a group might have accounts at multiple different institutions). Since then some of the techniques have also been used for merchant loyalty card programs
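    A rough sketch of that offer/profile matching, in Python — all field names, thresholds, and tokens invented for illustration: the merchant specifies a target profile, the processor matches it against anonymized account profiles, and only the issuing bank can map the matches back to actual customers.

        # Hypothetical sketch of anonymized offer/profile matching (invented data).
        # The processor matches merchant criteria against anonymized profiles;
        # only the issuing bank can map an opaque account token back to a person.

        account_profiles = {
            # opaque token -> aggregated spend by category (no names, no card numbers)
            "acct-7f3a": {"travel": 1200.0, "dining": 300.0},
            "acct-91bc": {"travel": 50.0, "dining": 900.0},
        }

        merchant_offer = {"category": "travel", "min_spend": 500.0}

        def match_offer(profiles, offer):
            """Return the opaque tokens whose profile meets the merchant's criteria."""
            return [tok for tok, spend in profiles.items()
                    if spend.get(offer["category"], 0.0) >= offer["min_spend"]]

        matched = match_offer(account_profiles, merchant_offer)
        print(matched)  # ['acct-7f3a'] -- the bank notifies these customers, not the merchant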
    *
    At a computer DBMS conference … one of the attendees from the gov. said offline that he would have to stop using credit cards since I would know more about them than they knew about me (obviously wasn’t used to frequent reviews/audits by a large number of privacy organizations … might consider the theme: if I could do it, why couldn’t they). Note that advances in technology over the last 20yrs would easily accommodate the whole world.
    *
    for other drift: The other hacking scandal: Suppressed report reveals that law firms, telecoms giants and insurance companies routinely hire criminals to steal rivals’ information. Suppressed official report accuses respected industries of hiring criminals to steal rivals’ secrets. Yet an official report into their practices has been suppressed
    http://www.independent.co.uk/news/uk/crime/the-other-hacking-scandal-suppressed-report-reveals-that-law-firms-telecoms-giants-and-insurance-companies-routinely-hire-criminals-to-steal-rivals-information-8669148.html

  2. Lynn C. Rees Says:

    Hi Lynn,

     

    Have you ever produced a high-level overview of how you’ve seen information technology (especially for databases), government, and corporations evolve (and snake) together since the inception of your career? That would be useful for the ongoing dialog on how politics shapes technology and how technology (shaped by politics) reaches back and shapes politics.

     

    We’d need to translate it from tech to civilian. My next posts will be an extended experiment in whether the political economy of the IPv4 network stack (starting with the political implications of circuit vs. packet switching) can be explained to those who’ve never heard of IPv4.

     

    I may upgrade it for IPv6. But I may not. No one else is upgrading to IPv6.

     

  3. Lynn Wheeler Says:

    random stuff … from long ago and far away
    *
    As an undergraduate in the 60s, the univ. library got an ONR grant to do an online catalog. The project was selected as beta-test for IBM’s original CICS transaction monitor product. I got tasked with supporting and debugging CICS. In the 90s, got involved with the NIH’s NLM … some people still there had started the implementation in the 60s … which was very similar to what I had worked on … but w/o CICS … doing their own platform from scratch. misc. past posts mentioning CICS &/or BDAM
    http://www.garlic.com/~lynn/submain.html#cics
    *
    In the 70s, was involved in the original relational/sql implementation at SJR and worked with Jim Gray. When Jim left for Tandem, he was palming off a bunch of stuff on me … including consulting with other DBMS organizations and interfacing to customers. Misc. past posts mentioning the original relational/sql implementation
    http://www.garlic.com/~lynn/submain.html#systemr
    *
    In the 80s, was working with NSF and some of the NSF supercomputer centers. We were supposed to get $20M to tie together all the centers. Congress cut the budget and some other things happened, and then NSF releases an RFP. Internal corporate politics prevented us from bidding. The director of NSF tries to help by writing a letter to the company, copying the CEO … but that just made the internal politics worse (as did comments that what we already had running was at least five years ahead of all RFP responses). This becomes the NSFNET backbone which then morphs into the modern internet. Some past posts
    http://www.garlic.com/~lynn/subnetwork.html#nsfnet
    reference here on evolution from interconnecting NSF supercomputer centers
    http://www.technologyreview.com/featuredstory/401444/grid-computing/
    *
    In the 80s and early 90s we were also working on cluster scaleup for supercomputers … but also scaleup for RDBMS. old post about a meeting in Ellison’s conference room about scaleup to 128-way, early Jan. 1992
    http://www.garlic.com/~lynn/95.html#13
    some old related email
    http://www.garlic.com/~lynn/lhwemail.html#medusa
    within hrs of the last email in the above … cluster scaleup is transferred and we are told we can’t work on anything with more than four processors. By the middle of Feb 1992, cluster scaleup was announced as a supercomputer for scientific and numeric intensive work *ONLY* (no commercial, and no DBMS). This is a major motivation in the decision to leave later in 1992. One of the things done was some scaleup tricks for DBMS logs and how to recover distributed logs in the correct sequence. At the time, most people are apprehensive about the complexity, but a decade later, I’m contacted because several people feel confident enough to tackle the problem. Some past posts
    http://www.garlic.com/~lynn/subtopic.html#hacmp

    After we leave, we are asked in to consult with a small client/server startup. Two of the other people in the Ellison meeting are now at the startup responsible for something called commerce server and they want to do payment transactions on the server; the startup had also invented “SSL”, which they want to use; the result is now frequently called electronic commerce. I have to audit SSL and the associated infrastructure (like the certification authorities selling SSL certificates) and come up with a number of requirements for use and deployment. I have final sign-off on the interface between commerce servers and the payment gateway (that talks to financial networks) but only advisory on the client/server side. Almost immediately the requirements are violated … contributing to many exploits that continue to this day. The financial infrastructure is used to diagnosing circuit-based infrastructure between financial operations and merchants. I have to remap all this to packet-based infrastructure including new processes, diagnostics, and procedures (the standard is that the trouble desk does 1st level problem determination within 5mins for circuit-based infrastructure … I have to try and match this for packet-based infrastructure).

    disclaimer: at one point in the past, I’m asked if I would be interested in running the gov. IPv6 program … but that isn’t anything I really do.

    I also do an online index of internet RFC standards (and before he died, the long-time RFC editor used to let me do part of STD1)
    http://www.garlic.com/~lynn/

    in part because of the RDBMS scaleup work, we are asked to consult on the 2000 Census … in 1996/1997 they are replacing 20yr old dataprocessing equipment getting ready for 2000. When they have an audit … I get stood up in front of the room for a day of answering questions.

    last decade (before he disappears), Jim Gray is running San Fran m’soft research and cons me into interviewing for chief security architect in redmond. the interview goes on for 3-4 weeks, but we can’t come to agreement.

  4. Lynn C. Rees Says:

    Hi Lynn,

     

    Tell us more about the late great Jon Postel. He plays a key part in our story.

     

    Jim Gray did ACID? He has provided me with many a brick to throw at those who hate transactional RDBMSs merely because they use SQL. RDBMSs have their place. “NoSQL” databases have their place… Preferably far away from something important.

  5. Lynn Wheeler Says:

    part of ACID
    http://en.wikipedia.org/wiki/ACID
    was having benchmarks that compared apples-to-apples … leading up to the tpc benchmarks
    http://www.tpc.org/information/who/gray.asp
    but ACID was also important for auditors of financial systems and trusting results … tribute to jim after he disappeared
    http://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
    Gray is known for his groundbreaking work as a programmer, database expert and Microsoft engineer. Gray’s work helped make possible such technologies as the cash machine, ecommerce, online ticketing, and deep databases like Google.

    … snip …
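    For readers who haven’t met ACID, a minimal atomicity sketch using Python’s built-in sqlite3 (illustrative only, not the systems Gray built): either both halves of a transfer commit, or neither does — the property auditors of financial systems lean on.

        # Minimal atomicity sketch using Python's stdlib sqlite3 (illustrative only).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")
        con.executemany("INSERT INTO accounts VALUES (?, ?)",
                        [("alice", 100.0), ("bob", 0.0)])
        con.commit()

        try:
            with con:  # one transaction: commits on success, rolls back on any exception
                con.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 'alice'")
                con.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 'bob'")
                # pretend a business rule (no overdrafts) fails the transfer
                (bal,) = con.execute("SELECT balance FROM accounts WHERE id = 'alice'").fetchone()
                if bal < 0:
                    raise ValueError("overdraft")
        except ValueError:
            pass

        print(con.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
        # [('alice', 100.0), ('bob', 0.0)] -- the failed transfer left no partial update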
    *
    I claimed system/r (and subsequent rdbms) made trade-offs for financial transactions … i.e. pre-organizing data into tables meant that all the data from a financial transaction was in a single record (coming much closer to IMS thruput for financial transactions). This has the downside of needing data prestructured into mostly homogeneous records. At the same time as system/r … i was also involved in a relational implementation that wasn’t table-based and wasn’t prestructured and allowed arbitrary any-to-any bi-directional relations. I use it for the RFC standards information and the merged glossary and taxonomy information … with the application generating static HTML. Not requiring homogeneous prestructured data makes a lot of the real world easier. One of the side-effects is that it is trivial to use the result of queries for adding, changing and/or updating the structure of the RDBMS (which can be used for automating and simplifying discovery).
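    A toy contrast in Python (data invented): table rows force every record into the same prestructured columns, while arbitrary bi-directional relations need no up-front schema, and query results can feed straight back into the structure.

        # Toy contrast (invented data): homogeneous table rows vs. arbitrary
        # bi-directional relations with no schema decided up front.
        from collections import defaultdict

        # table-style: every "row" must carry the same prestructured fields
        rfc_table = [
            {"rfc": "RFC 791", "title": "Internet Protocol", "status": "STD"},
            {"rfc": "RFC 793", "title": "Transmission Control Protocol", "status": "STD"},
        ]

        # relation-style: any term can relate to any other, in both directions
        relations = defaultdict(set)

        def relate(a, b):
            relations[a].add(b)
            relations[b].add(a)   # bi-directional by construction

        relate("RFC 791", "J. Postel")
        relate("RFC 793", "J. Postel")
        relate("RFC 793", "TCP")

        # a query result can itself be used to add more structure
        for rfc in [r for r in relations["J. Postel"] if r.startswith("RFC")]:
            relate(rfc, "author-index")

        print(sorted(relations["author-index"]))  # ['RFC 791', 'RFC 793']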
    *
    Late 90s, real storage was getting large enough that we started to see “in-memory” RDBMS … which were significantly faster than traditional disk RDBMS (a benchmark of an in-memory RDBMS was ten times faster than a traditional RDBMS even with all data cached in memory). Telcos started using “in-memory” RDBMS for call records. Late 90s, there was also analysis that telcos were going to take over the credit/debit business because they could handle the expected explosion in micro-payments (which has yet to happen).
    *
    The RFC-Editor function was run out of USC/ISI … (total aside: the former head of IBM research computer science went to head up ISI). postel wiki
    http://en.wikipedia.org/wiki/Jon_Postel
    *
    Postel’s STD1 had a whole bunch of inconsistencies and after loading all the information … i did simple knowledge rules for identifying the inconsistencies. That was when he let me start doing part of STD1 and cleaning up the inconsistencies. HTML is one-direction relations, so I have to present lots of different views in an attempt to simulate the original. My author index has a whole bunch of entries for Postel.
    *
    Trivia: CTSS runoff was migrated to CMS in the mid-60s as “script”. GML was invented at the science center in 1969 and GML tag processing added to script. decade later, GML morphs into international standard SGML. After another decade, it morphs into HTML at CERN (i’ve been doing markup language for over 40yrs).
    *
    Trivia: I had equipment in an Interop88 booth … the sunday before the start of the conference, the floor nets start crashing … lasted until almost the start Monday morning before the problem was identified. Resulted in a requirement in rfc1122.

  6. Lynn Wheeler Says:

    Note that some of the CTSS people went to the 5th flr for project Mac and did Multics … some of which later morphs into unix. Others of the CTSS people went to the science center on the 4th flr and did virtual machines, various online computing, the internal network, lots of performance and workload profiling (some of which eventually morphs into capacity planning) … as well as GML. The internal network was larger than the internet from just about the beginning until late 85 or early 86. The big change for arpanet/internet was the 1jan1983 change-over to internetworking protocol … at that time, there were approx. 100 IMP network nodes and 255 connected mainframes … while the internal network was close to passing 1000 nodes. I’ve claimed that the internal network had effectively a form of gateway in every node since the beginning … significantly simplifying adding nodes and infrastructure. Arpanet was restricted by the tightly controlled IMPs (which it was freed from 1jan1983). misc. past posts mentioning the internal network
    http://www.garlic.com/~lynn/subnetwork.html#internalnet
    the corporation also supported an educational network called BITNET (and EARN in Europe) that used a form of the internal network technology … also more nodes than arpanet/internet in the first part of the 80s … some past posts
    http://www.garlic.com/~lynn/subnetwork.html#bitnet
    *
    At interop88 … there were a lot of vendor booths offering OSI products (instead of TCP/IP) … the federal government was mandating everything move to OSI (GOSIP) and the elimination of the internet. I was on the XTP technical advisory board doing a high-speed protocol (HSP). It was taken to the ANSI networking standards body … and was rejected. As an ISO body, it was under requirements that it only do OSI-conforming standards. HSP:
    1) supported internetworking protocol … which doesn’t exist in OSI
    2) had high-speed path from transport layer directly to LAN MAC interface … bypassing layer 3/4 interface
    3) had an interface directly to the LAN MAC … which doesn’t exist in OSI … and approx. corresponds to somewhere in the middle of layer 3.
    *
    much of the federal gov. was surprised that it couldn’t stamp out the internet and mandate everything move to OSI.

  7. Lynn Wheeler Says:

    trivia:
    http://www.rfc-editor.org/repositories.html

  8. Lynn Wheeler Says:

    trivia

    [Washington, D.C. and Geneva, Switzerland — 26 June 2013] The Internet Society today announced the names of the 32 individuals who have been selected for induction into the Internet Hall of Fame. Honored for their groundbreaking contributions to the global Internet, this year’s inductees comprise some of the world’s most influential engineers, activists, innovators, and entrepreneurs.

    The Internet Hall of Fame celebrates Internet visionaries, innovators, and leaders from around the world who believed in the design and potential of an open Internet and, through their work, helped change the way we live and work today.

    The 2013 Internet Hall of Fame inductees are:

    Pioneers Circle – Recognizing individuals who were instrumental in the early design and development of the Internet:

    David Clark, David Farber, Howard Frank, Kanchana Kanchanasut, J.C.R. Licklider (posthumous), Bob Metcalfe, Jun Murai, Kees Neggers, Nii Narku Quaynor, Glenn Ricart, Robert Taylor, Stephen Wolff, Werner Zorn

    Innovators – Recognizing individuals who made outstanding technological, commercial, or policy advances and helped to expand the Internet’s reach:

    Marc Andreessen, John Perry Barlow, Anne-Marie Eklund Löwinder, François Flückiger,
    Stephen Kent, Henning Schulzrinne, Richard Stallman, Aaron Swartz (posthumous), Jimmy Wales

    Global Connectors – Recognizing individuals from around the world who have made significant contributions to the global growth and use of the Internet:

    Karen Banks, Gihan Dias, Anriette Esterhuysen, Steven Goldstein, Teus Hagen, Ida Holz, Qiheng Hu, Haruhisa Ishida (posthumous), Barry Leiner (posthumous), George Sadowsky

    “This year’s inductees represent a group of people as diverse and dynamic as the Internet itself,” noted Internet Society President and CEO Lynn St. Amour. “As some of the world’s leading thinkers, these individuals have pushed the boundaries of technological and social innovation to connect the world and make it a better place. Whether they were instrumental in the Internet’s early design, expanding its global reach, or creating new innovations, we all benefit today from their dedication and foresight.”

    … snip …

    trivia: note my web pages include a lot of Internet Society (internet standards) material, and a few years ago I got a take-down notice with a threat of tens of millions in fines. I had to hire a copyright lawyer … to explain the copyright law to the Internet Society.

  9. Lynn Wheeler Says:

    co-worker from science center
    http://en.wikipedia.org/wiki/Edson_Hendricks
    he had been put forward for IHOF after publication of this
    https://itunes.apple.com/us/app/cool-to-be-clever-edson-hendricks/id483020515?mt=8
    but much of IHOF turns out to be ARPANET (not Internet)

  10. Lynn Wheeler Says:

    The Edson reference discusses Postel working on the basis of internetworking by the mid-70s. It isn’t clear that the gov. forces realized how much the transition from IMPs to internetworking was going to significantly relax the rigid top-down command&control they had with IMPs. Later in the 80s, they tried to put the genie back in the bottle by trying to stamp out the internet and force migration to OSI.

  11. larrydunbar Says:

    “But I may not. No one else is upgrading to IPv6.”

    *
    So if you wanted to jump off of a bridge, and no one else wanted to jump, you wouldn’t jump?

    *
    Come on, take the plunge! What are you, some kind of Snowden wannabe!  Where is your faith, man!

    *
    Actually, I look forward to your post on packet switching. I am not sure the words of the prophets are on circuit switching, but any discussion of the clash between Sunni and Shia I find interesting.

  12. Lynn Wheeler Says:

    note that Ed’s wiki entry talks about Postel pushing for low latency transfer. IMPs were fairly rigid top-down command&control … they had significant IMP-to-IMP protocol chatter that scaled non-linearly … as the number approached 100, there could be periods where 90% of the 56kbit capacity was taken up with protocol chatter. The IMPs also did end-to-end buffer reservation as congestion control … all scaling very badly.
    *
    For internetworking, Van Jacobson eventually did slow-start for congestion control … end-point buffer windowing.
    http://en.wikipedia.org/wiki/Slow-start
    However, shortly after slow-start came out there was an ACM SIGCOMM paper showing it was non-stable in a large heterogeneous network. Congestion control can be viewed as avoiding multiple back-to-back packets (trying to space out packet transmission). Large heterogeneous networks tended to exhibit returning ACK-packet bunching, with multiple ACKs arriving back at the sender in clumps … which results in multiple back-to-back packet transmissions.
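    A toy illustration in Python (timings invented) of that failure mode under window-based control: each returning ACK immediately releases new packets, so a clump of ACKs turns into a back-to-back burst.

        # Toy illustration (invented timings): window-based sending releases packets
        # as ACKs return, so a clump of ACKs produces a back-to-back burst.

        window = 8                                  # packets allowed in flight
        acks_arriving = [1, 0, 0, 0, 5, 0, 0, 1]    # ACKs seen per time slot (bunched at slot 4)

        in_flight = window                          # assume the window is already full
        for t, acks in enumerate(acks_arriving):
            in_flight -= acks                       # ACKed packets leave the network
            burst = window - in_flight              # window control refills immediately
            in_flight += burst
            print(f"slot {t}: {acks} ACKs -> send {burst} packets back-to-back")
        # the 5-ACK clump at slot 4 triggers a 5-packet burst -- exactly the
        # back-to-back transmission that congestion control was trying to avoid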
    *
    Some time before, we had done dynamically adjusted rate-based pacing as a method of congestion control which was immune to behavior like ACK-packet bunching (the interval between packet transmissions is explicitly controlled). My observation was that Van Jacobson was trying to deploy congestion control on platforms that had poor or non-existent timer facilities … making timer-dependent implementations impractical … forcing a fall back to state-based window/buffering control. Recently rate-based mechanisms have shown up as things like “FAST TCP”.
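    And a companion toy sketch (same caveats, numbers invented) of rate-based pacing: the gap between transmissions is set by a timer, so bunched ACKs cannot turn into a burst.

        # Toy illustration of rate-based pacing (invented numbers): transmissions are
        # scheduled by a timer at an explicit (but adjustable) interval, so bunched
        # ACKs cannot turn into a back-to-back burst.

        rate_pps = 2.0                  # current paced rate, packets per second
        interval = 1.0 / rate_pps       # explicit gap enforced between transmissions

        next_send = 0.0
        schedule = []
        for pkt in range(8):
            schedule.append(round(next_send, 2))
            next_send += interval       # dynamic pacing would adjust 'interval' here
        print(schedule)                 # [0.0, 0.5, 1.0, ...] -- spacing set by the timer,
                                        # not by when ACKs happen to arrive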
    *
    disclaimer: I had done dynamic adaptive resource management as undergraduate in the 60s (that was shipped in standard vendor products) … which relied on timer-based rate calculations.

  13. Lynn Wheeler Says:

    this is starting to look more like what we were doing in the mid-80s:

    Google making the Web faster with protocol that reduces round trips; Chrome testers to get faster speed with QUIC, an experimental network protocol.
    http://arstechnica.com/information-technology/2013/06/google-making-the-web-faster-with-protocol-that-reduces-round-trips/

    note that HTTP & HTTPS on top of TCP resulted in lots of problems. Nominally HTTP is a single request/response operation (more like a datagram). TCP requires a minimum 7-packet exchange … and in early TCP implementations, FINWAIT processing was consuming 90-95% of the CPU on some webservers as traffic ramped up in the mid-90s (before vendors did a major rewrite).

    XTP (from the 80s) had a minimum 3-packet exchange for reliable transmission (compared to base TCP’s 7 packets, with TLS requiring additional packet exchanges on top of that) and I’ve frequently proposed how TLS would work over XTP (complete TLS/HTTPS all done in a single round-trip). I also wrote up the design for how XTP would do dynamic rate-based pacing as a congestion control mechanism.
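    A rough worked count of those packet totals (labels are schematic, not exact protocol frame types, and the numbers depend on piggybacking and how the session is closed):

        # Rough, illustrative packet counts for a single HTTP-style request/response.
        # Labels are schematic; counts depend on piggybacking and session close.

        tcp_exchange = ["SYN", "SYN+ACK", "ACK",
                        "request", "response",
                        "FIN", "FIN+ACK"]            # the "minimum of 7" above

        xtp_exchange = ["open + request",            # connection setup carries data
                        "response + close",
                        "close acknowledgement"]     # the 3-packet reliable exchange

        print(len(tcp_exchange), len(xtp_exchange))  # 7 3 -- before adding TLS on top of TCP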

    trivia: NOSC was also heavily involved in XTP … and XTP reliable multicast … for shipboard fire-control infrastructure … being able to survive extensive damage and keep operating.

  14. Lynn C. Rees Says:

    Bah. Lousy kids and their newfangled toys. Next thing they’ll try to do is reimplement ATM on top of UDP. Then the token ring guys will come out of the closet and there’ll be anarchy.

  15. Lynn Wheeler Says:

    Careful … we took lots of arrows from the token-ring crowd in the 80s … infamous IBM FUD … even tho my wife is co-inventor on a token passing patent from a decade earlier. I have a much longer-winded pontification in a recent post in a.f.c. on XTP et al
    http://www.garlic.com/~lynn/2013i.html#45
    and longer winded pontification on IBM T/R FUD
    http://www.garlic.com/~lynn/2013i.html#4

  16. Lynn C. Rees Says:

    Wait, wasn’t Bill Gates the one that said no one will ever need more than 16 Mbps? That may have been Chairman MAU…

     

    If only we were all blessed to have the token ring crowd for enemies. Hard to pull back a bow string when you need one hand free to catch the token. And you can’t fire without the token. Plenty of time to maneuver out of the way. Especially if you move faster than 16 Mbps.

     

    Boyd always struck me as an Ethernet guy. You can move at gigabit speeds and get inside the other fellow’s token ring. Though that’s not hard since the other guy’s only moving at 16 Mbps. That explains the first Gulf War.

     

    Ethernet assumes each interface will handle its own mess. Token ring expects you’ll always have someone to pass the buck to. Ethernet creates harmony from chaos. Token ring expects harmony exists without chaos.
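    A toy sketch in Python (grossly simplified, not the real MAC protocols) of the difference: an Ethernet-style station that collides backs off a random amount and retries on its own, while a token-ring-style station just waits for the token to come around.

        # Grossly simplified contrast (toy model, not the real MAC protocols):
        # Ethernet-style stations retry after random backoff when they collide;
        # token-ring-style stations wait for the token to reach them.
        import random

        def ethernet_send(attempt=0):
            """Collision? Back off a random number of slots and try again."""
            if random.random() < 0.3 and attempt < 16:      # pretend 30% collision chance
                backoff = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
                return backoff + ethernet_send(attempt + 1)  # handle your own mess
            return 1                                         # slots used for this frame

        def token_ring_send(my_position, token_position, stations=16):
            """Transmit only when the token comes around to you."""
            return (my_position - token_position) % stations + 1

        random.seed(0)
        print("ethernet slots:", ethernet_send())
        print("token ring slots:", token_ring_send(my_position=3, token_position=12))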

     

    No wonder token ring’s easy to outrun. It’s always going in circles.

  17. Critt Jarvis Says:

    So…. Token ring is fragile, Ethernet is antifragile?

