Pervasive, ad-hoc networking needs robust, effective peer-to-peer platforms. And to prevent the ‘silo effect’ — everyone having to develop their systems from the ground up, reinventing solutions to the same problems — we need standards that are general, scalable, and flexible. Unfortunately, there’s a rather alarming blind spot in many otherwise excellent contenders for p2p standardisation: the need to move digital stuff – data of substantial size – between peers robustly, verifiably and securely.
Back in the day, one insight behind the idea of Being Digital was that bits are easier and cheaper to ship than atoms. E-prefix your business processes, rip and scan your media, and welcome to the frictionless economy, where weight, distance and transaction costs no longer eat into margins.
And the beauty of making stuff digital is that whatever the bits represent (a movie, a songfile, a novel), they can all be transferred over the same kinds of protocols: once it’s possible to move one kind of thing from point to point, the rest simply follow. HTTP, for example, quite happily carries anything that can be digitised. The Web was inherently multimedia-capable from day one.
But ‘things’ made from bits have their own analogues of weight and cost. In particular, for large files or high-resolution data streaming, network bandwidth and quality of service (QoS) massively constrain the what, when and how of data transfer.
Recently I’ve been reviewing, fairly critically, a number of open, generalised peer-to-peer platforms. The need for robust transfer of ‘stuff’ isn’t really being attended to by many of the key players: while many of these systems are elegant, most sidestep the issue of heavy lifting, that is, how to actually move serious amounts of digital stuff peer-to-peer, efficiently and verifiably intact. The common conceit is that inband traffic should consist solely of lightweight XML exchanges, with the heavy lifting displaced elsewhere, out-of-band. Want to chat p2p about your ripped copy of Stalker? JXTA or Jabber will work fine. Want to actually send it p2p over a lossy network? The official line is that other ways to transfer heavy data already exist (FTP, HTTP and so on), protocols each with their own problems in the real world of unreliable, firewalled, ad-hoc environments: exactly the kinds of environments in which the p2p messaging protocols themselves are designed to function. Or you can resort to very inefficient inband encoding, and start building up your own silo of code to ensure integrity and robustness.
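To make the ‘silo of code’ point concrete, here is a minimal sketch of the kind of machinery each developer ends up reinventing when a file has to be squeezed inband through an XML-only message channel: chunking, base64 encoding (with its roughly 33% size inflation), and per-chunk hashing so the receiver can detect corruption and re-request damaged pieces. The `send_message` callable and the `<chunk>` element are hypothetical stand-ins for whatever the platform (a JXTA pipe, a Jabber stanza) actually provides; this is an illustration of the pattern, not any platform’s API.

```python
# Sketch: inband file transfer over an XML message channel, with the
# integrity checking the platform itself doesn't provide.

import base64
import hashlib
from pathlib import Path

CHUNK_SIZE = 64 * 1024  # 64 KiB per message keeps individual stanzas small


def send_file_inband(path: str, send_message) -> None:
    """Chunk, base64-encode and hash a file for transfer as XML messages."""
    data = Path(path).read_bytes()
    total = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for index in range(total):
        chunk = data[index * CHUNK_SIZE:(index + 1) * CHUNK_SIZE]
        # Hypothetical <chunk> element: sequence number, total count and a
        # SHA-256 digest ride along so the far end can verify each piece.
        message = (
            f'<chunk file="{Path(path).name}" seq="{index}" of="{total}" '
            f'sha256="{hashlib.sha256(chunk).hexdigest()}">'
            f'{base64.b64encode(chunk).decode("ascii")}'
            f'</chunk>'
        )
        send_message(message)


def verify_chunk(payload_b64: str, expected_sha256: str) -> bytes:
    """Receiver side: decode a chunk and check its digest before accepting it."""
    chunk = base64.b64decode(payload_b64)
    if hashlib.sha256(chunk).hexdigest() != expected_sha256:
        raise ValueError("chunk corrupted in transit; request retransmission")
    return chunk
```

Every project that goes this route writes some variant of the above, plus retransmission, reassembly and resumption logic on top, which is precisely the duplicated effort a standard platform ought to absorb.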
Neither alternative is good enough. For p2p to get beyond yet more implementations of instant messaging, for it to become generally useful for moving stuff as well as chatting about stuff, someone needs to seriously acknowledge the need for heavy data lifting, and to build it into the otherwise elegant p2p platforms.