Documents with Tails: Blogjects

Evidently there’s a nice new ‘theory object’ neologism for the class of things of which Documents With Tails are members: blogjects. Thanks to Stephen for reminding me to read that paper.

Glanceware Music Navigation #2

For starters, drop the idea of a single rigid taxonomy — there are too many ways through, even assuming that canonical representations are possible. So we’re probably looking at something at least personal, possibly community-based. Folksonomic tagging would be a start, but how to navigate in a neatly glanceable fashion? I’m thinking of building a personalised acoustic surface, where patches of looped sound are snippets representing genres and subgenres, and which morph into one another ‘at the edges’ so you end up with a navigable 2- or 3-space which is a musical patchwork. ‘Drill down’ into any patch and explore …
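A back-of-envelope sketch of what I mean by the patchwork, in Python. The genre names, positions and falloff width are all placeholders; the point is just that each patch gets a Gaussian ‘halo’ of audibility, so neighbouring loops crossfade as you move between them:

```python
# Sketch of the 'acoustic surface': genre loops pinned to points on a
# 2D plane, with Gaussian falloff so adjacent patches morph at the edges.
# Patch names, positions and FALLOFF are illustrative placeholders.
import numpy as np

PATCHES = {
    "dub":     np.array([0.2, 0.3]),
    "techno":  np.array([0.7, 0.3]),
    "ambient": np.array([0.5, 0.8]),
}
FALLOFF = 0.25  # width of each patch's audible halo, in surface units

def patch_gains(listener_xy):
    """Gain per patch for a listener at (x, y), normalised to sum to 1."""
    pos = np.asarray(listener_xy, dtype=float)
    g = {name: np.exp(-np.sum((pos - p) ** 2) / (2 * FALLOFF ** 2))
         for name, p in PATCHES.items()}
    total = sum(g.values())
    return {name: v / total for name, v in g.items()}

def mix(loops, listener_xy):
    """Weighted sum of equal-length mono loops (numpy float arrays)."""
    gains = patch_gains(listener_xy)
    return sum(gains[name] * loops[name] for name in loops)

# Standing between 'dub' and 'techno' yields a blend of both loops:
print(patch_gains([0.45, 0.3]))
```

The Gaussian falloff gives the morphing-at-the-edges for free: there are no hard boundaries anywhere on the surface, only changing mixtures.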

Glanceware Remotes?

So. Sometime soon, broadband bandwidth and QoS sufficient to stream 16/44k1 audio reliably, long-haul. And at some point a bit later, maybe, OMD aggregators able to provide access to most of everything that way. On my mind at the moment is the question: how to navigate the whole of music space in a glanceable fashion, minus clunky jogwheels and textual taxonomies. As rules of the game, I’m thinking of allowing only a 5.1 surroundfield and a remote usable one-handed, with no interactivity built into the remote itself: effectively a system which could be used in the dark, …
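For a sense of scale, the raw PCM arithmetic is straightforward (before any lossless packing):

```python
# Back-of-envelope: raw PCM bitrate for 16-bit / 44.1 kHz streams.
def pcm_bitrate(channels, sample_rate=44_100, bits=16):
    return channels * sample_rate * bits  # bits per second

print(pcm_bitrate(2) / 1e6)  # stereo: ~1.41 Mbit/s
print(pcm_bitrate(6) / 1e6)  # 5.1:    ~4.23 Mbit/s
```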

Ambient Orb

Tim points this one out. This is exactly what I mean by glanceware. That it is entirely non-linguistic is even better. Know that you know something, without necessarily being aware of how you know it. Their Stock Orb is just what I was proposing here.

Glanceful Kunstkopf?

Thinking more about glanceful interfaces, and the communication of complex multivariate datasets. For reasons I haven’t gotten around to writing about here yet, I’m veering towards sound cues for a lot of things, particularly binaurally-located vocal cues. I’m looking for a pipelining spatialiser using some simple head-related transfer function (HRTF) that I can feed audio into in approximately realtime. For ‘earcons’, simple samples are easy. For vocal cues, I’m thinking of using FESTIVAL. But I need the spatialiser, and I can’t find one that runs on Linux and accepts a stream input. Maybe I’ll have to hack something in Max. …
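Failing that, the crudest possible stopgap would be to fake only the two first-order cues a real HRTF encodes: interaural time difference (ITD) and interaural level difference (ILD). A minimal Python/numpy sketch, working per-block so it could sit in a streaming pipeline; the constants and function names are mine, and this is ITD/ILD panning, not a measured HRTF:

```python
# Crude binaural placement via Woodworth-style ITD and sine-law ILD,
# applied per block of mono samples. Not a real HRTF: no pinna filtering,
# no elevation, just azimuth cues. All constants are assumptions.
import numpy as np

SR = 44_100
HEAD_RADIUS = 0.0875     # metres, average adult head
SPEED_OF_SOUND = 343.0   # m/s

def spatialise_block(mono, azimuth_deg):
    """Pan a mono float block to stereo for a source at azimuth_deg
    (0 = dead ahead, +90 = hard right)."""
    az = np.radians(azimuth_deg)
    # Woodworth ITD approximation: (r/c) * (az + sin(az))
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SR))               # far-ear lag in samples
    near_gain = np.sqrt(0.5 * (1 + np.sin(abs(az))))
    far_gain = np.sqrt(0.5 * (1 - np.sin(abs(az))))
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    near = mono * near_gain
    # Column 0 = left ear, column 1 = right ear
    return np.stack([near, far] if az < 0 else [far, near], axis=1)

# e.g. a 1 kHz beep placed 60 degrees to the right:
t = np.arange(SR) / SR
stereo = spatialise_block(0.3 * np.sin(2 * np.pi * 1000 * t), 60)
```

For spoken cues it might be enough: speech localises tolerably well on ITD and ILD alone, and the whole thing is cheap enough to run comfortably in realtime.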