Subliminal Feeds

A question for glanceware practitioners: how to convey multivariate, multistate data at a glance. You have to choose media that we are very good at parsing at a glance. Chernoff faces were an early attempt at this. I guess music would be another good medium to explore. We want users to know things without necessarily knowing how they know, or when they were informed. At one level, glanceware should make knowledge transfer subliminal. Which itself raises the question of when and why we need to convey quantitative or qualitative information.

Imagine a glanceware system to monitor a stock portfolio. Do you need to know what price a given stock is at, or just to pick up, say, a feeling of unease about a particular sector, without necessarily knowing why — with the quantitative data available to a longer glance, or a more conventional drilldown?
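A minimal sketch of that idea, with all numbers, names, and thresholds invented for illustration: collapse a sector's daily price moves into one scalar "unease" value, which could then drive an ambient cue such as a desktop colour.

```python
# Sketch: collapse per-stock daily % moves into a single "unease" signal
# per sector, suitable for driving an ambient cue (e.g. a desktop colour).
# All numbers and names here are invented for illustration.

def sector_unease(pct_changes):
    """Return a 0..1 unease score from a list of daily % price changes."""
    if not pct_changes:
        return 0.0
    # Only losses contribute to unease; gains are ignored.
    avg_loss = sum(min(0.0, c) for c in pct_changes) / len(pct_changes)
    # An average loss of 5% or worse saturates the signal.
    return min(1.0, -avg_loss / 5.0)

def unease_colour(score):
    """Map unease to a calm-green .. alarm-red RGB triple."""
    return (int(255 * score), int(255 * (1 - score)), 0)

tech_sector = [-4.2, -6.1, -3.8, 1.0]   # a bad day in one sector
score = sector_unease(tech_sector)
print(score, unease_colour(score))
```

The point is that the user never sees the numbers — only the colour — while the quantitative data remains one drilldown away.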


  1. glanceware seems to confront, and come full-circle back to, the ever-present question: “Why are our GUIs (and their underlying systems) so stupid?”
    It’s easy to create a glanceable construct for a simple “state” (e.g. Homeland desktop) but when you need to simplify multivariate & multistate data to the point where user parsing of a GUI could be deemed glanceable, then you start needing some pretty sophisticated computer parsing … which IMHO is very immature at this stage (especially when you move away from dedicated-task machines like a broker’s stock-awareness system). There’s an abstraction layer between feeds and GUIs that needs to evolve to the point where we can interact with it in order to conquer stareware (excuse the neologism).

    SPAM is a good example of failure on all these levels and is, perhaps, the best example of applications (despite themselves) moving away from glanceable and being sucked into stareware space.

  2. A lot of the thinking about ‘situated intelligence’ is about our arrangement of the world into patterns we can then process, without taking up conscious thought, through our use of innate (back-brain) pattern matching (cf. Andy Clark’s comment on wristwatches, or indeed written language appreciated as gestalts of words).

    The attentional suck of tech is at least partially because we are presenting data in a form which isn’t parsable without conscious attention — we should, as you say, be using the machines to do that parsing, at which point the interface becomes almost by definition glanceable.

    I seem to remember a story about someone who could identify recorded music by looking at the grooves of a vinyl recording. However, for most of us the process is made much simpler by playing the record on a turntable…

    That’s the level of difference between where we are with interfaces, and where we should be.

  3. Another thought: Interfaces have traditionally been evaluated (to the extent that they have been at all!) by the way people interact with them. Probably because the function of most interfaces (historically) has been to assist document production or information entry/processing. But much of what we want from interfaces now is to simply keep us aware. We don’t want to have to interact at all. Even the ubiquitous computing people seem to be thinking in terms of involvement. I want interfaces and systems which are inherently uninvolving — like glanceware, radio-of-me etc.

  4. one thing that emerges with your last comment is a seeming sense of the impossibility of uninvolving interfaces (of aggregated tasks, let’s say) presented as a “single spatial” feed to the user. Glancing has a discrete quality that implies discrete spatial constructs.
    Perhaps, also, the more uninvolving you want your interface, the more pre-involving that vaporous filterface needs you to be. Is it a zero-sum game?

  5. I don’t think it’s necessarily zero-sum. Have a look, for example, at spacemonger, a tool for visualising server space usage. Very simple, but it allows very glanceful appreciation of who is cramming up your disks. Very little work for the tech (applying a nice space-packing algorithm, which it’s good at), but it produces something very amenable to spatial pattern matching.

    Admittedly it’s only mapping one parameter explicitly, and that to something analogous (disk ‘space’ mapped as area), so it isn’t an example that generalises, but I’m not sure if there will be a general answer — a lot depends on the meaning of the data — should a glance convey value, quality, emotion etc…?

    Pick a problem domain and let’s see what we come up with. I’m also suggesting that everything around us can/should be considered as media, not just a single interface place…
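The spacemonger-style mapping (disk space → screen area) can be sketched with a simple slice-and-dice treemap. The directory names and sizes below are invented for illustration; spacemonger itself uses a more sophisticated space-packing layout.

```python
# Sketch of "space mapped as area": a slice-and-dice treemap that gives
# each (name, size) pair a rectangle proportional to its share of the total.
# Names and sizes are invented for illustration.

def treemap(items, x, y, w, h, horizontal=True):
    """Return (name, (x, y, width, height)) rectangles tiling the given box."""
    total = sum(size for _, size in items)
    rects = []
    for name, size in items:
        frac = size / total
        if horizontal:
            rects.append((name, (x, y, w * frac, h)))
            x += w * frac
        else:
            rects.append((name, (x, y, w, h * frac)))
            y += h * frac
    return rects

usage = [("alice", 40), ("bob", 25), ("logs", 20), ("misc", 15)]
for name, (rx, ry, rw, rh) in treemap(usage, 0, 0, 100, 60):
    print(f"{name:6s} rect at ({rx:.0f},{ry:.0f}) size {rw:.0f}x{rh:.0f}")
```

A glance at the rendered rectangles answers "who is cramming up the disk" with no numbers read at all — the one-parameter-to-area mapping does the work.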

  6. Good example.
    What I meant by “single spatial feed” is more a matter of the relationship between glanceware’s domain (in toto) and the attentional possibilities of the user. Meaning, a user can glance only so much, and at some point multiple (many) glances approach a stare. You start with a nice bit of glanceware (say some color-coded desktop thing) and then as the modules build, they have to combine at some point (or at least channels of them do == layers) to remain glanceable. I feel this is the key to a real usable solution. Much like a seemingly simple desktop color thing … it has to combine with other things going on or it’s not visible.
    Problem domain: something to do with messaging. In a sense the most tautological of glanceware, I suppose — but this enforces a sense of awareness of layers/channels which in other domains is easy to avoid. Something about mail+IM+news+alerts+whatever that glanceable stuff is ideal, and the only solution, for. Pondering.

    Feeds need a “glance coefficient” maybe? A message may be marked urgent, but if the sender has a GC of -1 it doesn’t make it to my phone.
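The “glance coefficient” idea above could be sketched as a simple routing rule. The senders, channels, and thresholds here are all invented for illustration:

```python
# Sketch: route a message to a delivery channel by combining its urgency
# with the sender's per-user "glance coefficient" (GC).
# Senders, channels, and thresholds are invented for illustration.

GLANCE_COEFFICIENTS = {"boss": 2, "newsletter-bot": -1}

def route(sender, urgency):
    """urgency: 0 = ambient, 1 = normal, 2 = urgent."""
    score = urgency + GLANCE_COEFFICIENTS.get(sender, 0)
    if score >= 2:
        return "phone"        # interrupt: vibrate the phone
    elif score >= 1:
        return "desktop"      # glanceable: tint a desktop widget
    return "drilldown"        # only visible if you go looking

# Marked urgent, but the sender's GC of -1 keeps it off the phone:
print(route("newsletter-bot", 2))
print(route("boss", 2))
```

The GC is just a per-sender bias applied before the threshold, so one number per sender is enough to demote a noisy feed out of the interrupt channel.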

  7. Yep. Pretty soon the same problem (managing attention) occurs within/between glance-feeds. How do we manage in the real world? I guess perspective and selective memory, the fovea centralis and beer all play a part…do you listen to the voices or ignore them. holographic surround sound might be a place to start (if we want to jump to implementation technologies) — i’ve never had a problem glancing through conversations in crowded rooms or parties to pick up on the interesting stuff, while my focussed attention is somewhere else…

    just listen to the voices…

  8. actually, the technologies considered to be more intimate (mobile phones & pagers) seem to be further along with this — customised ringtones so you know who is calling without having to take your phone out of your pocket, vibrating ring so you don’t disturb the world around you etc. I wonder if there’s something about computers being a tool that takes hands and eyes to use that has made people accept its attentionally consumptive interface — tools which take hands and eyes, like the axe, need continual focussed attention to avoid harm…whereas, say, mobile phones, being primarily aural, are seen as being part of conversations, which are inherently social and thus need to be discreet…?

  9. there’s a clear oscillation between agglutination and dissection of our tasks and tools. mainframe & terminals (thin) or powerful desktops (thick)?; cellphone & pda & desktop or a human assistant?
    the decision to go with one device or many?
    — juggling hands & eyes while wielding axes (i love that metaphor btw) vs. “safe and simple” attention grabbers —
    the aesthetic/ergonomic/fear-driven factors seem to outweigh most technical considerations. until the technology “just works” (like a vibrating phone seems to) it’s hard to move toward those clearly delineated “modes”.
    when the tools are assimilated into the body (as a vibrating phone is) kinaesthesia kicks in — safe and easy.
    when we have to map so much stuff with our eyes (screens) it gets tiring as hell. that has happened, obviously, because visual mapping has been the easiest thing for the techies to do.
    this morning, pre-coffee(!), I’m feeling that the actual attention grabbing mechanism is the easy part IF the algorithms are there to determine if, say, “a vibration means somebody is trying to call you and not that your email contains a virus” (boring example)
    if “vibrate” starts acquiring different modes it will become as annoying as flashing things on your screen.
    maybe I’m suggesting this: assume that we have close to enough attention grabbing mechanisms, is the real problem deciding which ones mean what and how to narrow the inundating feeds to the point where specific mechanisms are “gently triggered”?

    sorry, this goes nowhere. just mapping things out.
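The worry in point 9 — one physical mechanism acquiring several meanings, at which point “vibrate” becomes as annoying as a flashing screen — can be made concrete with a tiny check. Event and mechanism names are invented for illustration:

```python
# Sketch: assign each event type exactly one attention mechanism, and
# flag any mechanism that ends up carrying more than one meaning.
# Event and mechanism names are invented for illustration.
from collections import defaultdict

ASSIGNMENTS = {
    "incoming-call": "vibrate",
    "email-virus-alert": "vibrate",   # oops: a second meaning for vibrate
    "calendar-reminder": "chime",
}

def overloaded(assignments):
    """Return mechanisms that have been given more than one meaning."""
    meanings = defaultdict(set)
    for event, mechanism in assignments.items():
        meanings[mechanism].add(event)
    return {m: sorted(evts) for m, evts in meanings.items() if len(evts) > 1}

print(overloaded(ASSIGNMENTS))
```

Keeping this mapping one-to-one is the “which ones mean what” half of the problem; the filtering half is deciding which events earn an assignment at all.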

  10. Well, language meets the requirements — assigning meaning relatively unequivocally to symbols, glanceable, multifaceted etc. Maybe we are at some prevocal stage with all of this — all sound and furry (tactile), signifying fuck all. Either we push forward evolution or use the tools we have. Maybe text/spoken word is a way to do this without ambiguity and without channel/meaning overload…(sounds like evading the issue, but why? we’ve evolved alongside language for x^y years for exactly analogous reasons)

  11. ok, then, a proposition: we have invented exactly two distinct tools: the word and the axe. All other inventions are homologous to them. These are the two tools we have to hand for any endeavour. We ‘just’ need to divide it into subtasks, each of which clearly requires one or the other. Discuss.
