Some complexities in Human Centred Design
I just read this blog post in which Diane Golay, a PhD student in HCI, explains her values on HCI, which are strongly human centred. I fully subscribe to this point of view (hm, that’s from a song, isn’t it? … what was it … Oh, I think it’s followed by Russians loving their children too…) – Anyway: I just wanted to point out that I think in the end it’s a bit more complicated. That should not lead us away from the basic values expressed in Diane’s post – it’s just that the facts of the matter about how humans and technologies interact aren’t always as straightforward as humanist conceptions may want to have them.
So what Diane and many HCD researchers with her hold up high is to see
computers as powerful tools crafted by humans to support other humans in their tasks and, to some extent, “enhance” their abilities.
And especially what I try to tell everyone entering a design education is this important lesson:
Instead of wondering “How can they not get it?”, we should ask ourselves “What did I not get?” (as a designer).
One thing that is not discussed, however, is that there are many humans on this planet. And most computers are in fact tools supporting humans quite well – only these humans are not what we call the “end-users”, who in turn may be very frustrated by that same system. In fact there are often multiple different “users” of a computer system, and often it is no longer one person that is using the system but rather a whole organization, or ‘society at large’.
So when I go through the security system in the airport, it is eventually the government (which, in the end, is ‘all of us’) that is using that system to make sure I am not a terrorist. As a consequence, it may very well be that I, as a simple tourist, have a quite frustrating user experience interacting with luggage checks and body scans and passport control machines that are slow and cumbersome and so on. Of course we should try to make it a satisfactory experience for everyone, but my point is that the starting point here was never to create a tool for the traveller – this is not ‘for’ the traveller, and it is not even for the individual security guard (who is biding her time sitting behind a stupid screen spotting guns in X-ray pictures) – this is a tool for the ‘organization’, and the organization is more than the individual person.
One of the most fascinating philosophical questions that we can explore through the design of HCI is exactly this: to what extent do we want individual people to conform to the needs of the larger organization – if that is supposed to be for the greater good (questions that go back to people like Spinoza and others) – or is our ideal society one in which every individual is free from the chains that bureaucracy and state power put on us? And so on. So this is one complexity we may add to the question of how to design human-centred HCI: do we mean the individual user interacting with the system, or do we mean that complete computer systems should ‘fit’ the needs of larger societal systems (which may sometimes lead to individual people complaining about having to fill out stupid forms online and so on) – or do we feel there is a way in which we can make everybody happy? And even if we can do the latter – if we can make it so that we are checked for terrorism and feel perfectly happy and agreeable about it because it is designed to be the perfect individual experience (think: in Disney World even standing in line is fun!) – is that a good thing? Is that desired? Or does it mean we have designed such maliciously deceiving interfaces that the masses are forever soothed, and those in power will remain so forever as their decisions and structures will no longer be contested – the perfect machine state (think: The Matrix, and so on).
Another thing that Diane says is:
computers are not an independent entity existing in the world, but rather, as I wrote above, a human creation: we have the full power over their functioning and appearance.
Now I think we have to be very precise about what is meant here. It’s a bit like asking whether we have independent decision-making powers apart from what our brain ‘decides’ to do – some say yes, others claim ‘it’s all the brain that is doing it’ and our free will is an illusion. Both of these sides are right in some sense, I guess. We can, right here and now, *decide* that it’s us, and not our brains, doing the deciding. This in and of itself proves that it’s not just our brains doing it – because our brains don’t “do” things – our brains are organs in people, and it’s people doing things. So the “I am my brain” story is a category error. But on the other hand – we can’t bypass the functioning of our own organs. I cannot simply make my heart beat on willpower alone – there have been stories, but it can probably go only so far. Now what does all of this have to do with computers? Well, if computers are truly an extended element and fully enmeshed in human practices – and I think they are – then they are in some sense already so much part of us that we can no longer put ourselves apart from them and think about the one or the other independently. As Diane says: computers are not an independent entity. But that means precisely that we do *not* have full power over their functioning and appearance, just as we have no ‘full’ power over our own brains.
There are three ways in which this shows up, I think:
- If you ever tried to build something you wanted to build using ‘computer materials’, you found that, like any material, it is resistant. The material has constraints that will co-determine what you will ultimately create. Even the most beautiful paintings are not completely constructed in the mind of the painter; they were constructed on the canvas, and the properties of the canvas, the paint and the brush helped to shape the image that was finally produced. Design materials work with designers to create the design. We have no ‘full’ power over their functioning and appearance.
- In using computers people will come to adapt themselves to the structure presented to them, and when they develop skills and ways of being that appropriate these technologies, you can no longer make a distinction between the person and the things – they have become a rich networked whole. Sometimes people go looking for a better tool to do the task. Often they will simply change what they conceive of as their ‘task’ based on what the tool suggests it should be. And our everyday experience of who we are – what it means to be a human – lies somewhere in between these extremes. This is not necessarily a good thing. Computers can be extremely powerful in making people believe and behave according to the norms and goals implicitly hidden in the way the tool operates. They change us. Just look at how we have grown a little screen onto our preferred hand in the past five years. Only the most hard-headed people, with the time on their hands to fight the system, will go against it and really ‘do their own autonomous thing’. Now, given the situation we are in, HCD suggests we should design computers ‘for people’ (and not the other way around). This feels good as a basic orientation. But we should not forget that we *already* are shaped by all the technologies that came before the ones we are designing now. We have been shaped by the way books and formal education have framed what knowledge and learning are. We have been shaped in the way we conceive of living and transportation by the concrete structure of cities. We have been shaped by industrialization and machines in the way we conceive of labor and economy. We are always already fully technological beings. So what ‘human’ do we actually want to design for? Perhaps, to be honest, we should be glad we are no longer the ‘real’, fully natural human that would pop out if we put down our technological clothes. Perhaps the only thing left of us would be a racist, raping, killing, grunting cave-man.
We don’t want to design for him, do we?
- Turning back to my earlier point: computers are often reflections of systems – organizational, societal, political systems. Systems existed before there were computers. I think it is very important to critically reflect on these systems. Some systems are utterly bureaucratic and technocratic, and the individual people in them no longer act as people but only as little cogs in the machine. Personally I think this is a bad thing. But it’s not an HCI problem per se. So what we should be addressing is the way individual people get caught up in these larger organizational systems, up to the point that no single person has the power to break them down – the system has become a power of its own. This happens very quickly – just put together a bunch of 4-year-olds and watch the social dynamics – it’s all in there in prototype form. So it is part of being human that apparently we create social interactive patterns of which we then become, to some extent, mindless slaves. My point is that computers form part of this system – they are not its sole cause. However, in my view they do have a special role in it, because computers can fixate such patterns into hard realities rather than soft constraints. We see this each time an interaction with an institution is automated: when we were talking to the lady at the counter, we could still plead our case, and she might believe us and say, well, ok, only for this time, I’ll let you through, I’ll do the thing – but next time fill in the form, bring the card, pay the money, and so on. A computerized procedure, however, will never do it – it will always act as if it’s the most ruthless, strict, mean counter-official you’ve ever met – the guard that makes life hell for the prisoners in the extra-extra-extra security ward. This is something we should be working against, in my opinion, but it is not just a matter of interface design; it is one of designing bureaucratic systems at large.
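The contrast in that last point – soft constraints that a human can bend versus hard realities that code fixates – can be sketched in a few lines of toy code. This is purely illustrative; all names and rules here are hypothetical, not any real system:

```python
# Toy sketch of the difference between a computerized procedure and a
# human counter-official. All names and rules are hypothetical.

def automated_gate(has_form: bool, has_card: bool, has_paid: bool) -> bool:
    """The computerized procedure: every condition must hold, every time.
    The rule is a hard constraint; there is no one to plead with."""
    return has_form and has_card and has_paid

def human_clerk(has_form: bool, has_card: bool, has_paid: bool,
                plea_is_convincing: bool) -> bool:
    """The counter-official: the same rule, but discretion can soften it.
    'Ok, only for this time... but next time bring the form.'"""
    if has_form and has_card and has_paid:
        return True
    return plea_is_convincing  # the rule stays a soft constraint

# A traveller who forgot the form:
print(automated_gate(False, True, True))     # the machine always refuses
print(human_clerk(False, True, True, True))  # the clerk may still let us through
```

The point, of course, is not that we should add a `plea_is_convincing` flag to every system – it is that the moment a social pattern is written down as a boolean condition, the discretion that made it a soft constraint has already been designed out.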