This morning Professor Sethu Vijayakumar – Director of the Institute of Perception, Action and Behaviour – very patiently talked me through the different areas of robotics research undertaken in the Forum, and explained how the department's range of expertise in Machine Learning has led to a diversity of application domains, each with specific demands and each working with varying amounts (and types) of data. The robotics work here has a focus on embodiment, both in terms of virtual embodiment (avatars, animation and graphics) and actuation systems, such as the i-LIMB Hand™ he's pictured demonstrating for me.
Before going to talk with Sethu, my attempts to borrow library books had been thwarted by the lack of an official staffcard, so I'd consoled myself by watching a number of videos of robots online and sketching out outlines for turning grant proposals into quest narratives instead. I was struck by the apparent desire to personify robots by giving them faces, even if those faces were purely decorative – robots with no need of 'eyes' (cameras), for example. Sethu explained that anthropomorphising robots is problematic: the more human in appearance you make them, the more people instinctively judge them against human standards and become frustrated with their reactions. This is known as the 'uncanny valley' – the space between design and action where human interaction with robots can tip into an instinctive revulsion, based on an innate sense that something is 'wrong'. It turns out to be more effective to portray a robot as a pet – a dog-like creature, for example – so that there is still an emotive engagement, but with lower expectations of ability (e.g. speed of response).
This interaction between human and robot is an important one, since Sethu's work looks at exploiting the best of both worlds: human strengths in reactive, contextualised decision making, and machine abilities in precise action. In practical terms this means cognitive research into the computational decisions behind human motor control, and attempts to replicate their basic principles to make better machines. As multi-purpose machines, we humans are exceptionally efficient – for matched energy consumption and force output, for example, a machine cannot match a human's ability to throw. Our ability to transition effectively between states – stiffening our legs in response to varying walking surfaces, storing energy temporarily in our tendons and muscles, moving with grace and confidence – has led to research into developing similarly flexible actuators for machines, with potential applications ranging from wind farms to prosthetics.
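As a back-of-envelope illustration of that tendon trick – entirely my own toy numbers and a generic series-elastic spring model, not the group's actual designs – a compliant element can bank energy that a stiff one cannot: held at the same force, a softer spring stores more.

```python
# Hypothetical sketch: energy banked in a linear spring held at a fixed force.
# For deflection x = F / k, the stored energy is E = 1/2 k x^2 = F^2 / (2k),
# so lowering the stiffness k (a softer 'tendon') stores more energy that can
# later be released quickly -- one intuition behind variable-stiffness actuators.

def spring_energy(force, stiffness):
    """Energy (joules) stored in a linear spring held at a given force (newtons)."""
    deflection = force / stiffness           # x = F / k
    return 0.5 * stiffness * deflection**2   # E = 1/2 k x^2

force = 50.0  # an assumed load, in newtons
for k in (500.0, 2000.0, 8000.0):  # soft, medium, stiff springs (N/m, assumed)
    print(f"k = {k:6.0f} N/m -> stored energy {spring_energy(force, k):.3f} J")
```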
Humans are irrational – unpredictable, working at less than optimal conditions, ambiguous, and context driven. In one of the robotics labs, Sethu showed me a machine that could play Connect 4: block the vision-input sensors (its 'eyes') and the machine can't play. Move the game-frame away: ditto. Change Connect 4 to chess and you'd have to reprogramme it from scratch. While machines can appear 'intelligent' at a single task – accurate, precise and competent – when multiple contingencies come into play that alter their metric of performance (the priority criteria by which the machine's performance is judged 'good' or 'bad'), the machine's inability to predict the irrational is thrown into stark relief. Down in the main robotics laboratory is a football pitch for a team of humanoid robot football players. Playing football allows them to be assessed for hierarchical decision making and multi-agent interaction. They're programmed through apprenticeship learning: by observing behaviour and trying to extract what has been optimised to achieve it, the decision-making process behind actions can be generalised and scaled between machines of different shapes and sizes.
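To make that apprenticeship idea concrete, here is a minimal sketch – my own toy construction in an assumed one-dimensional world, not anything from Sethu's lab – of learning from demonstration by matching feature expectations: we never see what the expert is optimising, only its behaviour, and we prefer the candidate policy whose behaviour statistics look most like the expert's.

```python
# Toy apprenticeship learning (feature-expectation matching, hypothetical setup):
# describe each demonstrated trajectory by simple features, average them to get
# the expert's feature expectations, then score candidate policies by how
# closely their own feature expectations match. The best match is our guess
# at what the expert was optimising.

import numpy as np

def feature_expectations(trajectories, featurise, gamma=0.95):
    """Average discounted feature counts over a set of trajectories."""
    totals = []
    for traj in trajectories:
        total = sum((gamma ** t) * featurise(state) for t, state in enumerate(traj))
        totals.append(total)
    return np.mean(totals, axis=0)

# Assumed 1-D world: a state is (position, step). The expert's hidden goal is
# to hover near position 0. Features: [distance from 0, effort per step].
def featurise(state):
    pos, step = state
    return np.array([abs(pos), abs(step)])

rng = np.random.default_rng(0)

def rollout(policy, n_steps=30):
    """Generate one trajectory of (position, step) pairs under a policy."""
    pos, traj = 5.0, []
    for _ in range(n_steps):
        step = policy(pos)
        pos += step
        traj.append((pos, step))
    return traj

# Observed expert behaviour (its reward is unknown to us): drift gently to 0.
expert_demos = [rollout(lambda p: -0.3 * p + rng.normal(0, 0.05)) for _ in range(20)]
mu_expert = feature_expectations(expert_demos, featurise)

# Candidate imitators of different 'shapes and sizes'.
candidates = {
    "timid":      lambda p: -0.1 * p,
    "moderate":   lambda p: -0.3 * p,
    "aggressive": lambda p: -0.9 * p,
}

# The candidate whose feature expectations best match the expert's wins.
for name, policy in candidates.items():
    mu = feature_expectations([rollout(policy) for _ in range(20)], featurise)
    print(f"{name:10s} distance from expert: {np.linalg.norm(mu - mu_expert):.3f}")
```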
Which leads me on to the afternoon, when Professor Barbara Webb kindly talked me through her work on Insect Robotics. The reasoning behind studying cognitive behaviour in insects (crickets, fruit flies and desert ants) is that they have between a hundred thousand and a million neurons, making them much simpler than the human brain with its hundred billion-odd. Simpler, but not simple: they are still more than capable of displaying complex behaviours that are replicated across species, giving researchers a more accessible environment in which to study complex processes. The raw processing power of an insect's brain is perhaps equal to that of a standard desktop computer, although the subtlety of its neural connections involves many unknown factors that are currently impossible to replicate in machines.
The insects are chosen for the behaviour they display in reaction to specific sensory input. Crickets, for instance, have a particular response to sound localisation that is translated directly into their leg movements; their ears are on their legs, wired in by tracheal tubes. In the little robot cricket prototype pictured, the circuitry has been wired to reflect that relationship. By capturing crickets' responses to localised sound, researchers are able to create a 3D picture (like an animated skeleton) of the response, which can be replayed at different speeds and angles. Seeing how insects respond in natural environments makes it possible to develop more effective, intuitive algorithms for machines. The intention is for robots to be able to take decisions based on probability. For example, by tracking the behaviour of desert ants (which navigate by polarised light rather than chemical trails), Barbara hopes to develop machines able to situate themselves based on levels of certainty: how certain is the machine about where it is, and how will moving a little test that hypothesis, confirming or disproving it? This would create adaptive, predictive possibilities for navigation in natural environments.
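Here is a minimal sketch of that certainty-based navigation idea – a generic discrete Bayes filter over an assumed ring of corridor cells, not Barbara's actual model: the robot holds a probability for every place it might be, moving blurs that belief, and each new observation sharpens it. 'How certain am I?' is simply how peaked the belief is.

```python
# Hypothetical discrete Bayes filter for localisation on a ring of cells.
# predict() shifts and blurs the belief when the robot moves; update()
# reweights it by how well each cell explains what was just sensed.

import numpy as np

N_CELLS = 10                       # positions around a circular corridor
landmarks = np.array([0, 3, 7])    # cells with a visible landmark (assumed map)

belief = np.full(N_CELLS, 1.0 / N_CELLS)   # start completely uncertain

def predict(belief, move=1, noise=0.1):
    """Shift the belief by the commanded move, leaking probability to
    neighbouring cells to model imperfect motion."""
    shifted = np.roll(belief, move)
    return (1 - 2 * noise) * shifted + noise * np.roll(shifted, 1) + noise * np.roll(shifted, -1)

def update(belief, saw_landmark, p_correct=0.9):
    """Bayes update: weight each cell by the likelihood of the observation."""
    is_landmark = np.isin(np.arange(N_CELLS), landmarks)
    likelihood = np.where(is_landmark == saw_landmark, p_correct, 1 - p_correct)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Simulate: the robot starts at cell 2 and steps right, sensing as it goes.
true_pos = 2
for step in range(6):
    true_pos = (true_pos + 1) % N_CELLS
    belief = predict(belief, move=1)
    belief = update(belief, saw_landmark=bool(true_pos in landmarks))
    print(f"step {step}: best guess {belief.argmax()}, certainty {belief.max():.2f}")
```

Each small move plus observation either confirms or contradicts the current hypothesis, so the certainty figure rises as the evidence accumulates – the 'test by moving a little' idea in miniature.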

Both my conversation with Barbara and my earlier one with Sethu brought up what I see as an interesting conceptual shift in contemporary society, one mirrored in, for example, post-colonial studies: a shift towards 'working with' rather than 'giving orders'. In this case, it's a move from deterministic programming to apprenticeship-style machine learning systems. There's an awareness of strength in diversity and an attempt to harness the best of all worlds: co-assembly in factories, rather than machines replacing humans. I asked Barbara why she'd chosen to produce robot models of the insect behaviours, and her response was that robots have to behave: it makes it easier to look at the wider picture of decision making and behaviour. In simulations it is easy to skip over or ignore problems when they arise (as Sethu had mentioned earlier, in virtual embodiments researchers can choose to suspend gravity or ignore the effects of friction). Building physical machine models keeps the research 'honest' to the real world, and therefore more practically applicable. I was intrigued to find that my few preconceptions about the field of robotics seemed relatively inconsequential to the work actually being done, which struck me as a form of translation – especially physical translation – and much more organic than I'd expected.
The day ended with a philosophical debate with Jon over how Informatics was (or was not) like a Hollywood blockbuster disaster movie, the differences between the Everyday Hero and the Historical Hero in a quest narrative, and the importance of risk-taking in research versus the necessity of proving expertise and competency. It all boiled down to enthusiasm, really, and the different possibilities and pitfalls of presenting that personal enthusiasm.
In other news, a happy coincidence: day four of my Leverhulme residency marked the start of notebook no. 4. I began numbering my notebooks after I submitted my PhD thesis at the start of March (signalling a new writing era), and this is my fourth – A5, classic black Moleskine, about a centimetre thick – since then. Also, I received my first post – an application form for my staffcard – and managed to find a photobooth at lunchtime, so library access should be imminent.