We are excited to announce that a demo of our system has been accepted for the Intelligent User Interfaces conference!
We will be presenting it in Haifa, Israel in February.
If you want a sneak peek, have a look at our video demonstrating the SAsSy system!
On Monday I gave a talk at TechFest titled “Are you talking to me? What to say when you are talking to Robots?”.
I spoke about what autonomous systems are and how they are different from robots. They do a lot of the same things, but
don’t have a physical form like Honda’s helper robot ASIMO, or even a Furby.
They operate independently of people, and can communicate both with each other and with us.
These systems are everywhere: in our tills, satellites, sometimes even in our copy machines (sorry, I meant multi-functional devices!). Some of these systems are pretty powerful, like self-driving cars and drones, so it's really important to keep people in the loop. We need to be able to understand what is going on 'under the hood', ask why certain things are happening, and be able to change or override system behaviour when it's appropriate.
This is where explanations like the ones we are developing in SAsSy come in: we help people understand complex plans, and the various reasons why things are done a certain way (along with the counter-arguments for those reasons, the counter-arguments for the counter-arguments, and so on). I've posted the slides online; feel free to get in touch if you have any questions about the talk.
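The argument / counter-argument structure mentioned above can be sketched as a simple recursive tree. This is a hypothetical illustration (the class, method names, and example claims are mine, not the actual SAsSy implementation): an argument stands if every counter-argument against it is itself defeated.

```python
# Illustrative sketch of arguments and counter-arguments, not SAsSy code:
# each argument can be attacked by counter-arguments, which can in turn
# be attacked by further counter-arguments.

class Argument:
    def __init__(self, claim, counters=None):
        self.claim = claim
        self.counters = counters or []  # arguments attacking this one

    def is_defended(self):
        """An argument stands if every counter-argument is itself defeated."""
        return all(not c.is_defended() for c in self.counters)

# A plan step, one objection to it, and a rebuttal of that objection.
plan_step = Argument(
    "Deliver via route A",
    counters=[
        Argument("Route A is congested",
                 counters=[Argument("Congestion clears by delivery time")]),
    ],
)
print(plan_step.is_defended())  # the objection is itself countered, so True
```

Walking such a tree is one way to answer the "why?" (and "why not?") questions a person might ask about a plan.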
Thought I’d give you an update on one of the projects I’ve been working on.
One of the high-level questions we're trying to answer is how best to present a sequence of actions, or a plan. We realised that a person needs to understand WHAT they are supposed to do before they start asking questions about WHY the plan is the way it is.
So, for the last couple of months I've been setting up and running experiments on Amazon Mechanical Turk (fondly called MTurk).
MTurk is a crowdsourcing Internet marketplace that enables individuals or businesses (known as Requesters) to coordinate the use of human intelligence to perform tasks that computers are currently unable to do.
- Figure: the best-performing textual plan
- Figure: one of the interactive plans
The first experiment looked at six ways of presenting a pizza recipe. It had pineapple and banana on it – you can imagine the sort of comments we got on our topping choices! Anyway, we compared three textual plans with different levels of headers and slightly different formatting, and three interactive plans.
We figured that some ways of presenting the plans would help more for answering questions about the recipe, and we measured cognitive load (roughly mental effort) in a few different ways. Surprisingly, one of the textual versions tended to do better!
So we ran another, similar experiment with larger plans (125 steps); we thought the advantage of interactive plans might come through when there is too much information to take in at once. These plans were about delivering things (televisions, screens, laptops, etc.) to different locations.
We also looked at using aggregation: a single summary label such as 'move the object' that stands in for three lower-level actions: load a truck, drive the truck, and unload the truck.
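The aggregation idea can be sketched in a few lines of code. This is only an illustration (the plan steps and labels below are made up; the experiment's actual plan format is not shown here): consecutive low-level actions that share a summary label are collapsed into one group.

```python
# Illustrative sketch of action aggregation: each low-level action is
# paired with a summary label, and consecutive actions sharing a label
# collapse into one aggregated step.

plan = [
    ("load truck1 tv1 depot",    "move the object"),
    ("drive truck1 depot shop",  "move the object"),
    ("unload truck1 tv1 shop",   "move the object"),
    ("install tv1 shop",         "install the object"),
]

def aggregate(steps):
    """Group consecutive steps that share the same summary label."""
    grouped = []
    for action, label in steps:
        if grouped and grouped[-1][0] == label:
            grouped[-1][1].append(action)
        else:
            grouped.append((label, [action]))
    return grouped

for label, actions in aggregate(plan):
    print(f"{label}: {len(actions)} step(s)")
# move the object: 3 step(s)
# install the object: 1 step(s)
```

A 125-step delivery plan collapses to far fewer aggregated steps this way, which is exactly the kind of saving in reading effort the experiment was probing.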
We are currently in the process of analysing the results of this second experiment!
When? 19.30-20.30, Monday 23 September 2013
“Are you talking to me? What to say when you are talking to a robot.”
“Robots, mobile phones and computers surround us in our everyday lives. These technologies are continuously advancing, but will there come a time when they are able to make their own decisions and plans without human input? How we communicate with and understand these technologies is becoming increasingly important. Leading researchers from University of Aberdeen’s SAsSy group (www.scrutable-systems.org) share novel research to explain why enabling these technologies to explain their behaviour is vital for the future.”
More about TechFest Aberdeen
We are planning to present some of our current work at the SICSA (Scottish Informatics and Computer Science Alliance) MMI (MultiModal Interaction) Information Retrieval Workshop. The workshop will be held as a one-day event on May 31 2013 at Glasgow Caledonian University. The agenda consists of some interactive sessions, an eminent keynote speaker and some great speakers who will give a flavour of the excellent IR research that is being conducted in SICSA institutions.
You can reserve your place at the workshop here.
Judith Masthoff will be doing a stand-up comedy show at the Bright Club.
Come see her and other funny researchers! (Tickets)
Also there: Susan Morrison (Comic and MC), Ed Patrick (Headliner)
Cafe Drummond, from 20.30 (£6, £4 concession)
So now that you’ve met the team, you might be wondering how the whole project fits together. We’ve put together a short paper (3 pages) that describes the key challenges we are working on. You can download the paper here.
We will present it in Exeter on April 5th at Do-Form: Enabling Domain Experts to use Formalised Reasoning, a symposium (3-5 April) hosted by the AISB (Society for the Study of Artificial Intelligence and Simulation of Behaviour) convention.