Thought I’d give you an update on one of the projects I’ve been working on.
One of the high-level questions we’re trying to answer is how to present a sequence of actions, or a plan. We realized that a person needs to understand WHAT they are supposed to do before they start asking questions about WHY the plan is the way it is.
So, for the last couple of months I’ve been setting up and running experiments on Amazon Mechanical Turk (fondly called MTurk).
MTurk is a crowdsourcing Internet marketplace that enables individuals or businesses (known as Requesters) to co-ordinate the use of human intelligence to perform tasks that computers are currently unable to do.
- [Image: the best-performing text plan]
- [Image: one of the interactive plans]
The first experiment looked at six ways of presenting a pizza recipe. It had pineapple and banana on it – you can imagine the sort of comments we got on our topping choices! Anyway, we compared three textual plans with different levels of headers and slightly different formatting, and three interactive plans.
We expected that some ways of presenting the plans would be more helpful than others for answering questions about the recipe, and we measured cognitive load (roughly, mental effort) in a few different ways. Surprisingly, one of the textual versions tended to do better!
So we ran a similar second experiment with much larger plans (125 steps), thinking that the advantage of interactive plans might come through when there is too much information to take in at once. These plans were about delivering items (televisions, screens, laptops, etc.) to different locations.
We also looked at aggregation: using a single label like ‘move the object’ to summarise three other actions: loading a truck, driving the truck, and unloading it.
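To make the idea concrete, here’s a minimal sketch of what aggregation could look like (this is my own illustration, not the project’s actual code; the step names and the rule table are invented for the example). A plan is a flat list of step names, and each rule maps a sub-sequence of steps to one summary label:

```python
# Illustrative sketch of plan aggregation (not the project's code).
# Each rule maps a sequence of low-level steps to one summary label.
AGGREGATION_RULES = {
    ("load truck", "drive truck", "unload truck"): "move the object",
}

def aggregate(plan, rules=AGGREGATION_RULES):
    """Collapse any known sub-sequence of steps into its summary label."""
    result = []
    i = 0
    while i < len(plan):
        for steps, label in rules.items():
            if tuple(plan[i:i + len(steps)]) == steps:
                result.append(label)   # replace the whole sub-sequence
                i += len(steps)
                break
        else:
            result.append(plan[i])     # no rule matched; keep the step
            i += 1
    return result
```

For example, `aggregate(["pick up TV", "load truck", "drive truck", "unload truck"])` collapses the last three steps into the single label `"move the object"`.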
We are currently in the process of analysing the results of this second experiment!
The Production of Referring Expressions workshop was held as part of the Cognitive Science Society’s conference this year in Berlin. Several of us from SAsSY were involved, and you can see the poster that Kees and I presented here.
One thing that struck me at the conference was that many researchers have recently begun using an old way of analysing eye movements in new ways. When people carry out cognitively demanding tasks, their pupils dilate and contract more frequently than usual. We can exploit this by designing experiments that measure pupil size, to gauge how difficult people find it to understand the output of our software. We will use this to help the software present information in more easily comprehensible ways. When our results are in, you’ll be able to read about them here on the blog as well as in scientific papers.
For a researcher, it’s always encouraging when people show interest in one’s research. In this respect, we’re happy to report that one of our team members has been invited to give a talk in Japan, to an audience of fellow scientists who are experts in legal reasoning and automated support tools. All expenses will be paid by our Japanese hosts. We hope this will be a nice opportunity to exchange ideas, and possibly to set up some future cooperation.
Automated computer-based reasoning, advanced as it currently is, can be notoriously difficult to explain to lay people. One of the things we aim to achieve in the SAsSY project is to come up with new methods for doing so. Here, we have recently achieved our first success: we made a connection between one of the currently most popular forms of computational argumentation and a type of reasoning that has been known since antiquity: the Socratic discussion. The idea of a Socratic discussion is to let a person commit to a particular statement, and then slowly guide them towards a contradiction, essentially by forcing them to make reasoning steps that seem to follow logically from their previous position. This is the type of discussion one also observes in more modern settings, like critical interviews and courtroom cross-examinations.

We managed to use this type of discussion as the basis for an interactive (human-computer, discussion-based) way of explaining a currently popular principle of argument-based automated reasoning. A paper on this has recently been conditionally accepted at one of the leading scientific journals in computational logic, although we still need to take into account some of the comments of the (anonymous) peer reviewer. We are happy with this result, and hope to report more in the near future…
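To give a flavour of the kind of computational argumentation involved (the post doesn’t give the formal details, so this is an illustrative sketch of my own, using Dung-style abstract argumentation rather than the paper’s actual method): arguments attack one another, and a cautious set of acceptable arguments, the so-called grounded extension, can be computed by repeatedly accepting every argument all of whose attackers are themselves counter-attacked by arguments already accepted:

```python
# Illustrative sketch: computing the grounded extension of an abstract
# argumentation framework (a set of arguments plus an attack relation).
# Not the paper's algorithm; just a standard fixpoint computation.

def grounded_extension(arguments, attacks):
    """Least fixpoint of the 'defended arguments' operator.

    arguments: a set of argument names
    attacks:   a set of (attacker, target) pairs
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # An argument is defended if each of its attackers is
        # attacked by some already-accepted argument.
        defended = {
            a for a in arguments
            if all(any((s, b) in attacks for s in accepted)
                   for b in attackers[a])
        }
        if defended == accepted:   # fixpoint reached
            return accepted
        accepted = defended
```

For instance, with arguments A, B, C where A attacks B and B attacks C, the unattacked argument A is accepted first, which defeats B and thereby reinstates C: the grounded extension is {A, C}. The Socratic twist in the project is to unfold such a computation as an interactive dialogue, rather than presenting the final set.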