The possibility of dishonest statements and dishonest behaviour is something we all have to be aware of. A particularly interesting question is how to let computers reason about this. This is less far-fetched than it may seem. After all, the Turing test (one of the cornerstones of Artificial Intelligence) is based on a computer successfully convincing a person that it is, in fact, human. Apart from that, truly intelligent systems may need ways of detecting, dealing with and reasoning about the kinds of dishonesty they may encounter in their interactions with humans.
In the recently published paper “A Formal Account of Dishonesty” we give a logical formalisation of different types of dishonesty, including lies, bullshit and half-truths. We study the logical properties of these notions, and provide a number of maxims that we feel every intelligent system facing the possibility of dishonesty should adhere to.
A technical paper explaining part of the reasoning model underlying the SAsSy demonstrator has been nominated for the best paper award at the 2014 Benelux Conference on Artificial Intelligence (BNAIC). The paper shows the correctness of some of the proof procedures applied in the demonstrator. Although the actual prize was awarded to another paper at the conference, we feel that being among the nominees acknowledges that we’re on the right track.
For a researcher, it’s always encouraging when people start to show interest in one’s research outcomes. In this respect, we’re happy to report that one of our team members has been invited to give a talk in Japan, for an audience of fellow scientists who are experts on legal reasoning and automated support tools. All expenses will be covered by our Japanese hosts. We hope this will be a nice opportunity to exchange ideas, and possibly to set up some future cooperation.
Automated computer-based reasoning, advanced as it currently is, can be notoriously difficult to explain to lay persons. One of the things we aim to achieve in the SAsSy project is to come up with new methods for doing so. Here, we have recently booked our first success: we managed to make a connection between one of the currently most popular forms of computational argumentation and a type of reasoning that has been known since antiquity: Socratic discussion. The idea of a Socratic discussion is to let a person commit to a particular statement, and then slowly guide him towards a contradiction, basically by forcing him to make reasoning steps that seem to follow logically from his previous position. It is the type of discussion one also observes in more modern settings, like critical interviews and court cross-examinations. We managed to use this type of discussion as the basis for an interactive (human-computer, discussion-based) way of explaining a currently popular principle of argument-based automated reasoning. A paper on this has recently been conditionally accepted by one of the leading scientific journals in computational logic, although we still need to take into account some of the comments of the (anonymous) peer reviewer. We are happy with this result, and hope to report on more results in the near future…
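To give a flavour of the argumentation machinery involved (a minimal sketch of our own, assuming Dung-style abstract argumentation under the grounded semantics; the example arguments are made up and not taken from the paper), the set of ultimately defensible arguments can be computed as a least fixpoint:

```python
# Our own illustrative sketch of Dung-style abstract argumentation.
# An argumentation framework is a set of arguments plus a set of
# (attacker, attacked) pairs; this is an assumption for illustration,
# not the formalism of the accepted paper.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension by iterating the characteristic
    function: start from the empty set and repeatedly add every argument
    all of whose attackers are themselves attacked by the current set."""
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Tiny example: A attacks B, and B attacks C.
args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))
```

Here A is unattacked, so it is accepted immediately; A then defends C against B, giving the grounded extension {A, C}, while B stays out.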
Over the years, a number of mathematical theories have been proposed on how to reason in the presence of general principles, guidelines and rules of thumb that can potentially conflict with each other. We have also seen a fair number of implementations, where these theories have formed the basis for computer software for decision support and similar applications. One particular challenge, however, is for these systems to explain the correctness of their analysis to the end user. After all, the underlying theory is highly mathematical, and not necessarily clear to those without a background in this kind of maths. My job is therefore to transform these theories into something normal people can understand. At the moment, I’m working on explanations based on interactive discussions between man and machine that closely resemble the kind of discussions people have with each other.
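As a toy illustration of what such a man-machine discussion might look like (our own hypothetical sketch, not the actual protocol under development; the claim names are invented), the machine can defend a claim by answering every challenge with a counter-argument, much as a person would in a discussion:

```python
# A toy dialogue sketch (our own illustration): the machine's claim
# survives if every challenge against it can be met with a fresh
# counter-argument, which must in turn survive its own challenges.

def machine_defends(claim, attacks, used=frozenset()):
    """Return True if every challenger of `claim` can be answered with a
    counter-argument not already used earlier in the discussion."""
    challenges = [b for (b, target) in attacks if target == claim]
    return all(
        any(machine_defends(counter, attacks, used | {claim})
            for (counter, target) in attacks
            if target == challenge and counter not in used)
        for challenge in challenges
    )

# Example: "safe" is challenged by "risk", which is refuted by "audit".
atts = {("risk", "safe"), ("audit", "risk")}
print(machine_defends("safe", atts))   # the challenge is answered
print(machine_defends("risk", atts))   # "audit" stands unanswered
```

The point of such a dialogue is that the user only ever sees one challenge and one answer at a time, rather than the full mathematical analysis at once.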