GCAI 2019: Papers with Abstracts

Papers
Abstract. Reverse engineering queries from given data, as in the case of query-by-example and query definability, is an important problem with many applications that has recently gained attention in areas where symbolic artificial intelligence meets learning. In the presence of ontologies this problem was recently studied for Horn-ALC and Horn-ALCI. The main contribution of this paper is to take a first look at the case of DL-Lite and to identify cases where the addition of the ontology does not increase the worst-case complexity of the problem. Unfortunately, reverse engineering conjunctive queries is very hard even for plain databases, since the smallest witness query can be of exponential size in general. In the light of this, we outline some possible research directions for exploiting the ontology in order to obtain smaller witness queries.
Abstract. We present a method for answering ontology-mediated queries for DL-Lite extended with a concrete domain, where we allow concrete domain predicates to be used in the query as well. Our method is based on query rewriting, a well-known technique for ontology-based query answering (OBQA), where the knowledge provided by the ontology is compiled into the query so that the rewritten query can be evaluated directly over a database. This technique reduces the problem of query answering w.r.t. an ontology to query evaluation over a database instance. Specifically, we consider members of the DL-Lite family extended with unary and binary concrete domain predicates over the real numbers. While query rewriting for DL-Lite with these concrete domains has been investigated theoretically, the existing approaches are combined approaches in which the data is also processed, and they require the concrete domain values occurring in the data to be known in advance, which makes the procedure data-dependent. In contrast, we show how rewritings can be computed in a data-independent fashion.
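For illustration, the core idea of query rewriting in plain DL-Lite (without concrete domains) can be sketched on a toy example; the axiom and query below are hypothetical and not taken from the paper:

```latex
\text{TBox: } \mathit{GraduateStudent} \sqsubseteq \mathit{Student}
\qquad
q(x) \leftarrow \mathit{Student}(x)
\;\;\rightsquigarrow\;\;
\{\, q(x) \leftarrow \mathit{Student}(x),\;\; q(x) \leftarrow \mathit{GraduateStudent}(x) \,\}
```

The rewritten union of queries can then be evaluated directly over the database, without consulting the ontology again.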
Abstract. Temporal dataset evaluation is the problem of establishing to what extent a set of temporal data (histories) complies with a given temporal condition. Checking interval temporal logic formulas against a finite model has recently been proposed, and has proved successful, as a tool for solving this problem. In this paper, we address the problem of checking interval temporal logic specifications, supporting interval length constraints, against infinite, finitely representable models, and we show the applicability of the resulting procedure to the evaluation of incomplete temporal datasets viewed as finite prefixes of ultimately-periodic histories.
Abstract. In the context of computer vision, most of the traditional action recognition techniques assign a single label to a video after analyzing the whole video. We believe that understanding of the visual world is not limited to recognizing a specific action class or individual object instances, but also extends to how those objects interact in the scene, which implies recognizing events happening in the scene. In this paper we present an approach for identifying complex events in videos, starting from detection of objects and simple events using a state-of-the-art object detector (YOLO). We provide a logic-based representation of events by using a realization of the Event Calculus that allows us to define complex events in terms of logical rules. Axioms of the calculus are encoded in a logic program under Answer Set semantics in order to reason and formulate queries over the extracted events. The applicability of the framework is demonstrated over the scenario of recognizing different kinds of kick events in soccer videos.
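For readers unfamiliar with the Event Calculus, its characteristic inertia axiom has, in one standard formulation, roughly the following shape (the paper's concrete ASP encoding and event vocabulary may differ):

```latex
\mathit{holdsAt}(f, t_2) \;\leftarrow\; \mathit{initiatedAt}(f, t_1) \;\wedge\; t_1 < t_2 \;\wedge\; \neg\, \mathit{clipped}(t_1, f, t_2)
```

that is, a fluent f holds at time t_2 if it was initiated at an earlier time t_1 and has not been terminated in between.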
Abstract. The multi-agent path finding (MAPF) problem is a combinatorial search problem that aims at finding paths for multiple agents such that no two agents collide with each other. We study a dynamic variant of MAPF, called D-MAPF, which allows changes in the environment (e.g., existing obstacles may be removed or moved to another location, or new obstacles may be added) and/or changes in the team (e.g., some existing agents may leave and new agents may join the team) at different times. We introduce a new method to solve D-MAPF, using answer set programming.
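In the standard (static) MAPF setting, a solution assigns each agent i a path π_i, where π_i(t) is the vertex occupied at time step t, and collision-freeness amounts to the usual vertex- and swap-conflict conditions (the D-MAPF formulation in the paper additionally has to cope with changes over time):

```latex
\forall i \neq j,\; \forall t:\quad
\pi_i(t) \neq \pi_j(t)
\;\;\wedge\;\;
\neg\bigl(\pi_i(t+1) = \pi_j(t) \,\wedge\, \pi_j(t+1) = \pi_i(t)\bigr)
```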
Abstract. We study a family of operators (called ‘Tooth’ operators) that combine Description Logic concepts via weighted sums. These operators are intended to capture the notion of instances satisfying “enough” of the concept descriptions given. We examine two variants of these operators: the “knowledge-independent” one, which evaluates the concepts with respect to the current interpretation, and the “knowledge-dependent” one, which instead evaluates them with respect to a specified knowledge base, comparing and contrasting their properties. We furthermore discuss the connections between these operators and linear classification models.
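Roughly, in the knowledge-independent variant, an individual is an instance of such a weighted-sum concept whenever the weights of the argument concepts it satisfies add up to at least a threshold t; writing the operator generically as ⊞^t (our notation, not the paper's), this reads:

```latex
x \in \bigl(\boxplus^{t}(C_1 : w_1, \ldots, C_n : w_n)\bigr)^{\mathcal{I}}
\quad\text{iff}\quad
\sum_{i \,:\, x \in C_i^{\mathcal{I}}} w_i \;\geq\; t
```

The knowledge-dependent variant instead evaluates membership in the C_i relative to a knowledge base rather than a single interpretation; the precise definitions are given in the paper.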
Abstract. Projection is the problem of checking whether the execution of a given sequence of actions will achieve its goal starting from some initial state. In this paper, we study a setting where we combine a two-dimensional Description Logic of context (ConDL) with an action formalism. We choose a well-studied ConDL where both the possible states of a dynamical system itself (object level) and the different context-dependent views on this system state (context level) are organised in relational structures and can be described using usual DL constructs. To represent how such a system and its views evolve we introduce a suitable action formalism. It allows one to describe change on both levels. Furthermore, the observable changes on the object level due to an action execution can also be context-dependent. We show that the formalism is well-behaved in the sense that projection has the same complexity as standard reasoning tasks in the case where ALCO is the underlying DL.
Abstract. This paper deals with querying ontology-based knowledge bases equipped with non-monotonic rules through a case study within the framework of Cultural Heritage. It focuses on 3D underwater surveys of the Xlendi wreck, which is represented by an OWL2 knowledge base with a large dataset. The paper aims at improving the interactions between the archaeologists and the knowledge base by providing new queries that involve non-monotonic rules in order to perform qualitative spatial reasoning. To this end, the knowledge base, initially represented in OWL2-QL, is translated into an equivalent Answer Set Programming (ASP) program and is enriched with a set of non-monotonic ASP rules suitable for expressing defaults and exceptions. An ASP query answering approach is proposed and implemented. Furthermore, thanks to the increased expressiveness of non-monotonic rules, it supports spatial reasoning and query answering over spatial relations between artifacts, which is not possible with query answering languages such as SPARQL and SQWRL.
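A default-and-exception rule of the kind mentioned here typically has the following shape, where not is negation as failure (the predicates below are purely illustrative and not taken from the Xlendi knowledge base):

```latex
\mathit{onSeabed}(X) \;\leftarrow\; \mathit{artefact}(X),\;\; \mathit{not}\;\, \mathit{ab}(X)
```

that is, an artefact is assumed to lie on the seabed unless there is evidence that it is an abnormal case; such defeasible rules go beyond what monotonic query languages like SPARQL can express.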
Abstract. We summarise the definitions and some technical aspects of a family of description logics that introduce concept constructors (so-called tooth-operators) which, under various constraints, accumulate weights of concepts. We demonstrate how these operators can be fruitfully and elegantly applied to a number of cognitively motivated classification problems which are difficult to handle with the standard expressive means of description logics.
Abstract. In propositional model counting, also named #SAT, the search space needs to be explored exhaustively, in contrast to SAT, where the task is to determine whether a propositional formula is satisfiable. While state-of-the-art SAT solvers are based on non-chronological backtracking, it has also been shown that backtracking chronologically does not significantly degrade solver performance. Hence, investigating the combination of chronological backtracking with conflict-driven clause learning (CDCL) for #SAT is a natural next step. We present a calculus for #SAT combining chronological backtracking with CDCL and provide a formal proof of its correctness.
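The contrast between SAT and #SAT can be made concrete with a naive counter: unlike a SAT solver, it cannot stop at the first satisfying assignment but must sum over the whole search space. The sketch below is purely illustrative and does not reflect the CDCL-based calculus of the paper:

```python
# Minimal illustrative #SAT counter. Clauses are lists of non-zero ints
# (DIMACS-style literals); variables is the list of variable indices.

def count_models(clauses, variables):
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:                       # clause satisfied, drop it
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:                    # empty clause: conflict
                return None
            out.append(reduced)
        return out

    def rec(cls, vs):
        if cls is None:                        # conflict under this branch
            return 0
        if not vs:                             # all variables assigned: one model
            return 1
        v, rest = vs[0], vs[1:]
        # branch on both polarities and *sum* the counts (exhaustive search)
        return rec(simplify(cls, v), rest) + rec(simplify(cls, -v), rest)

    return rec(clauses, list(variables))

# (x1 or x2) and (not x1 or x2) has exactly 2 models over {x1, x2}
print(count_models([[1, 2], [-1, 2]], [1, 2]))  # prints 2
```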
Abstract. DLS-Forgetter is a reasoning tool that aims to compute restricted views of a knowledge base of first-order logic formulae via semantic forgetting. Semantic forgetting achieves this by eliminating predicate symbols in an equivalence-preserving way with respect to the remaining symbols. Forgetting has many applications such as information hiding, explanation generation and computing logical difference. DLS-Forgetter combines ideas from two Ackermann-based approaches: the DLS algorithm and a modal logic inference system inspired by the algorithm. The tool enhances the DLS algorithm by incorporating an ordering over the symbols in the forgetting signature. This allows more control over the forgetting process and the application of the elimination rules. The theory behind the tool is provided by a first-order Ackermann calculus, a first-order generalisation of the rules outlined by the modal logic inference system. The purpose of the tool is to provide the research community with an experimental tool to allow further research to be conducted in the area. This paper describes the DLS-Forgetter tool along with its underlying calculus, outlines the forgetting process used in the implementation, and presents the results of an empirical evaluation.
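Ackermann-based approaches build on Ackermann's Lemma, one of whose two symmetric cases states (recalled here from the second-order quantifier elimination literature as background, not as the paper's calculus): if the predicate symbol P does not occur in α and occurs only positively in β(P), then

```latex
\exists P\, \bigl[\forall \bar{x}\, \bigl(P(\bar{x}) \rightarrow \alpha(\bar{x})\bigr) \;\wedge\; \beta(P)\bigr]
\;\;\equiv\;\;
\beta[P := \alpha]
```

so P can be eliminated by substituting its definition; the first-order Ackermann calculus underlying the tool refines this idea with elimination rules and an ordering over the forgetting signature.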
Abstract. We recall the epistemic logic S5r for reasoning about knowledge under hypotheses and we investigate the extension of the logic with an operator for common knowledge. The logic S5r is equipped with a modal operator of necessity that can be parameterized with hypotheses representing background assumptions while the extension with the common knowledge operator enables us to describe and reason about common knowledge among agents with possibly different background assumptions. We present an axiomatization of the logic and prove Kripke completeness and decidability results.
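For orientation, in the standard multi-agent setting common knowledge among a group G is usually characterised by the fixpoint equation below, where E_G φ reads "every agent in G knows φ"; in the extension of S5r studied here, the individual knowledge operators are additionally parameterised with the agents' background hypotheses:

```latex
C_G\,\varphi \;\leftrightarrow\; E_G\bigl(\varphi \wedge C_G\,\varphi\bigr)
```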
Abstract. Active Learning is concerned with the question of how to identify the most useful samples for a Machine Learning algorithm to be trained with. When applied correctly, it can be a very powerful tool to counteract the immense data requirements of Artificial Neural Networks. However, we find that it is often applied without sufficient care and domain knowledge. As a consequence, unrealistic hopes are raised and transferring experimental results from one dataset to another becomes unnecessarily hard.
In this work we analyse the robustness of different Active Learning methods with respect to classifier capacity, exchangeability and type, as well as hyperparameters and falsely labelled data. Experiments reveal possible biases towards the architecture used for sample selection, resulting in suboptimal performance for other classifiers. We further propose the new "Sum of Squared Logits" method based on the Simpson diversity index and investigate the effect of using the confusion matrix for balancing in sample selection.
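As a rough sketch of a Simpson-index-style acquisition score (the paper's "Sum of Squared Logits" method may differ in detail, e.g. whether raw logits or softmax probabilities are squared and which extreme of the score is queried; all names below are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def simpson_scores(logits):
    """Sum of squared class probabilities per sample (Simpson diversity index).
    Close to 1 -> confident prediction; close to 1/num_classes -> uncertain."""
    p = softmax(logits)
    return (p ** 2).sum(axis=1)

def select_queries(logits, k):
    """Query the k samples the model is least certain about (lowest score)."""
    return np.argsort(simpson_scores(logits))[:k]

# Toy usage: 5 unlabelled samples, 3 classes, query the 2 most uncertain ones.
rng = np.random.default_rng(0)
unlabelled_logits = rng.normal(size=(5, 3))
print(select_queries(unlabelled_logits, k=2))
```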
Abstract. In this paper we investigate two hypotheses regarding the use of deep reinforcement learning in multiple tasks. The first hypothesis is driven by the question of whether a deep reinforcement learning algorithm, trained on two similar tasks, is able to outperform two single-task, individually trained algorithms by more efficiently learning a new, similar task that none of the three algorithms has encountered before. The second hypothesis is driven by the question of whether the same multi-task deep RL algorithm, trained on two similar tasks and augmented with elastic weight consolidation (EWC), is able to retain performance on the new task similar to that of a comparable algorithm without EWC, whilst being able to overcome catastrophic forgetting in the two previous tasks. We show that a multi-task Asynchronous Advantage Actor-Critic (GA3C) algorithm, trained on Space Invaders and Demon Attack, is in fact able to outperform two single-task GA3C versions, each trained individually on one of the tasks, when evaluated on a new, third task, namely Phoenix. We also show that, when training two trained multi-task GA3C algorithms on the third task, the one augmented with EWC is not only able to achieve similar performance on the new task, but is also capable of overcoming a substantial amount of catastrophic forgetting on the two previous tasks.
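Elastic weight consolidation, as introduced by Kirkpatrick et al., counteracts catastrophic forgetting by penalising changes to parameters that were important for the previous task; its standard regularised objective has the form below (the paper applies this idea in the multi-task GA3C setting, possibly with adaptations):

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\text{new}}(\theta) \;+\; \sum_i \frac{\lambda}{2}\, F_i\, \bigl(\theta_i - \theta^{*}_{i}\bigr)^2
```

where θ* are the parameters learned on the previous task(s), F_i is the corresponding diagonal Fisher information estimate, and λ trades off old and new performance.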