Despite the development of several ontology reasoning optimizations, traditional methods either do not scale well or cover only a subset of the OWL 2 language constructs. As an alternative, neuro-symbolic approaches are gaining significant attention. However, existing methods still cannot deal with very expressive ontology languages. To find and fix these performance bottlenecks in reasoners, we ideally need several real-world ontologies spanning a broad spectrum of size and expressivity; however, such ontologies are often unavailable. One potential reason ontology developers do not build ontologies that vary in size and expressivity is precisely the performance bottleneck of the reasoners. This challenge comprises three tasks that address this chicken-and-egg problem.
- Task-1 - Submit a real-world ontology that is challenging in terms of reasoning time or memory consumption. We will evaluate the submitted ontologies based on the time and memory consumed by a reasoning task such as classification.
- Task-2 - Submit a description logic reasoner that uses traditional techniques such as tableau algorithms and saturation rules. We will evaluate the performance and scalability of the submitted systems on the provided datasets, based on the time taken and memory consumed for the ontology classification task. This will provide insight into the progress in reasoner development since the last reasoner evaluation challenge (ORE 2015).
- Task-3 - Submit an ontology/RDFS reasoner that uses neuro-symbolic techniques for reasoning and optimization. We will evaluate two types of neuro-symbolic systems: (a) systems that approximate entailment reasoning to address the time-complexity problem, and (b) systems that predict missing but plausible axioms for ontology completion. We will evaluate the submitted systems on the test datasets based on the time taken, memory consumed, precision, and recall. (Please see the references for samples of work that falls in this category.)
This challenge will be collocated with the 20th International Semantic Web Conference.
We have a discussion group for the challenge where we share the latest news with the participants and discuss issues related to the evaluation rounds.
Schedule
See the full ISWC'21 program here
October 27, Session 4D (EDT (US): 10:20-11:20. CET (EU): 16:20-17:20. CST (China): 22:20-23:20):
- Challenge overview & announcement of awards - 10 min., live.
- A Reasoner-Challenging Ontology from the Microelectronics Domain (presented by Frank Wawrzik). 5 min., recorded.
- Reasoning Challenges on Gene Variants Data (presented by Asha Subramanian). 5 min., recorded.
- CaLiGraph Ontology (presented by Nicolas Heist). 5 min., recorded.
- DACOC3 (presented by Johannes Frey). 5 min., recorded.
- EmELvar (presented by Biswesh Mohapatra). 5 min., recorded.
- Query Answering and Scaling Extensions of Konclude (presented by Andreas Steigmiller). 5 min., recorded.
- QA and wrap-up - 12 min., live.
SemREC Poster Sessions:
SemREC will be present during the ISWC Posters & Demos/Social sessions. We will use wonder.me together with the other ISWC Semantic Web challenges.
- Oct 26, 18:50-19:20 CET
- Oct 27, 18:30-19:10 CET
- Oct 28, 15:00-15:30 CET
- A Reasoner-Challenging Ontology from the Microelectronics Domain
- Reasoning Challenges on Gene Variants Data
- CaLiGraph Ontology
Challenge Evaluations
The submitted ontologies and reasoners were re-evaluated by the challenge organisers. See the details of the evaluations here
The submitted reasoning systems can vary in their support for different OWL 2 profiles, subsets of description logics, and reasoning tasks (classification, realization, or consistency checking for traditional reasoners; entailment, class membership, class subsumption, or axiom completion for neuro-symbolic reasoners). Ideally, the datasets should cover all these cases. However, because this is the first edition of the challenge and the type of system submissions is uncertain, we provide only OWL 2 profile-specific datasets. Systems that partially support a profile can be evaluated on the corresponding part of the provided datasets.
Participants are requested to make a manuscript submission describing their entry.
For Task 1, we expect a detailed description of the ontology along with the analysis of the reasoning performance, the workarounds, if any, that were used to make the ontology less challenging (for example, dropping of a few axioms, redesigning the ontology, etc.), and the (potential) applications in which the ontology could be used.
For Tasks 2 and 3, we expect a detailed description of the system, including an evaluation of the system on the provided datasets.
For Task 2, a link to the code repository in the paper is sufficient. Please make sure there are clear instructions to build and run the code. In addition, and especially where it is not possible to share the code, it would be very helpful if the binary/executable were also made available (as supplementary material or as part of the code repository). We plan to evaluate the submitted systems on a Linux-based CPU server.
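To illustrate the kind of measurement involved, here is a minimal sketch of how classification time and memory could be recorded for a command-line reasoner on Linux. The reasoner command shown in the comment is hypothetical; this is not the organisers' actual evaluation harness.

```python
import resource
import subprocess
import time

def run_and_measure(cmd):
    """Run a command and return (elapsed wall-clock seconds, peak RSS in KiB).

    Peak RSS is read from getrusage(RUSAGE_CHILDREN), so on Linux it reflects
    the largest resident set of any child process spawned so far.
    """
    start = time.perf_counter()
    subprocess.run(cmd, check=True)          # raises if the reasoner fails
    elapsed = time.perf_counter() - start
    peak_rss = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, peak_rss

# Hypothetical invocation of a reasoner binary on a classification task:
# elapsed, rss = run_and_measure(["./my-reasoner", "--classify", "ontology.owl"])
```

Note that `ru_maxrss` is cumulative over all children of the measuring process, so each reasoner run should be measured from a fresh process for an accurate per-run figure.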
For Task 3, we provide an eval.py file for the subsumption task. It is provided only to give an idea of the kind of submission we expect. Participants are requested to make the changes mentioned in the file so that it evaluates their embeddings for the supported reasoning task (e.g., class subsumption, class membership). We will require the class embeddings of your model along with a README describing the changes made to the evaluation file and how to use it. We plan to evaluate the submitted systems on a Linux-based GPU server.
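As a rough idea of what such an embedding-based evaluation computes, the sketch below scores candidate subsumptions by cosine similarity between class embeddings and reports precision and recall against gold axioms. The scoring function, threshold, and data layout here are illustrative assumptions, not the actual contents of eval.py.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def precision_recall(embeddings, candidate_pairs, gold_pairs, threshold=0.8):
    """Predict C subsumed-by D whenever the similarity of (C, D) exceeds the
    threshold, then score the predictions against the gold subsumption axioms."""
    predicted = {
        (c, d) for c, d in candidate_pairs
        if cosine(embeddings[c], embeddings[d]) > threshold
    }
    gold = set(gold_pairs)
    tp = len(predicted & gold)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```

In practice, a ranking metric such as hits@k or mean rank over all candidate superclasses is also common for this kind of evaluation; the thresholded variant above is only the simplest formulation.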
Submissions can be either short papers (5 pages) or long papers (10-12 pages). All submissions must be in English and follow the 1-column CEUR-ART style (Overleaf template). The proceedings will be published as a volume of CEUR-WS. Submissions should be made as a PDF document on EasyChair.