Call for Empirical Studies and Experiments

The aim of this track is to offer researchers a platform to promote sound empirical evaluation and experimental designs, and to report on in-depth studies of frameworks, models, methods and implementations in the field of semantic technologies. A typical submission to this track would focus on the verification of an existing scientific artifact by applying it to a specific task, and would present the outcomes of an experiment with clearly specified data and methods. It would cover both quantitative aspects (e.g., measuring accuracy, precision, or execution times) and qualitative aspects (e.g., discussing the cases in which the evaluated artifact succeeds or fails at its task). Papers that propose new algorithms and architectures should continue to be submitted to the regular research track.

Papers in this track can fall into one of several categories:

  • Comparative studies that evaluate a spectrum of approaches to a particular problem and, through extensive experiments, provide a comprehensive empirical perspective on a given field. Example: Bettina Berendt, Laura Hollink, Vera Hollink, Markus Luczak-Rösch, Knud Möller, David Vallet: Usage analysis and the web of data. SIGIR Forum 45(1): 63-69 (2011).
  • Studies analyzing individual or social phenomena related to the Semantic Web, including investigations of existing Social Semantic Web systems and technologies, as well as of social or behavioural processes related to activities in the data management life cycle. Example: Ciro Cattuto, Dominik Benz, Andreas Hotho, Gerd Stumme: Semantic Grounding of Tag Relatedness in Social Bookmarking Systems. International Semantic Web Conference 2008: 615-631.
  • Analyses of experimental results that provide insights into the nature or characteristics of the studied phenomena, including negative results. Example: Heiner Stuckenschmidt, Michael Schuhmacher, Johannes Knopp, Christian Meilicke, Ansgar Scherp: On the Status of Experimental Research on the Semantic Web. International Semantic Web Conference 2013: 591-606.
  • Result verification, focusing on validating or refuting published results and, through the renewed analysis, helping to advance the state of the art. Example: Jens Dittrich, Lukas Blunschi, Marcos Antonio Vaz Salles: Dwarfs in the Rearview Mirror: How Big Are They Really? Proc. VLDB Endow. 1(2) (2008).
  • Benchmark design and its application to evaluating and comparing semantic technologies. Example: Michael Schmidt et al.: FedBench: A Benchmark Suite for Federated Semantic Data Query Processing. International Semantic Web Conference 2011: 585-600.
  • Development of new evaluation methodologies and their demonstration in an experimental study. Example: Natalya Fridman Noy, Paul R. Alexander, Rave Harpaz, Patricia L. Whetzel, Ray W. Fergerson, Mark A. Musen: Getting Lucky in Ontology Search: A Data-Driven Evaluation Framework for Ontology Ranking. International Semantic Web Conference (1) 2013: 444-459.

Review Criteria

Papers will be assessed according to the following criteria:

  • Reproducibility, including precise descriptions of the experimental conditions and datasets, and the ability to share the data with the general public. The experiment design must be described in enough detail that the results can be independently reproduced, counter-experiments can be designed, and subsequent work can build on this line of research. Public availability of the datasets used should be the norm, unless there are valid privacy or other concerns, which should be stated in the paper.
  • Applicability and generality of the devised methodologies, or of the experimental findings, to other areas and types of problems.
  • Validity of the evaluation methodology (e.g., dataset size, significance tests), including an honest discussion of threats to validity.

Topics of Interest

For the list of topics of interest, please see the Call for Research Papers.

Submission

  • Pre-submission of abstracts is a strict requirement. All papers and abstracts must be submitted electronically via the EasyChair conference submission system: https://www.easychair.org/conferences/?conf=iswc2015evaluation.
  • All research submissions must be in English and no longer than 16 pages. Papers that exceed this limit will be rejected without review. Submissions must be in PDF, formatted according to the Springer Lecture Notes in Computer Science (LNCS) style. For details on the LNCS style, see Springer’s Author Instructions. ISWC 2015 submissions are not anonymous.
  • Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference web site. Details will be provided at the time of acceptance.
  • Accepted papers will be distributed to conference attendees and also published by Springer in the printed conference proceedings, as part of the Lecture Notes in Computer Science series. At least one author of each accepted paper must register for the conference and present the paper there.

Prior Publication and Multiple Submissions

ISWC 2015 will not accept research papers that, at the time of submission, are under review for, have been published in, or have been accepted for publication by a journal or another conference. The conference organizers may share information on submissions with other venues to ensure that this rule is not violated.

Important Dates

  • Abstracts: April 23rd, 2015
  • Full Paper Submission: April 30th, 2015
  • Author Rebuttals: June 1st-3rd, 2015
  • Notifications: June 20th, 2015
  • Camera-Ready Versions: July 18th, 2015

All deadlines are Hawaii time.

Track Chairs

Elena Simperl
Markus Strohmaier

Program Committee

The list of program committee members can be found here.