Research / Academic Track

Fourth International Semantic Web Conference (ISWC 2005)

November 6–10, 2005
Radisson SAS Hotel
Galway, Ireland

http://iswc2005.semanticweb.org

List of Accepted Papers

Title
A Bayesian Network Approach to Ontology Mapping
Author(s)
Rong Pan, Zhongli Ding, Yang Yu, Yun Peng
Abstract
This paper presents our ongoing effort to develop a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty in the semantic web. In this approach, the source and target ontologies are first translated into Bayesian networks (BNs); the concept mapping between the two ontologies is treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPTs) during translation, and for measuring semantic similarity during mapping, are learned using text classification techniques in which each concept in an ontology is associated with a set of semantically relevant text documents obtained by ontology-guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on real-world ontologies.
 
Title
A Framework for Handling Inconsistency in Changing Ontologies
Author(s)
Peter Haase, Frank van Harmelen, Zhisheng Huang, Heiner Stuckenschmidt, York Sure
Abstract
One of the major problems of large scale, distributed and evolving ontologies is the potential introduction of inconsistencies. In this paper we survey four different approaches to handling inconsistency in DL-based ontologies: consistent ontology evolution, repairing inconsistencies, reasoning in the presence of inconsistencies and multi-version reasoning. We present a common formal basis for all of them, and use this common basis to compare these approaches. We discuss the different requirements for each of these methods, the conditions under which each of them is applicable, the knowledge requirements of the various methods, and the different usage scenarios to which they would apply.
 
Title
A General Diagnosis Method for Ontologies
Author(s)
Gerhard Friedrich, Kostyantyn Shchekotykhin
Abstract
The effective debugging of ontologies is an important prerequisite for their successful application and impact on the semantic web. The heart of this debugging process is the diagnosis of faulty knowledge bases. In this paper we define general concepts for the diagnosis of ontologies. Based on these concepts, we provide correct and complete algorithms for the computation of minimal diagnoses of knowledge bases. These concepts and algorithms are broadly applicable since they are independent of a particular variant of an underlying logic (with monotonic semantics) and independent of a particular reasoning system. The practical feasibility of our method is shown by extensive test evaluations.
 
Title
A Large Scale Taxonomy Mapping Evaluation
Author(s)
Paolo Avesani, Fausto Giunchiglia, Mikalai Yatskevich
Abstract
Matching hierarchical structures, like taxonomies or web directories, is the premise for enabling interoperability among heterogeneous data organizations. While the number of new matching solutions is increasing, the evaluation issue is still open. This work addresses the problem of comparing pairwise matching solutions. A methodology is proposed to overcome the issue of scalability. A large-scale dataset is developed based on a real-world case study, namely the web directories of Google, Looksmart and Yahoo!. Finally, an empirical evaluation is performed which compares the most representative solutions for taxonomy matching. We argue that the proposed dataset can play a key role in supporting the empirical analysis of research efforts in the area of taxonomy matching.
 
Title
A Little Semantic Web Goes a Long Way in Biology
Author(s)
Katy Wolstencroft, Andy Brass, Ian Horrocks, Phillip Lord, Ulrike Sattler, Robert Stevens, Daniele Turi
Abstract
We show how state of the art Semantic Web technology can be used in e-Science, in particular to automate the classification of proteins in biology. We show that the resulting classification was of comparable quality to one performed by a human expert, and how investigations using the classified data even resulted in the discovery of significant information that had previously been overlooked, leading to the identification of a possible drug-target.
 
Title
A Method to Combine Linguistic Ontology-Mapping Techniques
Author(s)
Willem Robert van Hage
Abstract
We discuss four linguistic ontology-mapping techniques and evaluate them on real-life ontologies in the domain of food. Furthermore, we propose a method to combine ontology-mapping techniques with high precision and recall to reduce the necessary amount of manual labor and computation.
 
Title
A Strategy for Automated Meaning Negotiation in Distributed Information Retrieval
Author(s)
Vadim Ermolayev, Natalya Keberle, Wolf-Ekkehard Matzke, Vladimir Vladimirov
Abstract
The paper reports on a formal framework for designing strategies for multi-issue, non-symmetric meaning negotiations among software agents in a distributed information retrieval system. The advancements of the framework are the following. A resulting strategy compares the contexts of two background domain theories not concept by concept, but as whole contexts (conceptual graphs), accounting for the relationships among concepts, the properties, and the constraints over properties. It contains mechanisms for measuring contextual similarity through assessing propositional substitutions, and for providing argumentation through generating extra contexts. It uses presuppositions for choosing the best similarity hypotheses and for making the mutual concession to common sense monotonic. It provides the means to evaluate the possible eagerness to concede through semantic commitments and the related notions of knowledgeability and degree of reputation.
 
Title
A String Metric for Ontology Alignment
Author(s)
Giorgos Stoilos, Giorgos Stamou, Stefanos Kollias
Abstract
Ontologies are today a key part of every knowledge-based system. But the variety of ways that a domain can be conceptualized results in the creation of different ontologies with contradicting or overlapping parts. For this reason, ontologies need to be brought into mutual agreement (aligned). One important method for ontology alignment is the comparison of class and property names of ontologies using string-distance metrics. Quite a lot of such metrics exist in the literature today, but all of them were initially developed for different applications and fields, resulting in poor performance when applied in this new domain. In the current paper we present a new string metric for the comparison of names which performs better in the process of ontology alignment, as well as in many other field-matching problems.
 
Title
A Template-based Markup Tool for Semantic Web Content
Author(s)
Brian Kettler, James Starz, William Miller, Peter Haglich
Abstract
The Intelligence Community, among others, is increasingly using document metadata to improve document search and discovery on intranets and extranets. Document markup is still often incomplete, inconsistent, incorrect, and limited to keywords via HTML and XML tags. OWL promises to bring semantics to this markup to improve its machine understandability. The lack of a usable markup tool is becoming a barrier to the more widespread use of OWL markup in operational settings. This paper describes some of our attempts at building markup tools, lessons learned, and our latest markup tool, the Semantic Markup Tool (SMT). SMT uses automatic text extractors and templates to hide ontological complexity from end users and helps them quickly specify events and relationships of interest in the document. SMT automatically generates correct and consistent OWL markup, though this comes at a cost to expressivity. We are evaluating SMT on several pilot semantic web efforts.
 
Title
An ontological framework for dynamic coordination
Author(s)
Valentina Tamma, Chris van Aart, Thierry Moyaux, Shamima Paurobally, Ben Lithgow Smith, Michael Wooldridge
Abstract
Coordination is the process of managing the possible interactions between activities and processes; a mechanism to handle such interactions is known as a coordination regime. A successful coordination regime will prevent negative interactions from occurring (e.g., by preventing two processes from simultaneously accessing a non-shareable resource), and wherever possible will facilitate positive interactions (e.g., by ensuring that activities are not needlessly duplicated). We start from the premise that effective coordination mechanisms require the sharing of knowledge about activities, resources and their properties, and hence that, in a heterogeneous environment, an ontological approach to coordination is appropriate. After surveying recent work on dynamic coordination, we describe an ontology for coordination that we have developed with the goal of coordinating semantic web processes. We then present an implementation of our ideas, which serves as a proof of concept for how this ontology can be used for dynamic coordination. We conclude with a summary of the presented work, illustrate its relation to the Semantic Web, and provide insights into future extensions.
 
Title
Automatic Evaluation of Ontologies (AEON)
Author(s)
Johanna Voelker, Denny Vrandecic, York Sure
Abstract
OntoClean is a unique approach to the formal evaluation of ontologies, as it analyses the intensional content of concepts. Although it is well documented and explained in numerous publications, and its importance is widely acknowledged, it is nevertheless used rather infrequently due to the high cost of applying OntoClean, especially for tagging concepts with the correct meta-properties. In order to facilitate the use of OntoClean and to enable its proper evaluation in real-world cases, we provide AEON, a tool which automatically tags concepts with appropriate OntoClean meta-properties. The implementation can easily be expanded to check concepts for other abstract meta-properties, thus providing for the first time tool support that enables intensional ontology evaluation for concepts. Our main idea is to use the web as an embodiment of objective world knowledge: we search for patterns indicating concepts' meta-properties. We thus obtain an automatic tagging of the ontology, reducing costs tremendously. Moreover, AEON lowers the risk of subjective taggings. As part of the evaluation we report our experiences from creating a middle-sized OntoClean-tagged reference ontology.
 
Title
Benchmarking Database Representations of RDF/S Stores
Author(s)
Yannis Theoharis, Vassilis Christophides, Grigoris Karvounarakis
Abstract
In this paper we benchmark three popular database representations of RDF/S schemata and data: (a) a schema-aware representation (i.e., one table per RDF/S class or property) with explicit (ISA) or implicit (NOISA) storage of subsumption relationships; (b) a schema-oblivious representation (i.e., a unique table with triples of the form (subject, predicate, object)), using (ID) or not using (URI) identifiers to represent a resource URI in the subject of each triple; and (c) a hybrid of the schema-aware and schema-oblivious representations (i.e., one table per RDF/S meta-class, also distinguishing the range type of properties). Furthermore, we benchmark two common approaches for evaluating taxonomic queries: either on-the-fly (ISA, NOISA, Hybrid), or by precomputing the transitive closure of subsumption relationships (Mat View, URI, ID). The main conclusion drawn from our experiments is that the evaluation of taxonomic queries is most efficient over RDF/S stores utilizing the Hybrid and Mat View representations. Of the rest, the schema-aware representations (ISA, NOISA) exhibit overall better performance than URI, which is superior to ID, which exhibits the worst overall performance.
 
Title
Bootstrapping Ontology Alignment Methods with APFEL
Author(s)
Marc Ehrig, Steffen Staab, York Sure
Abstract
Ontology alignment is a prerequisite for interoperation between different ontologies, and many alignment strategies have been proposed to facilitate the alignment task by (semi-)automatic means. Due to the complexity of the alignment task, manually defined methods for (semi-)automatic alignment rarely constitute an optimal configuration of the substrategies from which they are built. In fact, scrutinizing current ontology alignment methods, one may recognize that most are not optimized for given ontologies. A few include machine learning for automating the task, but their optimization by machine learning means is mostly restricted to the extensional definition of ontology concepts. With APFEL (Alignment Process Feature Estimation and Learning) we present a machine learning approach that explores the user validation of initial alignments for optimizing alignment methods. The methods are based on extensional and intensional ontology definitions. Core to APFEL is the idea of a generic alignment process, the steps of which may be represented explicitly. APFEL then generates new hypotheses for what might be useful features and similarity assessments, and weights them by machine learning approaches. APFEL compares favorably in our experiments to competing approaches.
 
Title
BRAHMS: A workBench RDF store And High performance Memory System for Semantic Association Discovery
Author(s)
Maciej Janik, Krzysztof Kochut
Abstract
Discovery of semantic associations in Semantic Web ontologies is an important task in various analytical activities. Several query languages and storage systems have been designed and implemented for storage and retrieval of information in RDF ontologies. However, they are inadequate for semantic association discovery. In this paper we present the design and implementation of BRAHMS, an efficient RDF storage system, specifically designed to support fast semantic association discovery in large RDF bases. We present memory usage and timing results of several tests performed with BRAHMS and compare them to similar tests performed using Jena, Sesame, and Redland, three of the well-known RDF storage systems. Our results show that BRAHMS handles basic association discovery well, while the RDF query languages and even the low-level APIs in the other three tested systems are not suitable for the implementation of semantic association discovery algorithms.
 
Title
Choreography in IRS-III – Coping with Heterogeneous Interaction Patterns in Web Services
Author(s)
John Domingue, Stefania Galizia, Liliana Cabral
Abstract
In this paper we describe how we handle heterogeneity in web service interaction through the choreography mechanism that we have developed for IRS-III. IRS-III is a framework and platform for developing semantic web services which utilizes the WSMO ontology. The overall design of our choreography framework is based on: the use of state, differentiating between communication direction and which actor has the initiative, having representations which can be executed, a formal semantics, and the ability to suspend communication. Our framework has a full implementation which we illustrate through an example application.
 
Title
Combining RDF and OWL with Rules: Semantics, Decidability, Complexity
Author(s)
Herman ter Horst
Abstract
This paper extends the model theory of RDF with rules, putting emphasis on integration with OWL and decidability of entailment. We start from an abstract syntax that views a rule as a pair of rule graphs, which generalize RDF graphs by also allowing rule variables in subject, predicate, and object positions. In the model theory we make no restrictive assumptions, thereby extending the metamodeling capabilities of RDFS; in particular, classes and properties can be viewed as instances. We integrate RDFS as well as a decidable, intensional variant of OWL which weakens OWL Full and for which a complete set of simple entailment rules is available. Almost all examples in the DAML set of test rules are covered by our approach.

For a set of rules R, we define a general notion of R-entailment. Extending earlier results on entailment for RDFS and OWL, we prove a general completeness result for R-entailment. We show that a restricted form of application of rules that introduce blank nodes is sufficient for determining R-entailment. Under the assumption that rules do not introduce blank nodes, we prove that R-entailment is decidable and in NP, while R-consistency is in P. Under the additional assumption that there is no blank node in the target RDF graph, we show that R-entailment is in P.

 
Title
Constructing Complex Semantic Mappings between XML Data and Ontologies
Author(s)
Yuan An, Alex Borgida, John Mylopoulos
Abstract
Much data is published on the Web in XML format satisfying schemas, and to make the Semantic Web a reality, such data needs to be interpreted with respect to ontologies. Interpretation is achieved through a semantic mapping between the XML schema and the ontology. We present work on the heuristic construction of such complex semantic mappings between XML schemas and ontologies, given an initial set of simple correspondences from XML schema attributes to datatype properties in the ontology. To accomplish this, we first offer a mapping formalism to capture the semantics of XML schemas. Second, we present our heuristic mapping construction algorithm. Finally, we show through an empirical study that considerable effort can be saved when constructing complex mappings by using our prototype tool.
 
Title
Containment and Minimization of RDF/S Query Patterns
Author(s)
Giorgos Serfiotis, Ioanna Koffina, Vassilis Christophides, Val Tannen
Abstract
Semantic query optimization (SQO) has proved quite useful in various applications (e.g., data integration, graphical query generators, caching, etc.) and has been extensively studied for relational, object, and XML databases. However, less attention has been devoted to SQO in the context of the Semantic Web. In this paper we present sound and complete algorithms for the containment and minimization of RDF/S query patterns. More precisely, we consider two RDF/S query fragments supporting pattern matching at the data level as well as at the schema level. To this end we advocate a logic framework for capturing the RDF/S data model and semantics, and we employ well-established techniques proposed in the relational context, in particular the chase and backchase algorithms.
 
Title
Debugging OWL-DL Ontologies: A Heuristic Approach
Author(s)
Hai Wang, Matthew Horridge, Alan Rector, Nick Drummond, Julian Seidenberg
Abstract
After becoming a W3C Recommendation, OWL is becoming increasingly widely accepted and used. However, most people still find it difficult to create and use OWL ontologies. One major difficulty is ``debugging'' the ontologies - discovering why a reasoner has inferred that a class is ``unsatisfiable'' (inconsistent). Even for people who do understand OWL and the logical meaning of the underlying description logic, discovering why concepts are unsatisfiable can be difficult. Most modern tableaux reasoners do not provide any explanation as to why classes are unsatisfiable. This paper presents a ``black box'' heuristic approach based on identifying common errors and inferences.
 
Title
Decentralized Case-Based Reasoning for the Semantic Web
Author(s)
Mathieu d'Aquin, Jean Lieber, Amedeo Napoli
Abstract
Decentralized case-based reasoning (DzCBR) is a reasoning framework that addresses the problem of adaptive reasoning in a multi-ontology environment. It is a case-based reasoning (CBR) approach which relies on contextualized ontologies in the C-OWL formalism for the representation of domain knowledge and adaptation knowledge. A context in C-OWL is used to represent a particular viewpoint, containing the knowledge needed to solve a particular local problem. Semantic relations between contexts and the associated reasoning mechanisms allow the CBR process in a particular viewpoint to reuse and share information about the problem and the already found solutions in the other viewpoints.
 
Title
Finding and Ranking Knowledge on the Semantic Web
Author(s)
Li Ding, Rong Pan, Tim Finin, Anupam Joshi, Yun Peng, Pranam Kolari
Abstract
Swoogle is a system that helps knowledge engineers and software agents find knowledge on the web encoded in the semantic web languages RDF and OWL. Building on the search mechanisms provided in the previous version, we add two new features, namely a semantic web navigation model and mechanisms for ranking the semantic web at various granularities. Although the semantic web is materialized on the Web, it is hard to navigate within it, since few explicit "hyperlinks" are available besides a URIref's namespace or owl:imports semantics. Hence we propose a navigation model that characterizes users' navigational behavior within the semantic web (e.g., surfing from an ontology to a class C defined in it, and then to the RDF documents that populate C or to other resources that help define this class) and implement it in Swoogle's "Ontology Dictionary". Based on this navigation model and the metadata collected in Swoogle, we developed algorithms for ranking objects in the semantic web at various granularities, including semantic web documents (SWDs), terms (e.g., RDF classes or properties), and facts (i.e., RDF triples). Ranking SWDs, inspired by Google's PageRank, emulates a "rational" agent acquiring knowledge on the semantic web using the hyperlinks provided by our semantic web navigation model at the document level. Ranking individual terms extends ranking to a finer granularity. For example, from the hundreds of RDF terms denoting the concept of a person, the question "which are most widely used?" is answered by term ranking. Finally, we introduce the notion of ranking facts (e.g., RDF triples), such as the rdfs:domain relation between a class and a property, using provenance-based heuristics. These ranking mechanisms, if used, could aid the emergence of consensus ontologies. Experiments show that the Swoogle search engine using "semantic ranking" outperforms Google in evaluating the importance of ontologies.
 
Title
Graph-based inferences in a Semantic Web Server for the Cartography of Competencies in a Telecom Valley
Author(s)
Fabien Gandon, Olivier Corby, Alain Giboin, Nicolas Gronnier, Cecile Guigard
Abstract
We introduce an experience in building a public semantic web server maintaining annotations about the actors of a Telecom Valley. We then focus on an example of inference used in building one type of cartography of the competences of the economic actors of the Telecom Valley. We detail how this inference exploits the graph model of the semantic web using ontology-based metrics and conceptual clustering. We prove the characteristics of these metrics and inferences, and we give the associated interpretations.
 
Title
Guidelines for Evaluating the Performance of Ontology Management APIs
Author(s)
Raul Garcia-Castro, Asuncion Gomez-Perez
Abstract
Ontology tool performance and scalability are critical to both the growth of the Semantic Web and the establishment of these tools in industry. In this paper, we use the benchmarking methodology developed in the Knowledge Web Network of Excellence for improving the performance and the scalability of ontology development tools. We focus on the definition of a general infrastructure for evaluating the performance of these tools' ontology management APIs in terms of their execution efficiency, and present the results of applying the methodology to evaluate the API of the WebODE ontology engineering workbench.
 
Title
Information Modeling for End to End Composition of Semantic Web Services
Author(s)
Arun Kumar, Biplav Srivastava, Sumit Mittal
Abstract
One of the main goals of the semantic web services effort is to enable automated composition of web services. An end-to-end view of the service composition process involves automation of composite service creation, development of executable workflows, and deployment on an execution environment. However, the main focus in the literature has been on the initial part: formally representing web service capabilities and reasoning about their composition using AI techniques. Based upon our experience in building an end-to-end composition tool for application integration in an industrial setting, we bring out issues that have an impact on the information modeling aspects of the composition process. In this paper, we present pragmatic solutions for problems relating to the scalability and manageability of service descriptions and to data flow construction for operationalizing the composed services.
 
Title
Introducing autonomic behaviour in semantic web agents
Author(s)
Valentina Tamma, Ian Blacoe, Ben Lithgow Smith, Michael Wooldridge
Abstract
This paper presents SERSE -- SEmantic Routing SystEm -- a distributed multi-agent system composed of specialised agents that provides robust and efficient gathering and aggregation of digital content from diverse resources. The agents composing SERSE use ontological descriptions to search and retrieve semantically annotated knowledge sources, maintaining a semantic index of the instances of the annotation ontology. Efficient retrieval is made possible by the semantic routing mechanism, which identifies the agent indexing the resources requested by a user query without having to maintain a central index, and by reducing the number of messages broadcast to the system. The system is also capable of exhibiting autonomic behaviour, characterised by self-configuration and self-healing capabilities aimed at permitting the system to manage the failure of one of its agents and ensure continuous functioning.
 
Title
On Applying the AGM Theory to DLs and OWL
Author(s)
Giorgos Flouris, Dimitris Plexousakis, Grigoris Antoniou
Abstract
It is generally acknowledged that any Knowledge Base (KB) should be able to adapt itself to new information received. This problem has been extensively studied in the field of belief change, the dominating approach being the AGM theory. This theory set the standard for determining the rationality of a given belief change mechanism but was placed in a certain context which makes it inapplicable to logics used in the Semantic Web, such as Description Logics (DLs) and OWL. We believe the Semantic Web community would benefit from the application of the AGM theory to such logics. This paper is a preliminary study towards the feasibility of this application. Our approach raises interesting theoretical challenges and has an important practical impact too, given the central role that DLs and OWL play in the Semantic Web.
 
Title
On Logical Consequence for Collections of OWL Documents
Author(s)
Yuanbo Guo, Jeff Heflin
Abstract
In this paper, we investigate the (in)dependence among OWL documents with respect to logical consequence when they are combined, in particular the inference of concept and role assertions about individuals. On the one hand, we present a systematic approach to identifying those documents that affect the inference of a given fact. On the other hand, we consider ways for fast detection of independence. First, we demonstrate several special cases in which two documents are independent of each other. Second, we introduce an algorithm for checking independence in the general case. In addition, we describe two applications in which the above results have allowed us to develop novel approaches to overcoming some difficulties with reasoning on large-scale OWL data. Both applications demonstrate the usefulness of this work for improving the scalability of a practical Semantic Web system that relies on reasoning about individuals.
 
Title
On Partial Encryption of RDF-Graphs
Author(s)
Mark Giereth
Abstract
In this paper, we propose a new method to partially encrypt RDF-graphs. The idea is to encrypt selected fragments of an RDF-graph for a set of recipients while all other parts remain publicly readable. The result of the encryption is an RDF-compliant, self-describing graph containing both encrypted data and plaintext. For the representation of encrypted data and encryption metadata, the XML-Encryption and XML-Signature standards are used. For fragment selection and the specification of encryption policies, the RDQL query language is adapted. The proposed method allows fine-grained access control of data published on the Semantic Web and could be the basis for new business models.
 
Title
On the Properties of Metamodeling in OWL
Author(s)
Boris Motik
Abstract
A common practice in conceptual modeling is to divide the model into an intensional and an extensional part. Although very intuitive, this approach falls short in many complex domains, where the borderline between the two is not clear-cut. Therefore, OWL-Full, the most expressive of the Semantic Web ontology languages, allows mixing the intensional and the extensional model by a feature we refer to as metamodeling. Until now, the computational properties of metamodeling in OWL-Full have been unknown. Here, we show that the semantics of metamodeling adopted in OWL-Full leads to undecidability of basic inference problems. We analyze this result and show that it is due to free mixing of logical and metalogical symbols. Moreover, we propose two alternatives, the contextual and the HiLog semantics, and show that SHIQ -- a description logic underlying OWL -- under either semantics is decidable. Finally, we discuss the expressivity of these semantics.
 
Title
Ontologies are us: A unified model of social networks and semantics
Author(s)
Peter Mika
Abstract
We extend the traditional bipartite model of ontologies with the social dimension, leading to a tripartite model of actors, concepts and instances. We demonstrate the application of this representation by showing how community-based semantics emerges from this model through a process of graph transformation. We illustrate ontology emergence by two case studies, an analysis of a large scale folksonomy system and a novel method for the extraction of community-based ontologies from Web pages.
 
Title
Ontology Change Detection using a Version Log
Author(s)
Peter Plessers, Olga De Troyer
Abstract
Alterations in a domain, changes of user requirements, or corrections of design flaws, all may induce changes to the corresponding ontologies of the Semantic Web. In this article, we propose a new ontology evolution approach that combines a top-down and a bottom-up approach. This means that the manual request for changes (top-down) by the ontology engineer is complemented with an automatic change detection mechanism (bottom-up). The approach is based on keeping track of the different versions of ontology concepts throughout their lifetime (called virtual versions). In this way, changes can be defined in terms of these virtual versions.
 
Title
Ontology Design Patterns for Semantic Web Content
Author(s)
Aldo Gangemi
Abstract
The lifecycle of ontologies over the Semantic Web involves several different techniques. In this paper I propose a frame for introducing ontology design patterns that facilitate or improve those techniques. Some distinctions are drawn between kinds of ontology design patterns. Some content oriented patterns are presented in order to illustrate their utility at different degrees of abstraction, and how they can be specialized or composed. The proposed frame and the initial set of patterns are designed in order to function as a pipeline connecting domain modelling, user requirements, and ontology-driven tasks/queries to be executed.
 
Title
Ontology Mapping Discovery with Uncertainty
Author(s)
Prasenjit Mitra, Natasha Noy, Anuj R. Jaiswal
Abstract
Resolving semantic heterogeneity among information sources is a central problem in information interoperation, information integration, and information sharing among websites. Ontologies express the semantics of the terminology used in these websites. Semantic heterogeneity can be resolved by mapping ontologies from diverse sources. Mapping large ontologies manually is almost impossible and results in a number of errors of omission and commission. Therefore, automated ontology mapping algorithms are a must. However, most existing ontology mapping tools do not provide exact mappings. Rather, there is usually some degree of uncertainty. We describe a framework to improve existing ontology mappings using a Bayesian Network. Omen, an Ontology Mapping ENhancer, uses a set of meta-rules that capture the influence of the ontology structure and the semantics of ontology relations, and matches nodes that are neighbors of already matched nodes in the two ontologies. We have implemented a prototype ontology matcher using probabilistic methods that can enhance existing matches between ontology concepts. Experiments demonstrate that Omen successfully identifies and significantly enhances ontology mappings.
 
Title
Preferential Reasoning on a Web of Trust
Author(s)
Stijn Heymans, Davy Van Nieuwenborgh, Dirk Vermeir
Abstract
We introduce a framework, based on logic programming, for preferential reasoning with agents on the Semantic Web. Initially, we encode the knowledge of an agent as a logic program equipped with call literals. Such call literals enable the agent to pose yes/no queries to arbitrary knowledge sources on the Semantic Web, without conditions on, e.g., the representation language of those sources. As conflicts may arise from reasoning with different knowledge sources, we use the extended answer set semantics, which can provide different strategies for solving those conflicts. Allowing, in addition, for an agent to express its preference for the satisfaction of certain rules over others, we can then induce a preference order on those strategies. However, since it is natural for an agent to believe its own knowledge (encoded in the program) but consider some sources more reliable than others, it can alternatively express preferences on call literals. Finally, we show how an agent can learn preferences on call literals if it is part of a web of trusted agents.
 
Title
Provenance-based Validation of E-Science Experiments
Author(s)
Sylvia C Wong, Simon Miles, Weijian Fang, Paul Groth, Luc Moreau
Abstract
E-science experiments typically involve many distributed services maintained by different organisations. After an experiment has been executed, it is useful for a scientist to verify that the execution was performed correctly or is compatible with some existing experimental criteria or standards. Scientists may also want to review and verify experiments performed by their colleagues. There is no existing framework for validating such experiments. Users therefore have to rely on error checking performed by the services, or adopt other ad hoc methods. This paper introduces a platform-independent framework for validating workflow executions. The validation relies on reasoning over the documented provenance of experiment results and semantic descriptions of services advertised in a registry. This validation process ensures experiments are performed correctly, and thus results generated are meaningful. The framework is tested in a bioinformatics application that performs protein compressibility analysis.
 
Title
Piggy Bank: Experience The Semantic Web Within Your Web Browser
Author(s)
David Huynh, Stefano Mazzocchi, David Karger
Abstract
The Semantic Web project envisions a new Web wherein information is offered free of presentation, allowing more effective exchange and mixing across web sites and across web pages. But without substantial Semantic Web content, few tools will be written to consume it; without many such tools, there is little appeal to publish Semantic Web content.

To break this chicken-and-egg problem, thus enabling more flexible information access, we have created a web browser extension called Piggy Bank that extracts Semantic Web content from Web content as users browse the Web. Wherever Semantic Web content is not available, Piggy Bank can invoke screenscrapers to re-structure information within web pages into Semantic Web format. Through the use of Semantic Web technologies, Piggy Bank provides direct, immediate benefits to users in their use of the existing Web. Thus, the existence of even just a few Semantic Web-enabled sites or a few scrapers already benefits users. Piggy Bank thereby offers an easy, incremental upgrade path to users without requiring a wholesale adoption of the Semantic Web's vision.

To further improve this Semantic Web experience, we have created Semantic Bank, a web server application that lets Piggy Bank users share the Semantic Web information they have collected, enabling collaborative efforts to build sophisticated Semantic Web information repositories through simple, everyday use of Piggy Bank.

 
Title
Querying Ontologies: A Controlled English Interface for End-users
Author(s)
Abraham Bernstein, Esther Kaufmann, Anne Goehring, Christoph Kiefer
Abstract
The semantic web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting a natural language interface to semantic web querying. The interface allows formulating queries in Attempto Controlled English (ACE), a subset of natural English. Each ACE query is translated into a discourse representation structure - a variant of the language of first-order logic - that is then translated into an N3-based semantic web querying language using an ontology-based rewriting framework. As the validation shows, our approach offers great potential for bridging the gap between the semantic web and its real-world users, since it allows users to query the semantic web without having to learn an unfamiliar formal language. Furthermore, we found that users liked our approach and designed good queries resulting in a very good retrieval performance (90% precision and 90% recall).
 
Title
Rapid Benchmarking for Semantic Web Knowledge Base Systems
Author(s)
Sui-Yu Wang, Yuanbo Guo, Abir Qasem, Jeff Heflin
Abstract
We present a method for rapid development of benchmarks for Semantic Web knowledge base systems. At the core, we have a synthetic data generation approach for OWL that is scalable and models real-world data. The data-generation algorithm learns from real domain documents and generates benchmark data based on the extracted properties relevant for benchmarking. We believe that this is important because the relative performance of systems will vary depending on the structure of the ontology and data used. However, due to the novelty of the Semantic Web, we rarely have sufficient data for benchmarking. Our approach helps overcome the problem of having insufficient real-world data for benchmarking and allows us to develop benchmarks for a variety of domains and applications in a very time-efficient manner. Based on our method, we have created a new Lehigh BibTeX Benchmark and conducted an experiment on four Semantic Web knowledge base systems. We have verified our hypothesis about the need for representative data by comparing the experimental result to that of our previous Lehigh University Benchmark. The difference between the two experiments demonstrates the influence of ontology and data on the capability and performance of the systems, and thus the need for a benchmark representative of the systems' intended application.
 
Title
RDF Entailment as a Graph Homomorphism
Author(s)
Jean-François Baget
Abstract
Semantic consequence in RDF can be computed using Pat Hayes' Interpolation Lemma. In this paper, we reformulate his conditions as a graph homomorphism. We provide a direct, standalone proof of our main result (H is a logical consequence of G if and only if there is a graph homomorphism from H into G). We believe that our proof is simpler than the previous one since it relies only on basic set theory and does not require any logic tools such as Skolemization or Herbrand interpretations. Moreover, the graphical representation of both RDF documents and their interpretations helps in understanding which interpretations are models of a document.

We use this main result to give a new proof of NP-completeness of the RDF entailment problem, and exhibit new polynomial cases. The similarity of the graph homomorphism and constraint satisfaction problems gives us access to many optimization tools for RDF entailment. Finally, we discuss the problems raised by the scale of the RDF web entailment problem: given the set W of all RDF documents available on the (semantic) web and an RDF document (the query) Q, is Q a logical consequence of W?
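The entailment-as-homomorphism result above lends itself to a brute-force illustration. The following is a minimal sketch, not the authors' implementation: it checks simple entailment between two small RDF graphs represented as sets of triples, treating blank nodes (here, hypothetically, strings prefixed with "_:") as variables that must map to terms of the entailing graph.

```python
from itertools import product

def is_bnode(term):
    """Blank-node convention assumed for this sketch: '_:'-prefixed strings."""
    return isinstance(term, str) and term.startswith("_:")

def entails(g, h):
    """Return True iff there is a graph homomorphism from h into g,
    i.e. a mapping of h's blank nodes to terms of g under which every
    triple of h occurs in g (simple RDF entailment, per the paper's
    main result)."""
    g = set(g)
    bnodes = sorted({t for triple in h for t in triple if is_bnode(t)})
    terms = sorted({t for triple in g for t in triple})  # candidate images
    # Exhaustively try every assignment of blank nodes to terms of g.
    for assignment in product(terms, repeat=len(bnodes)):
        m = dict(zip(bnodes, assignment))
        subst = lambda t: m.get(t, t)  # IRIs and literals map to themselves
        if all((subst(s), subst(p), subst(o)) in g for (s, p, o) in h):
            return True
    return False
```

For example, `[("ex:a", "ex:p", "ex:b")]` entails `[("_:x", "ex:p", "ex:b")]` (map `_:x` to `ex:a`) but not `[("_:x", "ex:p", "_:x")]`. The exponential search over assignments also makes the NP-hardness of the general problem plausible.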

 
Title
Reasoning with Multi-Version Ontologies: a Temporal Logic Approach
Author(s)
Zhisheng Huang, Heiner Stuckenschmidt
Abstract
In this paper we propose a framework for reasoning with multi-version ontologies, in which a temporal logic is developed to serve as its semantic foundation. We show that the temporal logic approach can provide a solid semantic foundation that supports various requirements on multi-version ontology reasoning. We have implemented a prototype of MORE (Multi-version Ontology REasoner), which is based on the proposed framework, and have tested MORE with several realistic ontologies. In this paper, we also discuss the implementation issues and report the experiments with MORE.
 
Title
RelExt: A Tool for Relation Extraction from Text in Ontology Extension
Author(s)
Alexander Schutz, Paul Buitelaar
Abstract
Domain ontologies very rarely model verbs as relations holding between concepts. However, the role of the verb as a central connecting element between concepts is undeniable. Verbs specify the interaction between the participants of some action or event by expressing relations between them. In parallel, it can be argued from an ontological point of view that verbs express a relation between two concepts that specify domain and range. The work described here is concerned with relation extraction for ontology extension along these lines. We describe a system (RelExt) that is capable of automatically identifying highly relevant triples (pairs of concepts connected by a relation) over concepts from an existing ontology. RelExt works by extracting relevant verbs and their grammatical arguments (i.e. terms) from a domain-specific text collection and computing corresponding relations through a combination of linguistic and statistical processing. The paper includes a detailed description of the system architecture and evaluation results on a constructed benchmark. RelExt has been developed in the context of the SmartWeb project, which aims at providing intelligent information services via mobile broadband devices for the FIFA World Cup hosted in Germany in 2006. Such services include location-based navigational information as well as question answering in the soccer domain.
 
Title
Representing Web Service Policies in OWL-DL
Author(s)
Vladimir Kolovski, Bijan Parsia, Yarden Katz, James Hendler
Abstract
Recently, there have been a number of proposals for languages for expressing web service constraints and capabilities, with WS-Policy and WSPL leading the way. The proposed languages, although relatively inexpressive, suffer from a lack of formal semantics. In this paper, we provide a mapping of WS-Policy to the description logic species of the Web Ontology Language (OWL-DL), and describe how standard OWL-DL reasoners can be used to check policy conformance and perform an array of policy analysis tasks. OWL-DL is much more expressive than WS-Policy and thus provides a framework for exploring richer policy languages.
 
Title
Resolution-Based Approximate Reasoning for OWL DL
Author(s)
Pascal Hitzler, Denny Vrandecic
Abstract
We propose a new technique for approximate ABox reasoning with OWL DL ontologies. It comes as a side-product of recent research results on the relationship between OWL DL and disjunctive datalog. Essentially, it relies on a new transformation of OWL DL ontologies into negation-free disjunctive datalog, and on the idea of performing standard resolution over disjunctive rules by treating them as if they were non-disjunctive ones. We analyse our reasoning approach by means of non-monotonic reasoning techniques, and present an implementation, called Screech.
 
Title
RitroveRAI: a Web application for semantic indexing and hyperlinking of multimedia news
Author(s)
Roberto Basili, Marco Cammisa, Emanuele Donati
Abstract
In this paper we present RitroveRAI, a system addressing the general problem of enriching a multimedia news stream with semantic metadata. News metadata here are explicitly derived from transcribed sentences or implicitly expressed in an automatically detected topical category. The enrichment process is accomplished by searching for the same news item as reported by different agencies reachable over the Web. Metadata extraction from the alternative sources (i.e. Web pages) is applied in the same way, and finally the sources are integrated according to a pertinence heuristic. Performance evaluation of the current system prototype has been carried out on a large scale. It confirms the viability of the RitroveRAI approach for realistic (i.e. 24-hour) applications and for continuous monitoring and metadata extraction from multimedia news data.
 
Title
RUL: A Declarative Update Language for RDF
Author(s)
Matoula Magiridou, Stavros Saxtouris, Vassilis Christophides, Manolis Koubarakis
Abstract
We propose a declarative update language for RDF graphs which is based on the paradigms of query and view languages such as RQL and RVL. Our language, called RUL, ensures that the execution of the update primitives on nodes and arcs neither violates the semantics of the RDF model nor the semantics of the given RDFS schema. In addition, RUL supports fine-grained updates at the class and property instance level, set-oriented updates with a deterministic semantics and takes benefit of the full expressive power of RQL for restricting the range of variables to nodes and arcs of RDF graphs. Our design can be immediately transferred to other RDF query languages such as RDQL or SPARQL.
 
Title
Searching Dynamic Communities with Personal Indexes
Author(s)
Alexander Löser, Christoph Tempich, Bastian Quilitz, Steffen Staab, Wolf-Tilo Balke, Wolfgang Nejdl
Abstract
Often the challenge of finding relevant information is reduced to finding the 'right' people who will answer our question. In this paper we present innovative algorithms called INGA (Interest-based Node Grouping Algorithms) which integrate personal routing indices into semantic query processing to boost performance. As in social networks, peers in INGA cooperate to efficiently route queries for documents along adaptive shortcut-based overlays using only local, but semantically well-chosen, information. We propose active and passive shortcut creation strategies for index building and a novel algorithm to select the most promising content providers for each peer's index with respect to the individual query. We quantify the benefit of our indexing strategy by extensive performance experiments in the SWAP simulation infrastructure. While obtaining high recall values compared to other state-of-the-art algorithms, we show that INGA improves recall and reduces the number of messages significantly.
 
Title
Semantic Browsing of Digital Collections
Author(s)
Trevor Collins, Paul Mulholland, Zdenek Zdrahal
Abstract
This paper presents an approach to facilitate informal learning through supporting the semantic browsing of resource collections. Collections are drawn from a set of annotated resources based on the interests specified by the user. A set of presentation structures are automatically created to facilitate the exploration of the collection. This approach has been applied to produce an information system for museums that enables visitors to register their interests while visiting the museum and later access a website where they can explore a personal collection of related resources. Initial trials of the system have shown that the approach is effective for scaffolding the exploration of resources.
 
Title
Semantically Rich Recommendations in Social Networks for Sharing, Exchanging and Ranking Semantic Context
Author(s)
Wolfgang Nejdl, Stefania Ghita, Raluca Paiu
Abstract
Recommender algorithms have been quite successfully employed in a variety of scenarios, from filtering applications to recommendations of movies and books at Amazon.com. However, all these algorithms focus on single-item recommendations and do not consider any more complex recommendation structures. This paper explores how semantically rich complex recommendation structures, represented as RDF graphs, can be exchanged and shared in a distributed social network. After presenting a motivating scenario, we define several annotation ontologies we use in order to describe context information on the user desktop and show how our ranking algorithm can exploit this information. We discuss how social distributed networks and interest groups are specified using an extended FOAF vocabulary, and how members of these interest groups share semantically rich recommendations in such a network. These recommendations transport shared context as well as ranking information, described in annotation ontologies. We propose an algorithm to compute these rankings which exploits available context information, and show how rankings are influenced by the context received from other users as well as by the reputation of the members of the social network with whom the context is exchanged.
 
Title
Seven Bottlenecks to Workflow Reuse and Repurposing
Author(s)
Antoon Goderis, Carole Goble, Ulrike Sattler, Phillip Lord
Abstract
To date, on-line processes (i.e. workflows) built in e-Science have been the result of collaborative team efforts. As more of these workflows are built, scientists start sharing and reusing stand-alone compositions of services, or workflow fragments. They repurpose an existing workflow or workflow fragment by finding one that is close enough to be the basis of a new workflow for a different purpose, and making small changes to it. Such a "workflow by example" approach complements the popular view in the Semantic Web Services literature that on-line processes are constructed automatically from scratch, and could help bootstrap the Web of Science. Based on a comparison of e-Science middleware projects, this paper identifies seven bottlenecks to scalable reuse and repurposing. We present initial work towards the two areas where semantic reasoning can be expected to offer most help: a comprehensive fragment discovery model and rankings for workflow fragments.
 
Title
Stable Model Theory for Extended RDF Ontologies
Author(s)
Anastasia Analyti, Grigoris Antoniou, Carlos Viegas Damasio, Gerd Wagner
Abstract
Ontologies and automated reasoning are the building blocks of the Semantic Web initiative. Derivation rules can be included in an ontology to define derived concepts based on base concepts. For example, rules make it possible to define the extension of a class or property based on a complex relation between the extensions of the same or other classes and properties. On the other hand, the inclusion of negative information, both in the form of negation as failure and as explicit negative information, is also needed to enable various forms of reasoning. In this paper, we extend RDF graphs with weak and strong negation, as well as derivation rules. The ERDF stable model semantics of the extended framework (Extended RDF) is defined, extending RDF(S) semantics. A distinctive feature of our theory, which is based on partial logic, is that both truth and falsity extensions of properties and classes are considered, allowing for truth value gaps. Our framework supports both closed-world and open-world reasoning through the explicit representation of the particular closed-world assumptions and the ERDF ontological categories of total properties and total classes.
 
Title
Towards a Formal Verification of OWL-S Process Models
Author(s)
Anupriya Ankolekar, Massimo Paolucci, Katia Sycara
Abstract
In this paper, we apply automatic tools to the verification of interaction protocols of Web services described in OWL-S. Specifically, we propose a modeling procedure that preserves the control flow and the data flow of OWL-S Process Models. The result of our work provides complete modeling and verification of OWL-S Process Models.
 
Title
Towards Imaging Large-Scale Ontologies for Quick Understanding and Analysis
Author(s)
KeWei Tu, Miao Xiong, HaiPing Zhu, Jie Zhang, Yong Yu
Abstract
In many practical applications, ontologies tend to be very large and complicated. In order for users to quickly understand and analyze large-scale ontologies, in this paper we propose a novel ontology visualization approach, which aims to complement existing approaches like the hierarchy graph. Specifically, our approach produces a holistic "imaging" of the ontology which contains a semantic layout of the ontology classes. In addition, the distributions of the ontology instances and instance relations are also depicted in the "imaging". We introduce at length the key techniques and algorithms used in our approach. Then we examine the resulting user interface and find it facilitates tasks like ontology navigation, ontology retrieval and ontology instance analysis.
 
Title
Using triples for implementation: the Triple20 ontology-manipulation tool
Author(s)
Jan Wielemaker, Guus Schreiber, Bob Wielinga
Abstract
Triple20 is an ontology manipulation and visualization tool for languages built on top of the Semantic Web RDF triple model. In this article we explain how a triple-centered design compares to the use of a separate proprietary internal data model. We show how to deal with the problems of such a low-level data model and show that it offers advantages when dealing with inconsistent or incomplete data as well as for integrating tools.
 
Title
Web Service Composition with Volatile Information
Author(s)
Tsz-Chiu Au, Ugur Kuter, Dana Nau
Abstract
In many web service composition problems, information may be needed from web services during the composition process. Existing research on web service composition (WSC) procedures has generally assumed that this information will not change. We describe how to take such WSC procedures, and translate them into volatile-information WSC procedures, i.e., WSC procedures that deal correctly with volatile information.

Our first approach for doing this, the black-box approach, places a wrapper around the procedure to deal correctly with volatile information. It requires no knowledge of the WSC procedure's internals. Our second approach, the gray-box approach, requires partial information of those internals, in order to insert coding to perform certain bookkeeping operations.

We show theoretically that both approaches work correctly. We present experimental results showing that the WSC procedures produced by the gray-box approach can run much faster than the ones produced by the black-box approach.

 

 

The paper submission and reviewing process is supported by Confious
