ifi Colloquium (Summer 2006)

Summer Term 2006

Date / Speaker / Title / Place, Language; Host

20.4.
Prof. M. Tamer Özsu, University of Waterloo, CA
Query Processing and Optimization in Native XML Databases
BIN 2.A.10, English; Host: K. Dittrich

27.4.
Prof. Gerti Kappel, TU Wien
From Models to Ontologies - A Layered Approach for Model-Based Tool Integration
BIN 2.A.10, TBA; Host: H. Gall

4.5.
Prof. Daniel Berry, University of Waterloo, CA
Requirements Engineering Lessons from House Building
BIN 2.A.10, English; Host: M. Glinz

18.5.
Prof. Stefan Klein, University College Dublin and University of Münster
Multi-Kanalstrategien im Lebensmittelhandel (Multi-Channel Strategies in Food Retailing)
BIN 2.A.10, German; Hosts: A. Bernstein, G. Schwabe

30.5.
Prof. Kwan-Liu Ma, UC Davis
Visual Analysis of Large Heterogeneous Social Networks
BIN 2.A.10, English; Host: R. Pajarola

moved to 22.6.
Jana Koehler, IBM Zurich Research Laboratory
The Role of Visual Modeling and Model Transformations in Business-driven Development
BIN 2.A.10, English; Host: H. Gall

15.6.
Prof. Haym Hirsh, Rutgers University
Version Spaces and the Consistency Problem
BIN 2.A.10, English; Host: A. Bernstein

29.6.
Dr. Sophia Ananiadou, University of Manchester, UK
The UK National Centre for Text Mining (NaCTeM): Overview of activities and tools
BIN 2.A.10, English; Host: M. Hess

4.7., 16:15h
Prof. Josie Taylor, The Open University, UK
Methods for studying learning, collaboration and technology use in mobile environments
BIN 2.A.10, English; Host: G. Schwabe

General Information

Unless noted otherwise, the colloquia take place from 5.15 pm to approx. 6.30 pm in room 2.A.10 of the Department of Informatics (IfI), Binzmühlestrasse 14, 8050 Zürich.
Attending a colloquium is free of charge and does not require registration.
If you have further questions, please feel free to contact Eveline Suter.


Query Processing and Optimization in Native XML Databases

Speaker: Prof. M. Tamer Özsu

Abstract

XML has evolved from a markup language for web pages into the de facto language for data exchange over the World Wide Web. Declarative query languages, such as XPath and XQuery, have been proposed for querying large volumes of XML data, much as SQL is used in relational databases. Over the past few years, many techniques have been proposed to evaluate XML queries more efficiently. Many of these techniques, in addition to being appropriate for XML data, are also applicable to other data sources that can be explicitly or implicitly translated into the XML/hierarchical data model. In this talk, I will first give an overview of the database management issues related to storing and querying large volumes of XML data. Then I will focus on some query processing and optimization techniques that we have developed in the XDB project at the University of Waterloo. Specific discussion topics include a succinct native XML storage system, a physical operator based on the storage system, and a synopsis structure for estimating the cardinality of a path expression. Finally, I will outline our ongoing research and possible applications of these techniques to other fields of computer science, e.g., multimedia data management.

This is joint work with Ning Zhang.
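The declarative style of querying the abstract refers to can be illustrated in a few lines. This is a toy sketch using Python's standard library and invented sample data, not the XDB project's storage system or operators:

```python
import xml.etree.ElementTree as ET

# A tiny XML fragment standing in for a large document collection.
doc = ET.fromstring("""
<library>
  <book year="2004"><title>XQuery Basics</title></book>
  <book year="2006"><title>Native XML Storage</title></book>
</library>
""")

# Path-style query: titles of books published after 2004. A native XML
# engine would evaluate the equivalent XPath with specialized physical
# operators, guided by cardinality estimates for the path expression.
titles = [b.find("title").text
          for b in doc.findall("book")
          if int(b.get("year")) > 2004]
print(titles)  # ['Native XML Storage']
```

An engine of the kind described in the talk answers the same logical query over very large documents, which is where storage layout and cardinality estimation start to matter.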

Bio

M. Tamer Özsu is a Professor of Computer Science and University Research Chair at the University of Waterloo. His current research focuses on three areas: (a) Internet-scale data distribution that emphasizes stream data management and peer-to-peer databases; (b) multimedia data management, concentrating on similarity-based retrieval of time series and trajectory data; and (c) structured document management mainly within the context of XML query processing and optimization.

He serves on the editorial boards of ACM Computing Surveys, Distributed and Parallel Databases Journal, World Wide Web Journal, Information Technology and Management, and Springer Book Series on Advanced Information & Knowledge Processing. He is the past Chair of ACM SIGMOD and the former Coordinating Editor-in-Chief of The VLDB Journal. He has served as the Program Chair of VLDB (2004), WISE (2001), IDEAS (2003), and CIKM (1996) conferences and the General Chair of CAiSE (2002) conference. He will serve as the co-General Chair of WISE 2006 that will be held in Wuhan, China and co-PC chair of ICDE 2007 to be held in Istanbul, Turkey. He serves on the ACM Publications Board.


From Models to Ontologies - A Layered Approach for Model-Based Tool Integration

Speaker: Prof. Gerti Kappel, Technical University Vienna

Abstract

The exchange of models among different modeling tools is increasingly becoming an important prerequisite for effective software development processes. Due to a lack of interoperability, however, it is often difficult to use tools in combination, so the potential of model-driven software development cannot be fully exploited. This talk presents ModelCVS, a system that enables tool integration through transparent transformation of models between different tools' modeling languages, expressed as MOF-based metamodels. ModelCVS provides versioning capabilities that exploit the rich syntax and semantics of models. Concurrent development is enabled by storing and versioning software artifacts that clients can access via a check-out/check-in mechanism, similar to a traditional CVS server. Semantic technologies in the form of ontologies are used together with a knowledge base that stores machine-readable, tool-integration-relevant information, which makes it possible to minimize repetitive effort and partly automate the integration process.
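The core idea of transforming models between metamodels can be caricatured as a mapping between metaclass names. Everything below (the correspondence table, the tuple encoding of models) is invented for illustration and is not ModelCVS's actual representation or API:

```python
# Hypothetical correspondence table between two metamodels,
# e.g. a UML-like source and an ER-like target.
mapping = {"Class": "EntityType", "Attribute": "Property"}

def transform(model):
    """Map each element's metaclass through the correspondence table.
    model: list of (metaclass, name) pairs in the source language."""
    return [(mapping.get(kind, kind), name) for kind, name in model]

uml_model = [("Class", "Customer"), ("Attribute", "name")]
print(transform(uml_model))
# [('EntityType', 'Customer'), ('Property', 'name')]
```

A real bridge must also handle structural mismatches and attributes that have no counterpart, which is where the ontology-backed knowledge base described in the abstract comes in.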

Bio

Gerti Kappel was Professor of Information Systems at the Johannes Kepler University Linz and has been Professor of Business Informatics at the Vienna University of Technology since October 2001. She heads the Business Informatics Group of the Institute of Software Technology and Interactive Systems and is currently also Dean of Studies for Business Informatics. Her research and teaching focus on object-oriented software development, web engineering, and model engineering, as well as their application to workflow management and electronic commerce. Among other publications, she is co-author of the book "UML@Work - Objektorientierte Modellierung mit UML 2" (dpunkt.verlag, 2005) and co-editor of the book "Web Engineering - Systematische Entwicklung von Web-Anwendungen" (dpunkt.verlag, 2004).


Requirements Engineering Lessons from House Building

Speaker: Prof. Daniel Berry

Abstract

Anyone who has built or remodeled a house and has developed or enhanced software must have noticed the similarity of these activities. This talk describes some lessons about requirements engineering I learned as the customer in one house building and two house remodelings. The biggest problem is avoiding very expensive requirements creep. The main lesson is the importance of the customer insisting on a full requirements engineering process, including goal identification; requirements elicitation, analysis, and specification; and validation of the specification. A secondary lesson is that the customer has an important role in requirements engineering and sometimes needs to learn that role.

Bio

Daniel M. Berry received his B.S. in Mathematics from Rensselaer Polytechnic Institute, Troy, New York, USA in 1969 and his Ph.D. in Computer Science from Brown University, Providence, Rhode Island, USA in 1974. He was on the faculty of the Computer Science Department at the University of California, Los Angeles, California, USA from 1972 until 1987, and in the Faculty of Computer Science at the Technion, Haifa, Israel from 1987 until 1999. From 1990 until 1994, he worked for half of each year at the Software Engineering Institute at Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, where he was part of a group that built CMU's Master of Software Engineering program. During the 1998-1999 academic year, he visited the Computer Systems Group at the University of Waterloo in Waterloo, Ontario, Canada. In 1999, he moved to the School of Computer Science at the University of Waterloo. Prof. Berry's current research interests are software engineering in general, and requirements engineering and electronic publishing in particular.


Multi-Kanalstrategien im Lebensmittelhandel (Multi-Channel Strategies in Food Retailing)

Speaker: Stefan Klein, University College Dublin and University of Münster

Abstract

The talk takes up the discrepancy between the normative strategy of "channel integration" and the empirically observable variety of multi-channel strategies. Based on a five-country study, a classification of channel strategies is presented, and possible factors influencing the choice of strategy are discussed.

Bio

After his doctorate (Dr. rer. pol.) in corporate planning at the University of Cologne, Stefan Klein worked in research for several years, first at the Gesellschaft für Mathematik und Datenverarbeitung in Cologne and then at the Center for European Studies of Harvard University in Cambridge, Mass. From 1993 to 1996 he was project leader of the Competence Centre for Electronic Markets at the Institute of Information Management of the University of St. Gallen and assistant professor of business administration with a special focus on information management. In the winter semester of 1996/97 he was Professor of Information Systems at the University of Koblenz-Landau.


Visual Analysis of Large Heterogeneous Social Networks

Speaker: Kwan-Liu Ma

Abstract

Social network analysis is an active area of study beyond sociology. It uncovers the invisible relationships between actors in a network and provides an understanding of social processes and behaviors. It has become an important technique in a variety of application areas such as the Web, organizational studies, and homeland security. Visualization has proven effective in detecting and understanding the hidden features and patterns in massive, dynamically changing information spaces. I will present a visual analysis tool for understanding large, heterogeneous social networks, e.g., a terrorism network, in which nodes and links can represent different concepts and relations, respectively. I will also present a visualization design for monitoring a large software development team and the evolution of the software system.

Bio

Kwan-Liu Ma is a professor of computer science at the University of California at Davis. He received his PhD in computer science from the University of Utah in 1993. Before joining UC Davis, he was a research scientist at ICASE/NASA LaRC. Professor Ma's research spans the fields of visualization, computer graphics, and high performance computing. In 2000, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in large data visualization. He has been actively participating in several national-scale research programs sponsored by the US National Science Foundation (NSF) and the Department of Energy. Presently, he is leading a team of 12 PhD students studying the problems of visualizing terascale scientific simulations, cyber security, homeland security, social networks, etc., as well as developing new visualization methodologies, infrastructures, and interfaces. Over the past year, he has organized a workshop on Visualization for Computer Security (VizSEC 2005) and a workshop on Time-Varying Data Visualization, both sponsored by NSF. More information about Professor Ma's work can be found at www.cs.ucdavis.edu/~ma/.


The Role of Visual Modeling and Model Transformations in Business-driven Development

Speaker: Jana Koehler, IBM Zurich Research Laboratory

Abstract

The talk explores the emerging paradigm of business-driven development, which presupposes a methodology for developing IT solutions that directly satisfy business requirements and needs. At the core of business-driven development are business processes, which are usually modeled by combining graphical and textual notations. During the business-driven development process, business-process models are taken down to the IT level, where they describe the so-called choreography of services in a Service-Oriented Architecture. The derivation of a service choreography from a business-process model is simple and straightforward for toy examples only; for realistic applications, many challenges at the methodological and technical level have to be solved. The talk explores these challenges and describes selected solutions that have been developed by the research team of the IBM Zurich Research Laboratory.

Short Bio

Jana Koehler is manager of the Business Integration Technologies group in the Services and Software Department of the IBM Zurich Research Lab. The group works on model-driven technologies for Business-IT integration based on Service-Oriented Architectures. Jana Koehler built up this new research area, which focuses on the intersection between services and software, after joining IBM in spring 2001. Prior to her work for IBM, she worked at the German Research Center for AI, the International Computer Science Institute in Berkeley, the University of Freiburg, and Schindler AG. Jana Koehler has won several scientific and best-paper awards and has been nominated for full and associate professorships in Computer Science.


Version Spaces and the Consistency Problem

Speaker: Haym Hirsh, Computer Science Department, Rutgers University

Abstract

In the late 1970s Tom Mitchell introduced the concept of a "version space", the collection of all classifiers in a concept class that correctly label a given set of training data. Version spaces have since proven to be a useful analytical tool for machine learning, but face several known intractabilities. Mitchell's original proposal represented a version space by its boundary sets: the maximally general (G) and maximally specific (S) classifiers consistent with the data. Unfortunately, for many simple concept classes, the size of G and S is known to grow exponentially in the amount of data, and indeed in some theoretical cases they can be infinite or ill-defined. This work argues that previous work on alternative version-space representations attempting to address these intractabilities has disguised the real question underlying version spaces. We instead show that tractable reasoning with version spaces turns out to depend on the consistency problem: determining if there is any classifier in the concept class consistent with a set of training data. Indeed, we show that tractable version space reasoning is possible if and only if there is an efficient algorithm for the consistency problem. Our observations give rise to new concept classes for which tractable version space reasoning is now possible, including 1-decision lists, monotone depth two formulas, and halfspaces.

This is joint work with Nina Mishra and Leonard Pitt.
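For one simple concept class, the consistency problem at the heart of the result can be made concrete. This monotone-conjunction check is a toy sketch for illustration, not the construction from the talk:

```python
# Consistency check for monotone conjunctions over boolean attributes.
# A monotone conjunction over a subset T of attributes accepts an
# instance x (the set of attributes true in x) iff T is a subset of x.
def consistent(universe, examples):
    """Is some monotone conjunction over `universe` consistent with the data?
    examples: list of (set_of_true_attributes, label) pairs."""
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    # Most specific candidate: the attributes true in every positive.
    s = set(universe)
    for x in positives:
        s &= x
    # s accepts all positives by construction, and any conjunction
    # consistent with the positives is a subset of s (hence accepts
    # everything s accepts). So a consistent conjunction exists iff
    # this most specific one rejects every negative example.
    return all(not s <= x for x in negatives)

U = {"a", "b", "c"}
print(consistent(U, [({"a", "b"}, True), ({"a"}, False)]))                # True
print(consistent(U, [({"a", "b"}, True), ({"b"}, True), ({"b"}, False)]))  # False
```

The abstract's claim is that exactly this kind of efficient consistency test is what makes tractable version-space reasoning possible for a concept class.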

Bio

Haym Hirsh spent the first quarter-century of his life in California, receiving his BS degree in 1983 from the Mathematics and Computer Science departments at UCLA and his MS in 1985 and PhD in 1989 from the Computer Science Department at Stanford University. Unhappy with the weather, he moved to Pittsburgh when he found a way to spend his final year at Stanford at the University of Pittsburgh and Carnegie Mellon University. The following year he achieved his life-long dream of living in New Jersey by joining the faculty of the Computer Science Department at Rutgers University, where he is Professor and Department Chair. As part of his never-ending spiritual quest, he has also spent time as visiting faculty at the Computer Science Department at Carnegie Mellon University in Fall 1995, the Artificial Intelligence Laboratory and Laboratory for Computer Science at MIT in Fall 1997, and the Information Systems Department at the Stern School of Business at NYU in Fall 2000 and Spring 2001. When he is not teaching courses or conducting research, he writes silly biographies with lots of gratuitous pointers to other web pages.


The UK National Centre for Text Mining (NaCTeM): Overview of activities and tools

Speaker: Sophia Ananiadou, Reader in Text Mining, School of Informatics, University of Manchester

Abstract

I will present the main activities and tools of NaCTeM, focusing on information extraction and terminology management for biomedical texts.
One of the main challenges in biomedical text mining is terminology processing. Due to the evolving nature of biomedicine, new terms are constantly created. Existing knowledge sources cannot cope with the number of neologisms, nor with the different types of term variation. In this talk, I will present solutions to the problem of term variation, focusing on acronym recognition. I will conclude with brief demos of the systems TerMine, Medusa, and InfoPubMed currently used at NaCTeM.

Bio

Sophia Ananiadou is a Reader in Text Mining at the University of Manchester and a deputy director of the UK National Centre for Text Mining. She has worked on various projects related to sublanguage knowledge acquisition, terminology processing, and machine translation, funded by industry, the EU, and UK research councils. Her current research interests are bio-text mining and, in particular, terminology management for biomedical texts.


Methods for studying learning, collaboration and technology use in mobile environments

Speaker: Prof. Josie Taylor, The Open University, UK

Abstract

The key issue addressed in this talk relates to the methodological challenges of trying to satisfy various stakeholders when evaluating learning and technology use in informal settings. A method is introduced for analysing the user behaviour, practices, strategies and conflicts that emerge when interacting with technological systems in an informal mobile learning setting. These issues are addressed from the point of view of user interactions in both a semiotic and a technological space. The work is rooted in cultural-historical activity theory and develops Engeström's (1987) extended model of human activity.

Bio

Josie Taylor is Professor of Learning Technology in the Institute of Educational Technology at the Open University. Her doctorate is in Cognitive Sciences (Sussex University). She is Co-Director of the IET UserLab, a group of researchers investigating pedagogy and learning in technology-augmented environments, working primarily in large international consortia on projects funded by the European Commission. Her research focuses on the nature of learning, the semiotic and technological contexts in which it occurs, and the design of systems to support such learning. This includes systems design, interface, interaction and activity design, as well as user requirements and evaluation. She has recently been investigating mobile learning in the EU-funded MOBIlearn project and the Kaleidoscope Mobile Learning Initiative. She has played an advisory role in both national and international activities on strategies for e-learning and pedagogy, and on evaluation methodology. Dr. Taylor was recently funded by a consortium of higher education funding agencies to conduct a UK-wide consultation of academic departments on the priorities for research in e-learning in the UK to inform funding policy. This has contributed to the development of a large national funding programme by two of the research councils (EPSRC/ESRC) which has just been launched. As the Open University is about to go Open Source, adopting Moodle as its VLE, Prof. Taylor will be working on the Open Content Initiative in the UserLab, funded by the Hewlett Foundation (£5.65m/US $9.9m), with her Co-Director colleague Dr. Patrick McAndrew (oci.open.ac.uk/pressrelease.html).
