Past
2011
-
Katja Kevic, Planning service composition using eProPlan, February 2011. (bachelorsthesis)
This thesis tackles the problem of automating Web Service Composition using eProPlan. A main conceptualization was elaborated, the ontology was modelled in OWL2, and several Web Services were implemented. The thesis demonstrates to what extent the planner can solve planning problems that are not related to Data Mining. The resulting application is currently only partially working, but it nevertheless shows that eProPlan is a well-defined, coherent system capable of generating Web Service Compositions, and thus demonstrates the extensibility of eProPlan.
-
Yannick Koechlin, Tygrstore: a flexible framework for high performance large scale RDF storage, March 2011. (bachelorsthesis)
This thesis describes the architecture of a highly flexible triplestore framework. Its main features are pluggable backend storage facilities, horizontal scalability, a simple API, and the generation of endless result streams. Special attention has been paid to easy extensibility. First a detailed view of the architecture is given; later, more details on the actual implementation are presented. Finally, two possible triplestore setups are benchmarked and profiled. It is shown that the current limiting factors do not lie within the architecture but in the library code for the backends. Possible solutions and enhancements to the framework are discussed.
2010
-
Patrick Minder, Aggregating social networks - entity resolution with face recognition, August 2010. (bachelorsthesis)
The Internet, and social network sites in particular, have become an integral part of our daily lives. Personal data stored in Internet resources form a huge data set for social network analysis. This bachelor thesis evaluates the feasibility of an entity resolution system based on face recognition, with the goal of integrating several social networks into one aggregated network.
-
Damian Schärli, AMIS risk score application - Applikationsaufbau und Vergleich mit Grace Score, August 2010. (bachelorsthesis)
More and more new approaches exist to support the best possible care of patients. Forecasting a patient's medical condition requires data, and the evaluation of this data assists doctors in planning the therapy cycle. This thesis takes an established algorithm and describes new software that performs this evaluation, replacing the previous program. Instead of evaluating single records, the new application can evaluate large amounts of data in statistical analyses to show how well the algorithm works. Beyond this implementation, another algorithm, called Grace, was integrated into the application. Subsequent statistical comparisons showed that the prediction accuracy of AMIS is better than that of Grace.
-
David Oertle, Kostenstellenbericht für Professoren, September 2010. (bachelorsthesis)
The professors of the University of Zurich never had the possibility to access their financial information, which is stored and managed in the SAP system. In David Oertle's bachelor thesis, supported by the Business Applications (BAP) department of the University of Zurich, a project was conducted to resolve this issue. The result is a SAP Web Dynpro based web application, to which access can be granted through the already existing lecturers' portal. The program allows users, among other things, to survey the current status of their cost units as well as to see their bookings and to export them.
-
Minh Khoa Nguyen, Optimized disk oriented tree structures for RDF indexing: the B+Hash Tree, August 2010. (bachelorsthesis)
The increasing growth of the Semantic Web has substantially enlarged the amount of data available in RDF (Resource Description Framework) format. One proposed solution is to map RDF data to relational databases. The lack of a common schema, however, makes this mapping inefficient. RDF-native solutions often use B+Trees, which are potentially becoming a bottleneck, as the single key-space approach of the Semantic Web may even make their O(log(n)) worst case performance too costly. Alternatives, such as hash-based approaches, suffer from insufficient update and scan performance. In this thesis a novel type of index structure called B+HASH TREE is proposed, which combines the strengths of traditional B-Trees with the speedy constant-time lookup of a hash-based structure. The main research idea is to enhance the B+Tree with a Hash Map to enable constant retrieval time instead of the common logarithmic one of the B+Tree. The result is a scalable, updatable, and lookup-optimized, on-disk index structure that is especially suitable for the large key-spaces of RDF datasets. The approach is evaluated against existing RDF indexing schemes using two commonly used datasets, and the results show that the B+HASH TREE is at least twice as fast as its competitors, an advantage that this thesis shows should grow as dataset sizes increase.
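The core idea of the abstract — a sorted tree structure augmented with a hash map so that point lookups avoid the logarithmic tree descent while range scans keep using the sorted order — can be sketched in a few lines. This is an illustrative in-memory sketch, not the thesis's on-disk implementation; all names are assumptions:

```python
from bisect import insort, bisect_left

class HashedBPlusTreeSketch:
    """Illustrative sketch (hypothetical, not the thesis code): a sorted
    key list stands in for a B+Tree's leaf level, and a hash map provides
    O(1) average-time point lookups instead of an O(log n) tree descent."""

    def __init__(self):
        self._keys = []   # sorted keys, as in B+Tree leaves (for scans)
        self._hash = {}   # key -> value, constant-time point lookup

    def insert(self, key, value):
        if key not in self._hash:
            insort(self._keys, key)   # keep sorted order for range scans
        self._hash[key] = value       # O(1) average insert/update

    def lookup(self, key):
        # Point query goes straight to the hash map, skipping the tree.
        return self._hash.get(key)

    def range_scan(self, lo, hi):
        # Range query uses the sorted keys, as a B+Tree leaf chain would.
        i = bisect_left(self._keys, lo)
        while i < len(self._keys) and self._keys[i] <= hi:
            yield self._keys[i], self._hash[self._keys[i]]
            i += 1
```

The design point the thesis argues is visible even in this toy: the hash map buys constant-time retrieval at the cost of maintaining a second structure on every insert, which pays off when point lookups over a very large key space dominate the workload.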
-
Patrick Leibundgut, Ranking im Vergleich mit Hyperrectangle und Normalisierung als Verfahren zur Klassifizierung von Daten, September 2010. (bachelorsthesis)
Different methods can be used for the classification of instances. Using geometric distance or semantic distance in the kNN method yields different results depending on the distribution of the attributes. Because of its incorrect interpretation of distance, the semantic distance produces significantly fewer correct classifications and thus proves unsuitable for classification. The comparison of ranking and normalization as pre-processing methods shows that ranking achieves better classification results than normalization for skewed attribute distributions, whereas normalization performs better for attributes that are not skewed.
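The two pre-processing methods compared above can be sketched for a single numeric attribute. This is a minimal illustrative sketch, not the thesis's code; the intuition is that replacing values by their ranks flattens a skewed distribution before kNN distances are computed, while min-max normalization preserves the original spacing:

```python
def normalize(values):
    """Min-max normalization: map each value linearly into [0, 1],
    preserving the relative spacing of the original values."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def rank(values):
    """Ranking: replace each value by its position in sorted order,
    which flattens skewed distributions before distance computation."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks
```

On a skewed attribute such as income, a single extreme value compresses all normalized values into a narrow band, whereas ranks stay evenly spaced — which matches the thesis's finding that ranking classifies skewed attributes better.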
2008
-
Samuel Galliker, Generierung von synthetischen Banktransaktionsdaten, September 2008. (bachelorsthesis)
This bachelor thesis explains the Java code developed to generate synthetic bank transactions with realistic distribution figures: the Transaction Evaluator analyses the structure of the original data before the Transaction Builder generates the synthetic data based on the ascertained properties. Furthermore, the performance of the program is evaluated on the basis of two test sets. It turns out that the implementation works and the results are satisfactory; however, the ideal settings remain to be found.
-
Christian Kündig, User Model Editor for Ontology-based Cultural Personalization, February 2008. (bachelorsthesis)
Past research has shown that personalized applications can increase user satisfaction and productivity. Cultural user modelling helps to exploit these advantages by lowering the impact of the bootstrapping process. Cultural user models do not require tedious capturing processes, as they can profit from preferences already known to be grounded in the user's cultural background. This bachelor thesis explains the fundamentals of cultural user modelling and personalization as well as the privacy aspects of concern. Ultimately, a user modelling system based on the cultural user model ontology CUMO is presented and implemented. This system allows users to maintain their user model and to grant external applications access to it.
2007
-
Stefan Christiani, A study on activity and location recognition using various sensors, October 2007. (bachelorsthesis)
In our everyday life, we move through different environments and undertake different activities. During some of those moments, we can handle disturbances; in others they become intolerable. Because of this, context-sensitive mobile phones may become an important part of our future. This paper presents an experiment in which a series of sensors was analysed for their capacity to predict the context of the mobile phone to which they were attached. This test device was then used to record data in real-world scenarios, and the accuracy of the resulting predictions was measured. It could be shown that both activities and locations can be detected quite reliably under real-world conditions and that certain sensors fare better than others.
-
Christian Kündig, A User Model Editor for Ontology-based Cultural Personalization, 2007. (bachelorsthesis)
-
Anthony Lymer, Adaptivität im E-Learning: Entwicklung eines Ajax-basierten Eintrittstests für den Einsatz in Lernplattformen, October 2007. (bachelorsthesis)
The aim of this thesis is to develop an adaptive assessment test for the CasIS-Portal, an already existing e-learning system which assists users electronically with solving case studies. This thesis is concerned with designing and attaching an assessment test to the portal. The test should give support to users by estimating their current knowledge and advising them accordingly. After a test has been taken and the candidate's competence has been assessed, he will be shown learning material, so that he has the possibility of preparing himself optimally for the subsequent case study. The test itself is not to be understood as an obstacle; rather, it provides an opportunity to obtain information about one's current knowledge level. The work done comprises not only the development of an authoring tool to create tests but also a delivery engine, which presents questions to a candidate and then offers him preparation modules.
-
Peter Höltschi, Ein regel- und statistikbasiertes Empfehlungssystem für das Masterstudium in Informatik, September 2007. (bachelorsthesis)
This bachelor thesis describes the specification, design, and prototype implementation of a rule- and statistics-based recommendation system for planning the master study in informatics at the University of Zurich. The system supports students in automatically generating study plans. On one hand, this guarantees compliance with the study regulations; on the other hand, students quickly get a picture of what their master study could look like. For this to work, a student has to provide the data of his transcript of records together with preferences concerning the course of study and the choice of modules. Based on this data, the system generates the desired study plans using several filtering and sorting functions. In an evaluation, students were asked to create a study plan manually and to provide the data for automatic generation. An analysis of the results and a comparison of the manually and automatically generated study plans showed that the quality of the latter depends strongly on the quality and quantity of the student's stated preferences. It also emerged that the system should be extended with additional features for optimal use.
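The filter-and-sort idea described in the abstract can be sketched as follows. This is a hypothetical illustration only; the module fields, scoring, and function names are assumptions, not the thesis's actual design:

```python
def recommend_modules(modules, completed, preferences, top_n=5):
    """Hypothetical sketch of a rule-based filter followed by a
    preference-based sort. Rules: exclude modules already passed and
    modules whose prerequisites are not yet fulfilled. Then sort the
    remaining modules by how well their topic matches the student's
    stated preferences (higher preference score first)."""
    eligible = [
        m for m in modules
        if m["name"] not in completed
        and all(p in completed for p in m["prerequisites"])
    ]
    eligible.sort(key=lambda m: preferences.get(m["topic"], 0), reverse=True)
    return eligible[:top_n]
```

As the evaluation in the abstract suggests, the output of such a pipeline is only as good as the preference data fed into it: with an empty `preferences` dict, the sort step has nothing to discriminate on.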
-
Michael Imhof, Entwicklung eines RDF Parsers für transaktionsbasierte Daten, August 2007. (bachelorsthesis)
The Java RDF Parser (JRP) is a program that reads files in RDF format and extracts transactional data from them, which can subsequently be stored in a database. This thesis is about the development of JRP and gives an insight into the design of the code, the database schema and connection, as well as an evaluation of Jena, the Java library used to parse the input files. The program was tested with real data and thereby proved the desired functionality. Unfortunately, nothing can be said yet about the scalability of the parser, because no large datasets were available for performance tests.
-
Stefan Schurgast, Export von Datenbankinhalten in Datenformate von Statistikprogrammen, December 2007. (bachelorsthesis)
The sesamDB project is a subproject of the interdisciplinary long-term study sesam on the etiology of mental illness. Its main task is to develop a database for the scientific and administrative data of sesam as well as to implement various client applications. In order to analyze the stored data with statistical analysis software, an application called Sesam Export Manager was built to export data from sesamDB into the file formats of popular statistics applications. A graphical user interface allows users to obtain the data they need without knowledge of the underlying database schema or query languages. This paper contains a compilation of related work as well as the development process and the architecture of Sesam Export Manager.
-
Philippe Hungerbühler, The Influence of SPAM on Performance, August 2007. (bachelorsthesis)
Almost every Internet user knows the problem of SPAM. At work especially, it costs time to sort out irrelevant emails. This thesis deals with the problem of SPAM and its consequences for productivity at work. For this purpose, an experiment was conducted to examine the distraction caused by SPAM and its perception. A few hypotheses, stated in advance, were reviewed on the basis of this experiment. The results and their interpretation are presented and discussed in this thesis.
Statistics
| Reference type | Number of references |
| --- | --- |
| bachelorsthesis | 16 |
| Total | 16 |