Projekte, Workshops und Konferenzen

Next Events

2nd International Workshop on Variability and Evolution of Software-Intensive Systems 2019

Just like software in general, software product lines are permanently subject to change. This introduces evolution as a second problem dimension in addition to variability, which is the primary phenomenon addressed by software product line engineering. Traditionally, the methods and tools applied for revision control and variant management are radically different and mutually disjoint, although research has already suggested that evolution and variability can be tackled in a holistic way. Concrete examples of integrating approaches include uniform or unified versioning, delta-orientation in connection with hyper feature models, evolution-aware clone-and-own, projectional SPL editing, and variation control systems.

VariVolution (the 2nd International Workshop on Variability and Evolution of Software-intensive Systems) aims at bringing together active researchers studying software evolution and variability from different angles as well as practitioners who encounter these phenomena in real-world applications and systems. The workshop offers a platform for exchanging new ideas and fostering future research collaborations and synergies.

Organizing Committee

  • Michael Nieke (TU Braunschweig, DE)
  • Jacob Krüger (OVGU Magdeburg, DE)
  • Lukas Linsbauer (JKU Linz, AT)
  • Thomas Leich (Harz University Wernigerode, DE)

Important Dates

The important dates for the workshop are aligned with the general workshop due dates of SPLC:

  • Workshop papers submission: May 28, 2019
  • Workshop papers notification: June 18, 2019
  • Final version of papers (camera ready): July 2, 2019
  • Workshop: September 09/10, 2019 at SPLC

14th International Working Conference on Variability Modelling of Software-Intensive Systems 2020

VaMoS 2020 will be hosted at the Lukasklause, close to the University of Magdeburg, with pure-systems and the DBSE working group responsible for the local organization.

PC Chairs

  • Maxime Cordy
  • Mathieu Acher

Local Organizers

  • Danilo Beuche
  • Gunter Saake

Website and Track Organization

  • Jacob Krüger

Local Supporters

  • Sebastian Krieter
  • Kai Ludwig
  • Ivonne von Nostitz-Wallwitz
  • Sandro Schulze


Current Funded Projects

  • A Common Storage Engine for Modern Memory and Storage Hierarchies (SMASH)

Data is becoming increasingly important in science, and data-driven research requires processing ever larger volumes. Traditional data management systems, such as file systems and database management systems, struggle to keep up with this flood of data. One problem is that they rely on different types of memory (primary, secondary, and tertiary) with very different performance characteristics, which makes it difficult to exploit the unique features of each type. Emerging technologies such as non-volatile memory can help with managing data, but they, too, are challenged by the sheer amount of data being produced. SMASH is a joint initiative by the DBSE and ParCIO groups at Otto-von-Guericke-Universität Magdeburg and is part of DFG Priority Program 2377. Its core objective is to investigate the benefits of a common storage engine that manages a heterogeneous storage landscape, including traditional storage devices and non-volatile memory technologies. The project aims to provide a prototypical standalone software library to be used by third-party projects. High-performance computing workflows will be supported through an integration of SMASH into the existing JULEA storage framework, and database systems will be able to use the interface of SMASH directly whenever data is stored or accessed.
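
The central idea, a single engine interface that hides where objects physically live, can be sketched as follows. This is a minimal Python illustration, not the SMASH API; all names (Backend, StorageEngine, put/get) and the size-based placement policy are invented for this sketch.

    # Minimal sketch of a common storage engine over a heterogeneous storage
    # landscape. All names and the placement policy are illustrative only.
    from abc import ABC, abstractmethod

    class Backend(ABC):
        """One device class in the storage hierarchy (e.g., NVM, SSD, tape)."""
        @abstractmethod
        def write(self, key: str, value: bytes) -> None: ...
        @abstractmethod
        def read(self, key: str) -> bytes: ...

    class DictBackend(Backend):
        """Stand-in for a real device driver; keeps objects in a dict."""
        def __init__(self):
            self._data = {}
        def write(self, key, value):
            self._data[key] = value
        def read(self, key):
            return self._data[key]

    class StorageEngine:
        """Routes each object to a device class via a simple placement policy:
        small, latency-critical objects to fast NVM, the rest to bulk storage."""
        def __init__(self, nvm: Backend, bulk: Backend, nvm_limit: int = 4096):
            self.nvm, self.bulk, self.nvm_limit = nvm, bulk, nvm_limit
            self._placement = {}
        def put(self, key: str, value: bytes) -> None:
            target = self.nvm if len(value) <= self.nvm_limit else self.bulk
            target.write(key, value)
            self._placement[key] = target
        def get(self, key: str) -> bytes:
            return self._placement[key].read(key)

    engine = StorageEngine(nvm=DictBackend(), bulk=DictBackend())
    engine.put("metadata", b"small, latency-critical object")  # lands on "NVM"
    engine.put("scan_data", b"x" * 1_000_000)                  # lands on bulk storage
    print(len(engine.get("scan_data")))                        # -> 1000000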

Leader: Prof. Dr. Gunter Saake, Jun.-Prof. Dr. Michael Kuhn, Dr.-Ing. David Broneske
Members: Sajad Karim, Johannes Wünsche
Keywords: Non-Volatile Memory, Intel® Optane™ DC Persistent Memory Module, Bε-Tree, Write-Optimized Storage Engine
Type: Third-party funded project
Funded by: DFG Priority Program 2377
Funded: October 2022 to October 2025
Website: SMASH
  • Optimizing graph databases focussing on data processing and integration of machine learning for large clinical and biological datasets

Graph databases are an efficient technique for storing and accessing highly interlinked data in a graph structure, such as connections between measurements and environmental parameters or clinical patient data. Their flexible node structure makes it easy to add the results of different examinations, covering simple blood pressure measurements, the latest CT and MRT scans, or high-resolution omics analyses (e.g., from tumor biopsies or gut microbiome samples). However, the full potential of data processing and analysis with graph databases is not yet exploited in biological and clinical use cases. In particular, the huge amount of interconnected data to be loaded, processed, and analyzed leads to processing times too long for integration into clinical workflows. To this end, novel graph-operator optimizations as well as a suitable integration of analysis approaches are necessary. This project addresses these problems in two directions: (i) proposing suitable optimizations for graph database operations, also incorporating the usage of modern hardware, and (ii) integrating machine learning algorithms for an easier and faster analysis of the biological data. For the first direction, we investigate the state of the art in graph database systems and their storage and processing models, and subsequently propose optimizations for efficient graph maintenance and analytical operators. For the second direction, we envision bringing machine learning algorithms closer to their data providers, the graph databases. As a first step, we feed machine learning algorithms directly with the graph as input by designing suitable graph operators. As a second step, we integrate machine learning directly into the graph database by adding special nodes that represent the model of the machine learning algorithm. The results of our project are improved operators exploiting modern hardware as well as integration concepts for machine learning algorithms. Our generally devised approaches will push the operation and analysis of huge graphs in a plethora of use cases beyond our target use case of biological and clinical data analysis.
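
The two steps of the second direction can be illustrated with a toy example. The following Python sketch is purely illustrative and assumes scikit-learn is available; the graph layout, property names, and the idea of storing the estimator under a "model" node are invented stand-ins for the project's concepts.

    # Illustrative sketch (not project code): a graph operator feeds node data
    # directly into a machine-learning algorithm, and the trained model is
    # integrated into the graph as a special node.
    from sklearn.linear_model import LogisticRegression

    # Tiny property graph: node id -> (properties, adjacency list)
    graph = {
        "p1": ({"blood_pressure": 160, "age": 71, "diagnosis": 1}, ["m1"]),
        "p2": ({"blood_pressure": 118, "age": 35, "diagnosis": 0}, ["m1"]),
        "p3": ({"blood_pressure": 150, "age": 64, "diagnosis": 1}, ["m2"]),
        "m1": ({"type": "measurement_series"}, []),
        "m2": ({"type": "measurement_series"}, []),
    }

    def patient_feature_operator(g):
        """Graph operator: stream (features, label) pairs for patient nodes."""
        for node_id, (props, _) in g.items():
            if "diagnosis" in props:
                yield [props["blood_pressure"], props["age"]], props["diagnosis"]

    X, y = zip(*patient_feature_operator(graph))
    model = LogisticRegression().fit(list(X), list(y))

    # Step two: store the trained model as a special node, so later queries can
    # traverse to it and classify new patients inside the database.
    graph["ml_model_1"] = ({"type": "model", "estimator": model}, [])
    print(graph["ml_model_1"][0]["estimator"].predict([[145, 60]]))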

Leader: Prof. Dr. Gunter Saake, Jun.-Prof. Robert Heyer
Members: Daniel Walke
Keywords: Graph databases, query optimization, machine learning, clinical and biological data
Type: Third-party funded project
Funded by: State of Saxony-Anhalt (Land Sachsen-Anhalt)
Funded: December 2021 to November 2024
  • Adaptive Data Management in Evolving Heterogeneous Hardware/Software Systems (ADAMANT-II)

Our aim is to develop new processing concepts for exploiting the special characteristics of hardware accelerators in heterogeneous system architectures for classical and non-classical database systems. On the system management level, we want to research alternative query modeling concepts and mapping approaches that are better suited to capture the extended feature sets of heterogeneous hardware/software systems. On the hardware level, we will work on how processing engines for non-classical database systems can benefit from heterogeneous hardware and in which way processing engines mapped across device boundaries may provide benefits for query optimization. Our working hypothesis is that standard query mapping approaches, which consider queries on the level of individual operators, are not sufficient to exploit the extended processing features of heterogeneous system architectures. Likewise, implementing a complete operator on an individual device does not seem optimal for exploiting heterogeneous systems. We base these claims on our results from the first project phase, in which we developed the ADAMANT architecture that allows a plug & play integration of heterogeneous hardware accelerators. In the second project phase, we will extend ADAMANT by the proposed processing approaches and focus on how to utilize the extended feature sets of heterogeneous systems rather than how to set such systems up.
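
The working hypothesis can be made concrete with a toy cost model. The sketch below (Python; all cost numbers and device names are invented, and data-transfer costs are deliberately ignored) contrasts whole-operator placement with per-primitive placement across devices.

    # Hedged sketch of the hypothesis above: instead of placing a whole operator
    # on one device, split it into primitives and map each primitive to the
    # device with the lowest estimated cost. All numbers are made up.
    PRIMITIVES = ["scan", "filter", "aggregate"]

    # Hypothetical per-primitive cost estimates (ms) per device for one query.
    cost = {
        "CPU":  {"scan": 9.0, "filter": 4.0, "aggregate": 2.0},
        "GPU":  {"scan": 3.0, "filter": 1.5, "aggregate": 5.0},
        "FPGA": {"scan": 2.5, "filter": 2.0, "aggregate": 6.0},
    }

    def map_whole_operator(cost):
        """Classical approach: one device executes all primitives."""
        return min(cost, key=lambda d: sum(cost[d].values()))

    def map_per_primitive(cost):
        """Cross-device approach: each primitive picks its cheapest device.
        (A real optimizer must also account for transfer costs, omitted here.)"""
        return {p: min(cost, key=lambda d: cost[d][p]) for p in PRIMITIVES}

    print(map_whole_operator(cost))   # -> 'GPU' (9.5 ms in total)
    print(map_per_primitive(cost))    # -> scan: FPGA, filter: GPU, aggregate: CPU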

Leader: Prof. Dr. Gunter Saake
Members: Bala Gurumurthy
Keywords: heterogeneous hardware, FPGA, Adaptive systems
Type: Third-party funded project
Funded by: Deutsche Forschungsgemeinschaft (DFG, SPP 2037)
Funded: January 2021 to December 2023
  • Digital Programming in Teams - Adaptive Support for Collaborative Learning (DiP-iT)

Collaborative programming is a core part of everyday professional work in computer science. These processes, which are complex on both a technical and a social level, are often treated abstractly in computer science degree programs and play a subordinate role in didactic concepts for learning to program. In group work, learners have to organize and coordinate themselves and regulate their learning processes. To exploit the potential of collaborative learning for acquiring programming languages and fostering social competencies, learners must receive didactic support when needed, both before and during the learning process. In the subproject DiP-iT-OVGU, supported by the project partners, we will develop and evaluate a digital didactic concept for collaborative programming learning based on empirical studies, including the corresponding (media-)didactic approaches. In doing so, we aim to enable the transfer of this concept to other universities. On the technical level, a process model will be developed that enables the reuse of research data and the transfer of data models (e.g., for adaptive didactic support) to other courses and teaching/learning systems. The subproject contributes to the overall project with the following objectives:

  • analysis and systematization of attitudes and prior experiences of the actors,
  • development of conceptual, media-didactic criteria for integrating collaborative programming learning into courses,
  • development of suitable teaching-learning scenarios and creation of a corresponding digital didactic concept,
  • empirical grounding through formative and summative evaluation,
  • investigation of the effectiveness of forms of instructional guidance aligned with learners' needs,
  • support for the transfer of the findings, both in content and technically.
Leader: Prof. Dr. Gunter Saake
Members: Victor Obionwu
Keywords: Teaching Programming
Type: Third-party funded project
Funded by: German federal government
Funded: March 2020 to August 2023
  • COOPeR: Cross-device OLTP/OLAP PRocessing

Today's database systems face two challenges. On the one hand, database systems must combine on-line transaction processing (OLTP) and on-line analytic processing (OLAP) to enable real-time analysis of business processes. Real-time analysis is necessary to improve the quality of generated reports: it is a competitive advantage to use fresh data rather than the historical data of traditional OLAP systems. On the other hand, computer systems are increasingly heterogeneous and provide a broader range of hardware promising higher performance. This trend leads to an architecture shift from database systems for single-core CPUs to systems for multi-core CPUs with co-processor support. Database systems must take both trends into account to improve report quality and transaction performance and to ensure that they meet future requirements (e.g., more complex queries or increased data volumes). Unfortunately, current research focuses only on either combining OLTP and OLAP or on co-processor utilization; there is no holistic approach for merging these research branches. In this project, we address the challenges of database systems that process combined OLTP/OLAP workloads on heterogeneous CPU/co-processor compute platforms. The main challenge is to ensure the ACID properties for OLTP and combined OLTP/OLAP workloads in heterogeneous systems while providing efficient processing of the combined workloads.

Leader: Prof. Dr. Gunter Saake
Members: Marcus Pinnecke
Keywords: Hybrid Workload Management, Heterogeneous Database Systems
Type: Third-party funded project
Funded by: Deutsche Forschungsgemeinschaft (DFG)
Funded: September 2016 to August 2019
  • EXtracting Product Lines from vAriaNTs (EXPLANT II)

A software product line (SPL) enables the systematic management of a set of reusable software artifacts and thus the efficient generation of different variants of a software system. In practice, however, developers often create software variants ad hoc by copying software artifacts and adapting them to new requirements (clone-and-own). The lack of systematics and automation makes maintaining and evolving the variants time-consuming and error-prone. We therefore propose a stepwise migration of cloned software variants into a compositional (i.e., modular) SPL.
In the first project phase, we achieved considerable results on variant-preserving transformations and the corresponding analyses on the model and code level. In the second phase, we want to build on the insights gained, in particular: (1) an automated migration based solely on code-clone detection does not produce coherent software artifacts with a specific functionality; (2) some potential cooperation partners had reservations about migrating their systems, fearing the introduction of new faults; (3) annotative SPLs appear to be less error-prone, and thus more robust to change, than previously assumed. Due to the problems with industrial partners (2), we concluded that further research is needed, especially on quality assurance for migrated SPLs, migration costs, and properties of software artifacts. We therefore want to investigate which cost factors play a role in the migration to and use of SPLs and how strong their respective influence is. Furthermore, we plan to identify quality metrics for migrated SPLs. In the first project phase, we already proposed a semi-automated migration process (1), which we now want to extend further and integrate with new analyses. In particular, we want to investigate whether useful information, especially about the intention of the developers, can be obtained from sources other than the code. Promising approaches are the analysis of version control systems and the analysis of existing behavioral and architectural models of a system. Moreover, to increase the degree of automation, we intend to apply further refactorings, such as "Move Method". To improve the structure, and thus the maintainability, of the resulting modularization, we also plan to extend our migration process to multi software product lines, which would allow individual functionalities of a system to be separated more cleanly. We also want to investigate which granularity is best suited for migrated software artifacts and whether annotative approaches (3) can offer advantages over compositional approaches for migrated SPLs.

Website: Project-Website
Leader: Gunter Saake, Thomas Leich
Type: Third-party funded project
Funded by: DFG
Funded: 01.09.2019 - 31.08.2021
Members: Jacob Krüger
Keywords: Software product lines, clone-and-own, migration, product variants, code clones, refactoring
  • MetaProteomeAnalyzer Service (MetaProtServ)

Metaproteomics targets the investigation of cellular functions of complex microbial communities and complements metagenomics and metatranscriptomics as a frequently used tool in microbial ecology (e.g., the human gut microbiome, biogas plants). Bioinformatics tools developed for the proteomics of pure cultures cannot be used with satisfactory results. For example, database searches for protein identification against metagenome sequences lead to a high number of redundant hits with respect to the taxonomy and function of the identified proteins. The MetaProteomeAnalyzer (MPA) software was therefore developed for a better evaluation of metaproteome data. Within MetaProtServ, this user-friendly program with a graphical interface is to be made available as a web service in order to convince more scientists of the benefits of metaproteomics. Targeted user training and individual support shall ease access to this software within the scientific community. Functionality and maintainability will be developed further in parallel for the future web service as well as for a standalone version, based on a common code base and structure. For example, the software will be extended by interfaces for importing and exporting metaproteome data (mzIdentML). The web service will be hosted by the de.NBI center Bielefeld-Gießen (Center for Microbial Bioinformatics), with which the de.NBI partner project MetaProtServ is associated.

Website: Project-Website
Leader: Gunter Saake, Dirk Benndorf
Type: Third-party funded project
Funded by: German federal government
Funded: 01.12.2016 to 31.10.2021
Members: Robert Heyer, Kay Schallert
Keywords: Bioinformatics, metaproteomics, protein analysis, web services, de.NBI

Other Research Projects

  • Query Acceleration Techniques in Co-Processor-Accelerated Main-memory Database Systems

The project addresses the current focus of analytics in main-memory databases on modern hardware: the heterogeneity of processors and their integration into query processing. Due to the multitude of optimizations and algorithm variants and the unbounded number of use cases, constructing the perfect query plan is nearly impossible.
The goal of this habilitation project is (1) to compile a comprehensive catalog of promising algorithm variants, (2) to achieve an optimal selection of variants as part of the overall query optimization, and (3) to achieve load balancing in co-processor-accelerated systems.

  1. As further dimensions, the variant catalog covers execution on column-oriented data as well as the use of special index structures, and it includes different result representations. From all possible dimensions, an abstraction layer is then developed so that an algorithm can be defined independently of its optimizations. This way, every variant can be generated and executed efficiently with little redundant code.
  2. Due to the enormous variant space spanned by the variant dimensions and the influence of the executing processors, choosing a variant to execute is non-trivial. The goal here is to compare learning-based methods with respect to their suitability for algorithm selection in order to make valid decisions (see the sketch after this list). These decisions shall furthermore be extended to index creation and to the data placement addressed in goal (3).
  3. Load balancing in co-processor-accelerated systems is influenced by the degree of parallelism. This degree splits into several dimensions, since database operations can be decomposed into smaller functional units (so-called primitives). These primitives can either run on the entire data set or be executed on partitions. All of these optimization potentials (different granularities and partition sizes) must be analyzed and chosen optimally to enable adequate performance under current and future query loads. The goal is to train a model that produces optimal placements and optimized plans. Importantly, the model must allow its decisions to be traced in order to achieve generalizability.
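
To make goal (2) concrete, the following Python sketch shows the general shape of learning-based variant selection: predict each variant's runtime from earlier measurements and pick the cheapest. Variant names, measurements, and the 1-nearest-neighbour predictor are invented placeholders for the actual learned models under study.

    # Illustrative sketch of learning-based variant selection. All numbers and
    # variant names are invented for this example.
    observations = {
        # variant -> list of (rows scanned, measured runtime in ms)
        "branching_scan":  [(1e4, 0.7), (1e6, 60.0), (1e8, 7000.0)],
        "branchfree_scan": [(1e4, 1.1), (1e6, 45.0), (1e8, 4100.0)],
        "simd_scan":       [(1e4, 0.9), (1e6, 12.0), (1e8, 1300.0)],
    }

    def predict(history, rows):
        """1-nearest-neighbour runtime estimate; a stand-in for a real model."""
        return min(history, key=lambda p: abs(p[0] - rows))[1]

    def choose_variant(rows):
        """Run the variant with the lowest predicted runtime."""
        return min(observations, key=lambda v: predict(observations[v], rows))

    print(choose_variant(5e7))   # -> 'simd_scan' for this made-up history
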
Leader: Prof. Dr. rer. nat. Gunter Saake
Funded by: Institutional budget
Members: Dr.-Ing. David Broneske
Keywords: Machine learning for database systems, variant tuning, co-processor acceleration
  • FeatureIDE: An Extensible Framework for Feature-Oriented Software Development
Website: FeatureIDE Project Site
Manager: Sebastian Krieter
  Dr.-Ing. Thomas Thüm (Technische Universität Braunschweig, Germany)
Funded by: Metop, institutional budget
Members: Thomas Leich; Gunter Saake
Keywords: Feature-oriented software development (FOSD), software product lines, feature modeling, feature-oriented programming, aspect-oriented programming, delta-oriented programming, preprocessors, tool support
  • Code Smells in Highly Configurable Software

Modern software systems are increasingly configurable. Conditional compilation based on C preprocessor directives (i.e., #ifdefs) is a popular variability mechanism to implement this configurability in source code. Although C preprocessor usage has been subject to repeated criticism, with regard to variability implementation, there is no thorough understanding of which patterns are particularly harmful. Specifically, we lack empirical evidence of how frequently reputedly bad patterns occur in practice and which negative effect they have. For object-oriented software, in contrast, code smells are commonly used to describe source code that exhibits known design flaws, which negatively affect understandability or changeability. Established code smells, however, have no notion of variability. Consequently, they cannot characterize flawed patterns of variability implementation. The goal of this research project is therefore to create a catalog of variability-aware code smells. We will collect empirical proof of how frequently these smells occur and what their negative impact is on understandability, changeability, and fault-proneness of affected code. Moreover, we will develop techniques to detect variability-aware code smells automatically and reliably.
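
As a flavor of what such automatic detection could look like, the following Python sketch flags one candidate smell, deeply nested #ifdef blocks, in a C fragment. The smell, the threshold, and the detector are illustrative assumptions, not the project's final catalog or tooling.

    # Sketch of a detector for one candidate variability-aware code smell:
    # deeply nested preprocessor annotations. Threshold and smell are examples.
    C_SOURCE = """\
    #ifdef FEATURE_A
    int f(void) {
    #ifdef FEATURE_B
    #ifdef FEATURE_C
        return 3;            /* nesting depth 3: reputedly hard to understand */
    #endif
    #endif
        return 1;
    }
    #endif
    """

    def nested_ifdef_smells(source: str, max_depth: int = 2):
        """Report (line number, depth) where #ifdef nesting exceeds max_depth."""
        depth, findings = 0, []
        for lineno, line in enumerate(source.splitlines(), start=1):
            token = line.strip()
            if token.startswith(("#ifdef", "#ifndef", "#if ")):
                depth += 1
                if depth > max_depth:
                    findings.append((lineno, depth))
            elif token.startswith("#endif"):
                depth -= 1
        return findings

    print(nested_ifdef_smells(C_SOURCE))   # -> [(4, 3)]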

Members: Wolfram Fenske
  Sandro Schulze
Keywords: Software product lines, code smells, software faults, conditional compilation, highly configurable software
  • Load-Balanced Index Structures for Self-tuning DBMS

Index tuning, as part of database tuning, is the task of selecting and creating indexes with the goal of reducing query processing times. However, in dynamic environments with various ad-hoc queries, it is difficult to identify potentially useful indexes in advance. The approach for self-tuning index configurations developed in previous research provides a solution for continuous tuning on the level of index configurations, where a configuration is a set of common index structures. In this project, we investigate a novel approach that moves the solution of the problem at hand to the level of the index structures themselves, i.e., we create index structures that are inherently self-optimizing.

Leader: Dr.-Ing. Eike Schallehn
Type: Institutional budget
Keywords: Index-structure selection, self-tuning

Completed Projects

  • Adaptive Data Management in Evolving Heterogeneous Hardware/Software Systems (ADAMANT)

The database community faces an increasing diversity in their application scenarios as well as an increasing heterogeneity in the hardware landscape. This development requires database systems to be adaptable to new and maybe yet unknown applications and hardware. Currently, we lack such database systems, because these are usually designed to efficiently perform single use cases on a specific type of hardware requiring costly redesigns. In this project, we aim to provide concepts for adaptive database systems that enable users to combine new functionality and hardware devices in a plug’n’play fashion. To achieve this goal, we aim to find suitable interfaces to abstract functionality and hardware and allow for their efficient interoperability. This interoperability also allows us to apply advanced parallelization strategies that are not limited to data and functional parallelism, but can leverage cross-device parallelism. We aim to incorporate this opportunity into the query optimization process. Consequently, we increase the complexity of query optimization. To mitigate the negative effects of increased complexity, we aim to investigate strategies to distribute the optimization task across several layers and push some of them nearer to the processing devices. This strategy should also allow us to incorporate self-adaptivity capabilities of hardware devices, such as dynamic partial reconfiguration of Field Programmable Gate Arrays (FPGAs), that we plan to leverage for efficient query processing. The resulting plug’n’play functionality of our database system is a key factor to allow for adaptability and efficient processing even for future use cases.
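
The plug'n'play idea can be sketched as a registry in which each hardware backend contributes operator implementations behind a common interface. The following Python sketch is an invented illustration; ADAMANT's actual interfaces are more elaborate.

    # Invented sketch of plug'n'play operator registration: new hardware only
    # registers its implementations; the engine core stays unchanged.
    registry = {}   # operator name -> {device name: implementation}

    def register(operator: str, device: str):
        def wrap(fn):
            registry.setdefault(operator, {})[device] = fn
            return fn
        return wrap

    @register("selection", "cpu")
    def selection_cpu(rows, predicate):
        return [r for r in rows if predicate(r)]

    # A newly plugged-in accelerator just registers another implementation
    # (here the "FPGA" version is only simulated in plain Python).
    @register("selection", "fpga")
    def selection_fpga(rows, predicate):
        return [r for r in rows if predicate(r)]   # placeholder for offloading

    def execute(operator, device, *args):
        return registry[operator][device](*args)

    print(execute("selection", "fpga", range(10), lambda r: r % 2 == 0))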

Website: Project-Website
Leader: Gunter Saake
Type: Third-party funded project
Funded by: Deutsche Forschungsgemeinschaft (DFG)
Funded: 01.10.2017 to 30.09.2020
Members: Bala Gurumurthy
Keywords: heterogeneous hardware, FPGA, Adaptive systems
  • Legal Horizon Scanning

Every company needs to comply with national and international laws and regulations. Unfortunately, staying compliant is a challenging task given the volume and velocity of laws and regulations. Furthermore, laws are often incomplete or inconclusive, so court judgments must also be considered for compliance. Hence, companies in different sectors, e.g., energy, transport, or finance, spend millions of dollars every year to ensure compliance. In this project, we want to automate the process of identifying and analyzing the impact of (changing) laws, regulations, and court judgments using a combination of Information Retrieval, Data Mining, and Scalable Data Management techniques. Based on the automated identification and impact analysis, not only can the costs of compliance be reduced, but the quality can also be increased.
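
As an illustration of the information-retrieval building block, the following Python sketch (assuming scikit-learn; all texts are invented) ranks internal policy documents by similarity to a changed regulation, so that likely affected documents surface first. The project's actual techniques may differ.

    # Invented illustration: rank policy documents by TF-IDF cosine similarity
    # to a newly changed regulation, surfacing likely affected documents first.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    policies = [
        "data retention period for customer records is ten years",
        "travel expense reimbursement requires original receipts",
        "personal data must be deleted on customer request",
    ]
    changed_regulation = "new rules shorten retention periods for personal data"

    vectorizer = TfidfVectorizer()
    policy_vectors = vectorizer.fit_transform(policies)
    query_vector = vectorizer.transform([changed_regulation])
    scores = cosine_similarity(query_vector, policy_vectors).ravel()

    for score, policy in sorted(zip(scores, policies), reverse=True):
        print(f"{score:.2f}  {policy}")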

Keywords: Legal Horizon Scanning, Information Retrieval, Data Mining
Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: Investitionsbank Sachsen-Anhalt, Europäischer Fonds für regionale Entwicklung (EFRE)
Funded: 04.04.2017 - 03.04.2019
Members: Wolfram Fenske, Sabine Wehnert
Partners: Legal Horizon AG

See also Gunter Saake's pages at the Forschungsportal Sachsen-Anhalt

  • GPU-Accelerated Join-Order Optimization

Different join orders can lead to execution times that vary by several orders of magnitude, which makes join-order optimization one of the most critical optimizations within DBMSs. At the same time, join-order optimization is an NP-hard problem, which makes the computation of an optimal join order highly compute-intensive. Because current hardware architectures use highly specialized and parallel processors, the sequential algorithms for join-order optimization proposed in the past cannot fully utilize the computational power of current hardware. Although existing approaches for join-order optimization such as dynamic programming benefit from parallel execution, there are no approaches for join-order optimization on highly parallel co-processors such as GPUs.

In this project, we are building a GPU-accelerated join-order optimizer by adapting existing join-order optimization approaches. We are interested in the effects of GPUs on join-order optimization itself as well as in the consequences for query processing. For GPU-accelerated DBMSs that use GPUs for query processing, such as CoGaDB, we need to identify efficient scheduling strategies for query processing and query optimization tasks, such that GPU-accelerated optimization does not slow down query processing on the GPU.
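
For reference, the sequential baseline that such work parallelizes is dynamic programming over relation subsets. The following Python sketch uses a toy cost model (invented cardinalities, no selectivities) just to show the structure of the search space that makes GPU parallelization attractive.

    # Toy dynamic-programming join-order enumeration over relation subsets.
    # Cardinalities and the cost model are invented for illustration.
    from itertools import combinations

    card = {"A": 1000, "B": 100, "C": 10, "D": 1}   # relation cardinalities
    rels = list(card)

    # best[subset] = (cost, estimated result size, plan string)
    best = {frozenset([r]): (0.0, card[r], r) for r in rels}

    for size in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(rels, size)):
            for k in range(1, size):
                # each split is enumerated twice (left/right swapped); fine here
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    lcost, lsize, lplan = best[left]
                    rcost, rsize, rplan = best[right]
                    cost = lcost + rcost + lsize * rsize   # toy join cost
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, lsize * rsize,
                                        f"({lplan} JOIN {rplan})")

    print(best[frozenset(rels)])   # cheapest toy plan for all four relations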

Manager: Andreas Meister
Type: Institutional budget
Keywords: GPU-accelerated data management, self-tuning
  • Model-Based Refinement of Product Lines

Software product lines are families of related software systems that are developed by taking variability into account during the complete development process. In model-based refinement methods (e.g., ASM, Event-B, Z, VDM), systems are developed by stepwise refinement of an abstract, formal model.

In this project, we develop concepts to combine model-based refinement methods and software product lines. On the one hand, this combination aims to improve the cost-effectiveness of applying formal methods by taking advantage of the high degree of reuse provided by software product lines. On the other hand, it helps to handle the complexity of product lines by providing means to detect defects on a high level of abstraction, early in the development process.

Members: Fabian Benduhn
Keywords: software product lines, formal methods, refinement
  • Modern Data Management Technologies for Genome Analysis

Genome analysis is an important method to improve disease detection and treatment. The introduction of next-generation sequencing techniques makes it possible to generate genome data for analysis in less time and at reasonable cost. In order to provide fast and reliable genome analysis despite ever increasing amounts of genome data, genome data management and analysis techniques must also improve. In this project, we develop concepts and approaches to use modern database management systems (e.g., column-oriented, in-memory database management systems) for genome analysis.

Project's scope:

  1. Identification and evaluation of genome analysis use cases suitable for database support
  2. Development of data management concepts for genome analysis using modern database technology with regard to chosen use cases and data management aspects such as data integration, data integrity, data provenance, data security
  3. Development of efficient data structures for querying and processing genome data in databases for defined use cases
  4. Exploiting modern hardware capabilities for genome data processing
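
As a minimal illustration of scope item 3 above, the following Python sketch (assuming NumPy; data invented) stores variant calls column-wise, so a region query only scans the columns it needs, which is the core advantage of column-oriented storage for genome data.

    # Invented columnar layout for genotype calls: one array per attribute.
    import numpy as np

    chrom    = np.array([1, 1, 1, 2, 2])
    position = np.array([101_000, 101_250, 205_000, 50_310, 50_990])
    genotype = np.array(["0/1", "1/1", "0/0", "0/1", "1/1"])

    # Region query "chromosome 1, positions 100k-200k" touches two columns only.
    mask = (chrom == 1) & (position >= 100_000) & (position <= 200_000)
    print(genotype[mask])   # -> ['0/1' '1/1']
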
Leader: Prof. Dr. Gunter Saake
Members: Sebastian Dorok
Keywords: genome analysis, modern database technologies, main memory database systems, column-store
  • Variability in Service-Oriented Computing
Economies of scale are achieved in service-oriented computing (SOC) by offering services to multiple consumers, which demands the ability to change or vary the services effectively and efficiently for each consumer. Service providers want to retain consumers and maximize their profits by offering variability in services. Many solutions exist to address variability; however, each solution is tailored to a specific problem, and a holistic view or framework for addressing variability issues in detail is missing.

In this project, we focus on variability in SOC. We classify variability into different layers, survey variability mechanisms from the literature, and summarize solutions, consequences, and possible combinations in the form of a pattern catalogue. Based on the pattern catalogue, we compare different variability patterns and combinations of patterns using evaluation criteria. Our catalogue helps to choose an appropriate technique for the variability problem at hand and illustrates its consequences in SOC. We will evaluate our solution catalogue using a case study.

Members: Ateeq Khan
Keywords: service-oriented computing, software as a service (SaaS); variability; service customization; variability approaches
  • EXtracting Product Lines from vAriaNTs (EXPLANT)

Software product lines promote strategic reuse and support variability in a systematic way. In practice, however, the need for reuse and variability has often been satisfied by copying programs and adapting them as needed — the clone-and-own approach. The result is a family of cloned product variants that is hard to maintain in the long term. This project aims at consolidating such cloned product families into a well-structured, modular software product line. Guided by code-clone detection, architectural analyses, and domain knowledge, the consolidation process is semi-automatic and stepwise. Each step constitutes a small, semantics-preserving transformation of the code, the feature model or both. These semantics-preserving transformations are called variant-preserving refactorings.

Website: Project-Website
Leader: Gunter Saake, Thomas Leich
Type: Third-party funded project
Funded by: DFG
Funded: 16.02.2016 - 15.02.2018
Members: Wolfram Fenske, Jacob Krüger
Keywords: Software product lines, clone-and-own, migration, product variants, code clones, refactoring
  • Software Product Line Feature Extraction from Natural Language Documents using Machine Learning Techniques

Feature model construction from the requirements or textual descriptions of products is often tedious and ineffective. In this project, we automatically analyze natural language documents of products and, based on machine learning techniques, cluster tightly related requirements into features during domain analysis. This method can assist developers by suggesting possible features, and it improves the efficiency and accuracy of feature modeling to a certain extent. The research focuses on feature extraction from requirements or textual descriptions of products in domain analysis: descriptors are extracted from the documents, transformed into vectors to form a word vector space, and clustered into features using a clustering algorithm; the relationships between the features are then inferred. A simulation experiment on feature extraction from natural language documents of products is designed to show that machine learning techniques can handle this task.
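
The described pipeline, descriptors to vectors to clusters, can be sketched in a few lines. The following Python example assumes scikit-learn; the requirements, the cluster count, and the choice of TF-IDF plus k-means are illustrative stand-ins for the techniques under investigation.

    # Invented illustration of the pipeline: vectorize requirement texts and
    # cluster tightly related requirements into feature candidates.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    requirements = [
        "the system shall encrypt stored files",
        "files must be encrypted with a user supplied key",
        "users can share documents via a public link",
        "document sharing requires an expiring link",
    ]

    vectors = TfidfVectorizer().fit_transform(requirements)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for cluster in range(2):
        print(f"feature candidate {cluster}:")
        for req, label in zip(requirements, labels):
            if label == cluster:
                print("  -", req)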

Leader: Prof. Dr. Gunter Saake
Members: Yang Li
Keywords: Feature extraction, Software Product Line, machine learning, natural language documents
Type: Third-party funded project
Funded by: Graduate Funding of Saxony-Anhalt
Funded: June 2016 to May 2019
  • Efficient and Effective Entity Resolution Under Cloud-Scale Data

Several different descriptions may exist for one real-world entity. The differences may result from typographical errors, abbreviations, data formatting, etc. Such differing descriptions lower data quality and lead to misunderstandings, so it is necessary to be able to resolve them. Entity Resolution (ER) is the process of identifying records that refer to the same real-world entity. It plays a vital role in diverse areas: not only in traditional applications such as census, health data, or national security, but also in web applications such as business mailing lists, online shopping, and web search. It is also an indispensable step in data cleaning, data integration, and data warehousing. In recent years, the rise of the web has led to an explosion of data volume. Sequential ER becomes laborious, even infeasible, when facing ever larger data volumes. Together with the demand for scalability in many applications, these factors make parallelism necessary for efficient, effective, and scalable ER. This project explores several popular big data processing frameworks, e.g., Hadoop MapReduce, Apache Spark, and Apache Flink, to help solve ER in parallel and to clarify their advantages and shortcomings when solving ER problems in different application scenarios.
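
The core ER steps that the frameworks parallelize can be shown sequentially in a few lines of Python. Records, the blocking key, and the similarity threshold below are invented; real pipelines use richer similarity measures and distributed execution.

    # Sequential sketch of ER: blocking to avoid the quadratic comparison of
    # all pairs, then a string-similarity match inside each block.
    from difflib import SequenceMatcher
    from itertools import combinations
    from collections import defaultdict

    records = [
        (1, "Jon Smith", "New York"),
        (2, "John Smith", "New York"),
        (3, "J. Smyth", "Boston"),
        (4, "Mary Jones", "Boston"),
    ]

    # Blocking: only records sharing a key (first letter + city) are compared.
    blocks = defaultdict(list)
    for rec in records:
        blocks[(rec[1][0], rec[2])].append(rec)

    matches = []
    for block in blocks.values():
        for a, b in combinations(block, 2):
            if SequenceMatcher(None, a[1], b[1]).ratio() > 0.8:
                matches.append((a[0], b[0]))

    print(matches)   # -> [(1, 2)]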

Leader: Prof. Dr. Gunter Saake
Members: Xiao Chen
Keywords: Entity Resolution, Parallel Computing, Apache Spark, Hadoop MapReduce
Type: Third-party funded project
Funded by: China Scholarship Council (CSC)
Funded: July 2014 to June 2018
  • On the Impact of Hardware on Relational Query Processing

Satisfying the performance needs of tomorrow typically implies using modern processor capabilities (such as single instruction, multiple data) and co-processors (such as graphics processing units) to accelerate database operations. Algorithms are typically hand-tuned to the underlying (co-)processors. This solution is error-prone, introduces high implementation and maintenance cost and is not portable to other (co-)processors. To this end, we argue for a combination of database research with modern software-engineering approaches, such as feature-oriented software development (FOSD). Thus, the goal of this project is to generate optimized database algorithms tailored to the underlying (co-)processors from a common code base. With this, we maximize performance while minimizing implementation and maintenance effort in databases on new hardware.

Project milestones:

  • Creating a feature model: Arising from heterogeneous processor capabilities, promising capabilities have to be identified and structured to develop a comprehensive feature model. This includes fine-grained features that exploit the processor capabilities of each device.
  • Annotative vs. compositional FOSD approaches: Both approaches have known benefits and drawbacks. To have a suitable mechanism to construct hardware-tailored database algorithms using FOSD, we have to evaluate which of these two approaches is the best for our scenario.
  • Mapping features to code: Arising from the feature model, possible code snippets to implement a feature have to be identified.
  • Performance evaluation: To validate our solution and derive rules for processor allocation and algorithm selection, we have to perform an evaluation of our algorithms.
Leader: Prof. Dr. Gunter Saake
Members: David Broneske
Funded by: Institutional budget
Keywords: heterogeneity of processing devices, CPU, GPU, FPGA, MIC, APU, tailored database operations
  • SPL Testing

Exhaustively testing every product of a software product line (SPL) is a difficult task due to the combinatorial explosion of the number of products. Combinatorial interaction testing is a technique to reduce the number of products under test. In this project, we aim to handle multiple and possibly conflicting objectives during the test process of SPL.
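
Combinatorial interaction testing can be illustrated with a greedy pairwise sampler: configurations are added until every pairwise feature interaction is covered. The following Python sketch with invented feature names shows the idea; the project additionally handles multiple, possibly conflicting objectives.

    # Greedy pairwise sampling for a tiny product line: cover every pairwise
    # feature interaction instead of testing all 2^n products. Features invented.
    from itertools import combinations, product

    features = ["Encryption", "Compression", "Logging"]
    configs = list(product([0, 1], repeat=len(features)))   # all 8 products

    def pairs(config):
        """All (feature-index pair, value pair) interactions in one config."""
        return {((i, j), (config[i], config[j]))
                for i, j in combinations(range(len(config)), 2)}

    uncovered = set().union(*map(pairs, configs))
    sample = []
    while uncovered:
        best = max(configs, key=lambda c: len(pairs(c) & uncovered))
        sample.append(best)
        uncovered -= pairs(best)

    print(len(sample), "of", len(configs), "products cover all pairs")
    print(sample)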

Website: Project-Website
Leader: Gunter Saake
Type: Third-party funded project
Funded by: DAAD
Funded: 01.10.2013 - 01.10.2016
Members: Mustafa Al-Hajjaji
Keywords: Software product lines, Testing, Sampling, Prioritization
  • Secure Data Outsourcing to Untrusted Clouds

Cloud storage solutions are offered by many big vendors, such as Google, Amazon, and IBM. The need for cloud storage has been driven by the generation of big data in almost every corporation. The biggest hurdle in outsourcing data to cloud vendors is the security concerns of the data owner. These security concerns have become the stumbling block for the large-scale adoption of third-party cloud databases. The focus of this PhD project is to provide a comprehensive framework for the security of data outsourced to untrusted clouds. This framework includes encrypted storage in cloud databases, secure data access, privacy of data access, and authenticity of the stored data in the cloud. The security framework will be based on Hadoop-based open source products.

Members: Muhammad Saqib Niaz
Funded by: Higher Education Commission of Pakistan and DAAD
Funded: Oct. 2014 to Oct. 2017
Keywords: Hadoop, HDFS, Cloud Databases, Security
  • Southeast Asia Research Network: Digital Engineering

German research organizations are increasingly interested in outstanding Southeast Asian institutions as partners for collaboration in the fields of education and research. Bilateral know-how, technology transfer and staff exchange as well as the resultant opportunities for collaboration are strategically important in terms of research and economics. Therefore, the establishment of a joint research structure in the field of digital engineering is being pursued in the project "SEAR DE Thailand" under the lead management of Otto von Guericke University Magdeburg (OvGU) in cooperation with the Fraunhofer Institute for Factory Operation and Automation (IFF) and the National Science and Technology Development Agency (NSTDA) in Thailand.

Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: BMBF
Funded: 01.06.2013 - 30.05.2017  
Members: Sebastian Krieter
Partners: NSTDA
  Fraunhofer IFF
Keywords: Digital Engineering
  • Supporting Advanced Data Management Features for the Cloud Environment

Description: The aim of this project is to support advanced features of cloud data management. The project has two basic directions. The first direction focuses on (self-)tuning for cloud data management clusters that serve one or more applications with divergent workload types. It aims to achieve dynamic clustering to support workload-based optimization. This approach is based on logical clustering within a DB cluster according to different criteria, such as data, optimization goal, thresholds, and workload types. The second direction focuses on the design of cloud-based massively multiplayer online games. It aims to provide a scalable, available, efficient, and reusable game architecture. Our approach is to manage data differently in multiple storage systems (file system, NoSQL system, and RDBMS) according to their data management requirements, such as data type, scale, and consistency.

Members: Siba Mohammad
  Ziqiang Diao
Keywords: Cloud data management, online games, self tuning

Clustering the Cloud - A Model for Self-Tuning of Cloud Data Management Systems

Over the past decade, cloud data management systems have become increasingly popular, because they provide on-demand elastic storage and large-scale data analytics in the cloud. These systems were built with the main intention of supporting scalability and availability in an easily maintainable way. However, the (self-)tuning of cloud data management systems to meet specific requirements beyond these basic properties, and for possibly heterogeneous applications, becomes increasingly complex. Consequently, the self-management ideal of cloud computing is yet to be achieved for cloud data management. The focus of this PhD project is (self-)tuning for cloud data management clusters that serve one or more applications with divergent workload types. It aims to achieve dynamic clustering to support workload-based optimization. Our approach is based on logical clustering within a DB cluster according to different criteria, such as data, optimization goal, thresholds, and workload types.

Type: Third-party funded project
Funded by: Syrian Ministry of Higher Education and DAAD
Funded: October 2011 - March 2015
Members: Siba Mohammad

Consistent data management for cloud gaming

Cloud storage systems are able to meet the future requirements of the Internet by using non-relational database management systems (NoSQL DBMSs). NoSQL systems simplify the relational database schema and data model to improve system performance, for instance scalability and parallel processing. However, these properties of cloud storage systems limit the implementation of some web applications, such as massively multiplayer online games (MMOGs). In the research described here, we want to extend existing cloud storage systems in order to meet the requirements of MMOGs. We propose to build a transaction layer on top of the cloud storage layer to offer flexible ACID levels. The goal is to offer transaction processing to game developers as a service. Through the use of such an ACID-level model, both the availability of the existing system and data consistency during multi-player interaction can be configured according to specific requirements.

Type: Third-party funded project
Funded by: Graduate Funding of Saxony-Anhalt
Funded: July 2012 - December 2014
Members: Ziqiang Diao
  • Optimization and Self-Management Concepts for Data Warehouse Systems

Data warehouse systems have long been used for market and financial analyses in many sectors of the economy. Their application areas are constantly expanding, and at the same time the volumes of data to be kept (historical data) grow ever faster. Since these are often very complex and time-critical applications, the analyses and computations on the data must be optimized further and further. The steadily increasing performance of computer and server systems alone is not sufficient for this, because the applications keep introducing new requirements and increasingly complex computations. This also makes clear that the time and financial effort for operating such systems is immense. This project investigates which possibilities exist to extend previous approaches and to integrate new proposals into existing systems in order to increase their performance. To reach this goal, approaches from the field of self-tuning shall be used, since they allow systems to adapt autonomously to constantly changing conditions and requirements. These approaches shall be improved by extensions such as support for bitmap indexes. Furthermore, deeper levels of optimization will be addressed, making (autonomous) physical optimization possible and easier.

Members: Dr.-Ing. Andreas Lübcke (now at Regiocom, Magdeburg)
Keywords: Bitmap indexes, data warehouses, index structures, optimization, self-tuning, physical design
  • Software Product Line Languages and Tools

In this project, we focus on research and development of tools and languages for software product lines. Our research focuses on the usability, flexibility, and complexity of current approaches, and includes tools such as FeatureHouse, FeatureIDE, CIDE, FeatureC++, Aspectual Mixin Layers, Refactoring Feature Modules, and the formalization of language concepts. The research centers around the ideas of feature-oriented programming and explores boundaries toward other development paradigms, including type systems, refactorings, design patterns, aspect-oriented programming, generative programming, model-driven architectures, service-oriented architectures, and more.

Members: Dr.-Ing. Thomas Thüm (now at Technische Universität Braunschweig, Germany)
  Reimar Schröter
  Thomas Leich
  Norbert Siegmund
Project partners: Prof. Don Batory, University of Texas at Austin, USA
  Dr. Sven Apel, University of Passau
  Prof. Christian Lengauer, University of Passau
  Salvador Trujillo, PhD, IKERLAN Research Centre, Mondragon, Spain
Results: FeatureIDE, an extensible framework for feature-oriented software development
  SPL2go, a catalog of publicly available software product lines
  • SPL2go: A Catalog of Publicly Available Software Product Lines
Website: Project-Website
Manager: Dr.-Ing. Thomas Thüm (now at Technische Universität Braunschweig, Germany)
Funded by: Metop, institutional budget
Members: Thomas Thüm; Thomas Leich; Gunter Saake
Keywords: Software product lines, product-line analyses, variability modeling, feature model, domain implementation, source code, case studies
  • Reliable and Reproducible Evaluation of High-Dimensional Index Structures (QuEval)

Multimedia data, and high-dimensional data in general, have been subject to research for more than two decades and are gaining ever more momentum in the age of communication technology. From a database point of view, the myriad gigabytes of data pose the problem of managing them. Query processing, in particular, is a challenging task due to the high dimensionality of such data. In the past, dozens of index structures for high-dimensional data have been proposed, and some of them have become standard references. However, it is still some kind of black magic to decide which index structure fits a certain problem or outperforms other index structures.

Members: Dr. Veit Köppen
  Reimar Schröter
Keywords: High-dimensional index selection & tuning

QuEval

This is where QuEval, a framework for the quantitative comparison and evaluation of high-dimensional index structures, comes into play. QuEval is a Java-based framework that supports the comparison of index structures regarding certain characteristics, such as dimensionality, accuracy, or performance. Currently, the framework contains six different index structures. However, a main focus of the framework is its extensibility, and we encourage people to contribute to QuEval by providing more index structures or other interesting aspects for their comparison.

Website: Project-Website
Manager: Dr. Veit Köppen
Members: Alexander Grebhahn; Tim Hering; Veit Köppen; Christina Pielach; Martin Schäler; Reimar Schröter; Sandro Schulze
  • Sustainable Variability Management of Feature-Oriented Software Product Lines (NaVaS)

A software product line is a set of software-intensive systems that share a common, managed set of features. Product lines promise significant improvements to the engineering process of software systems with variability and are applicable to a wide range of domains, ranging from embedded devices to large enterprise solutions. The goal of "Sustainable Variability Management of Feature-Oriented Software Product Lines" is to improve the research prototype FeatureIDE, an integrated development environment especially targeted at the construction of software product lines. Apart from the benefits for practitioners, this endeavor will also improve education and research.

Website: Project-Website
Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: BMBF
Funded: 01.09.2014 - 31.08.2016  
Members: Reimar Schröter
Keywords: Software product lines, sustainable software development, variability management, holistic tool support
  • A Hybrid Query Optimization Engine for GPU accelerated Database Query Processing

Performance demands for database systems are ever increasing, and much research focuses on new approaches to fulfill the performance requirements of tomorrow. GPU acceleration is an emerging and promising opportunity to speed up query processing of database systems by using low-cost graphics processors as co-processors. One major challenge is how to combine traditional database query processing with GPU co-processing techniques and efficient database operation scheduling in a GPU-aware query optimizer. In this project, we develop a hybrid query processing engine, which extends the traditional physical optimization process to generate hybrid query plans and to perform a cost-based optimization such that the advantages of CPUs and GPUs are combined. Furthermore, we aim at a solution that is independent of database architecture and data model to maximize applicability.

Type: Institutional budget
Members: Sebastian Breß
Project partners: Prof. Kai-Uwe Sattler, Ilmenau University of Technology, Ilmenau;
  Prof. Ladjel Bellatreche, University of Poitiers, France;
  Dr. Tobias Lauer, Jedox AG (Freiburg im Breisgau)
Keywords: query processing, query optimization, GPU-accelerated data management, self-tuning

HyPE-Library

HyPE is a hybrid query processing engine built for the automatic selection of processing units for co-processing in database systems. The long-term goal of the project is to implement a fully fledged query processing engine that is able to automatically generate and optimize a hybrid CPU/GPU physical query plan from a logical query plan. It is a research prototype developed by the Otto-von-Guericke University Magdeburg in collaboration with Ilmenau University of Technology.

Website: Project-Website
Manager: Sebastian Breß
Members: Sebastian Breß; Klaus Baumann; Robin Haberkorn; Steven Ladewig; Harmen Landsmann; Tobias Lauer; Gunter Saake; Norbert Siegmund
Partner: Felix Beier; Ladjel Bellatreche; Max Heimel; Hannes Rauhe; Kai-Uwe Sattler

CoGaDB

CoGaDB is a prototype of a column-oriented GPU-accelerated database management system developed at the University of Magdeburg. Its purpose is to investigate advanced coprocessing techniques for effective GPU utilization during database query processing. It uses our hybrid query processing engine (HyPE) for the physical optimization process.

Website: Project-Website
Manager: Sebastian Breß
Members: Sebastian Breß; Robin Haberkorn; Rene Hoyer; Steven Ladewig; Gunter Saake; Norbert Siegmund; Patrick Sulkowski
Partner: Ladjel Bellatreche (LIAS/ISEA-ENSMA, Futuroscope, France)
  • Minimally invasive integration of the provenance concern into data-intensive systems

In the recent past, a new research topic named provenance has gained much attention. The purpose of provenance is to determine the origin and derivation history of data. Thus, provenance is used, for instance, to validate and explain computation results. Due to the digitalization of previously analogue processes that consume data from heterogeneous sources, and the increasing complexity of the respective systems, validating computation results is a challenging task. To face this challenge, there has been plenty of research resulting in solutions that allow for the capturing of provenance data. These solutions cover a broad variety of approaches, ranging from formal approaches defining how to capture provenance for relational databases, over high-level data models for linked data on the web, to all-in-one solutions supporting the management of scientific workflows. However, all these approaches have in common that they are tailored to their specific use case. Consequently, provenance is considered an integral part of these approaches that can hardly be adjusted to new user requirements or be integrated into existing systems. We envision that provenance, which highly needs to be adjusted to the needs of specific use cases, should be a cross-cutting concern that can seamlessly be integrated without interfering with the original system.

Leader: Prof. Dr. Gunter Saake
Members: Martin Schäler
Funded by: Institutional budget
  • MultiPLe - Multi Software Product Lines

MultiPLe is a project that aims at developing methods and tools to support development of Multi Software Product Lines (MPLs), which are a special kind of software product lines (SPLs). An SPL is a family of related programs that are often generated from a common code base with the goal of maximizing reuse between these programs. An MPL is a set of interacting and interdependent SPLs.

Website: Project-Website
Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: Deutsche Forschungsgemeinschaft (DFG)
Funded: 01.03.2012 - 28.02.2014
Members: Reimar Schröter
Keywords: Software product lines, multi product lines, program interfaces
  • Analysis Strategies for Software Product Lines

Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a set of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, testing, and formal verification, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis.

The emerging field of product-line analysis techniques is both broad and diverse such that it is difficult for researchers and practitioners to understand their similarities and differences (e.g., with regard to variability awareness or scalability), which hinders systematic research and application. We classify the corpus of existing and ongoing work in this field, we compare techniques based on our classification, and we infer a research agenda. A short-term benefit of our endeavor is that our classification can guide research in product-line analysis and, to this end, make it more systematic and efficient. A long-term goal is to empower developers to choose the right analysis technique for their needs out of a pool of techniques with different strengths and weaknesses.

Website: Project-Website
Manager: Thomas Thüm
Type: Institutional budget
Members: Thomas Thüm; Sven Apel; Christian Kästner; Ina Schaefer; Gunter Saake
Keywords: Product-line analysis, software product lines, program families, deductive verification, theorem proving, model checking, type checking
  • ViERforES-II (Dependable systems, Interoperability)

Software-intensive systems are becoming more and more important in an increasing number of traditional engineering domains. Digital Engineering is a newly emerging trend that meets the challenge of bringing together traditional engineering and modern approaches in software and systems engineering. Engineers in the traditional domains are confronted both with the growing use of software systems and with the development of software-intensive systems. Therefore, software and systems engineering play a growing role in many engineering domains. While functional properties of software systems are usually included in the development process, non-functional properties such as safety and security, and their early inclusion in the development process, are not considered sufficiently.

Members: Dr. Veit Köppen
  Janet Siegmund (geb. Feigenspan)
  Norbert Siegmund

ViERforES-II - Dependable systems

The project deals with security aspects of embedded systems regarding threats that can be caused, among others, by malware. Another important aspect is to find security leaks already at the source-code level, for which cognitive processes related to program comprehension are important. One goal is to evaluate factors that allow us to understand the abilities of developers, but also the risk potential of projects.

Website: Project-Website
Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: BMBF
Funded: 01.01.2010 - 30.09.2013
Members: Janet Siegmund (geb. Feigenspan)
Keywords: Empirical software engineering

ViERforES-II - Interoperability

Ensuring the interoperability of cooperating embedded systems is one of the key challenges in building complex and highly interactive ubiquitous systems. To this end, we have to consider different levels of interoperability: syntactic, semantic, and non-functional interoperability. In ViERforES-I, we developed solutions for the first two levels using software product lines and a service-oriented architecture. In ViERforES-II, we focus on techniques to determine non-functional properties of customizable software deployed on embedded systems. We develop means to model, measure, and quantify non-functional properties, such that we can compute an optimal configuration of all cooperating software systems. This way, we ensure that embedded systems are interoperable regarding performance, energy consumption, and other quality attributes.

In the second line of work, we combine distributed cooperating simulations using OpenGL. The goal is to support engineers during product development by providing an integrated view of a product in virtual reality, created by merging the graphics streams of several simulations. Moreover, with 3D cameras, we aim at placing the engineer inside a simulation. Through interaction with the 3D product, this allows early training and maintenance tasks to be simulated.

Website: Project-Website
Leader: Prof. Dr. Gunter Saake
Type: Third-party funded project
Funded by: BMBF
Funded: 01.01.2010 - 30.09.2013
Members: Norbert Siegmund
  Maik Mory
Keywords: non-functional properties, optimization, cooperating simulations, openGL, interoperability

Past Conference

Database Systems for Business, Technology, and Web (BTW)

The 15th BTW conference on "Database Systems for Business, Technology, and Web" (BTW 2013) of the Gesellschaft für Informatik (GI) took place from March 11th to March 15th, 2013 at the Otto-von-Guericke-University of Magdeburg, Germany.

Website: Conference-Website

Past Workshops

