The Tenth International Conference on Advances in Databases, Knowledge, and Data Applications

DBKDA 2018

May 20, 2018 to May 24, 2018 - Nice, France

Deadlines

Submission: Feb 07, 2018
Notification: Mar 11, 2018
Registration: Mar 24, 2018
Camera ready: Apr 03, 2018

Deadlines differ for special tracks. Please consult the conference home page for special tracks Call for Papers (if any).

Publication

Published by IARIA Press (operated by Xpert Publishing Services)

Archived in the Open Access IARIA ThinkMind Digital Library

Prints available at Curran Associates, Inc.

Authors of selected papers will be invited to submit extended versions to an IARIA Journal



DBKDA 2018: Tutorials

T1. The Architectures of Triple-Stores
Prof. Dr. Iztok Savnik, University of Primorska, Slovenia

Triple-store (TS) systems store and manage triples composed of a subject, a predicate, and an object. The triple data model is a fundamental data model capable of expressing graphs, mathematical relations, instances of logic predicates, as well as arbitrary data structures. It is of particular importance for representing data in modern knowledge-based systems, ranging from Internet search engines to common-sense reasoning systems. The Resource Description Framework (RDF), together with the RDF Schema vocabulary, is most often used as the basic data model of triple-stores.

On the one hand, triple-stores are similar to relational DBMSs. In its simplest form, a TS is a relation defined on three attributes and dedicated to the representation of general graphs. As in relational systems, the schema of a TS is stored in the TS itself; it can be used for the distribution of data, the optimization of queries, and the like. Furthermore, the simplicity and uniformity of the triple data model allows solutions to some problems that are hard in RDBMSs. For example, a TS can be distributed automatically not only with hash-based partitioning but also with semantics-aware partitioning algorithms.
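The idea of a triple-store as a single three-attribute relation that also holds its own schema can be sketched in a few lines. This is a minimal illustration only; the sample data and names are invented, not taken from any particular system:

```python
# A triple-store in its simplest form: one relation over three attributes.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# Instance data and schema live in the same relation:
add("alice", "rdf:type", "Person")
add("Person", "rdfs:subClassOf", "Agent")

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

print(match(p="rdf:type"))   # -> [('alice', 'rdf:type', 'Person')]
```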

On the other hand, triple-stores are also close to modern NoSQL DBMSs. These are scalable, highly available, and fault tolerant, provide automatic distribution and replication of data, and can store up to several petabytes of data. The price for these properties is a simple key-value data model and a very limited query model. Triple-stores are more complex than key-value stores, since they must allow fast access based on combinations of the three keys. Nevertheless, most solutions introduced by NoSQL systems are applicable to TS systems.

This tutorial presents the architectural solutions used in recent triple-stores. The storage layer of a TS is overviewed first. The structures used for storing triples are relational tables, special indexes, property tables, key-value indexes, and (main-memory) adjacency-list representations. Triple-stores are often distributed using variants of hash-based partitioning, locality-based sharding, or schema-based partitioning. The query execution systems of TSs are presented second. Dynamic and static query execution approaches will be overviewed, such as join reordering, join-ahead pruning, cost-based and dynamic-programming optimization algorithms, stream-based distributed processing, and distributed caching.
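The key-value indexing of triples mentioned above can be illustrated with a minimal sketch: three permutation indexes (SPO, POS, OSP) that together answer any triple pattern with a single lookup. This is a simplification under assumed in-memory dictionaries; real systems use persistent, sorted key-value structures:

```python
from collections import defaultdict

# Three permutation indexes over the same triples. Whichever components of
# a pattern are bound determine which index answers it in one lookup:
#   (s, p, ?) -> SPO,  (?, p, o) -> POS,  (o, ?, ?) or (s, ?, o) -> OSP.
spo = defaultdict(lambda: defaultdict(set))
pos = defaultdict(lambda: defaultdict(set))
osp = defaultdict(lambda: defaultdict(set))

def insert(s, p, o):
    spo[s][p].add(o)
    pos[p][o].add(s)
    osp[o][s].add(p)

insert("paper1", "cites", "paper2")
insert("paper3", "cites", "paper2")

# Pattern (?s, "cites", "paper2"): p and o are bound, so use the POS index.
subjects = pos["cites"]["paper2"]   # {'paper1', 'paper3'}
```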


 
T2. How to Build a Search Engine with Common Unix Tools
Prof. Dr. Andreas Schmidt, University of Applied Sciences Karlsruhe, Germany

Description: The purpose of this tutorial is twofold: on the one hand, it explains the basic functionality of a search engine; on the other hand, it shows how the tools available in every Unix shell can be used to build a prototypical search engine. The search engine is realized as a series of so-called filters, loosely coupled by pipes. Besides the actual indexing and query processing, the crawling aspect is also covered. The tutorial contains several practical exercises to be performed by the participants on their own computers.
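The indexing stage of such a pipeline (tokenize, normalize, build an inverted index) can be sketched as follows. It is shown here in Python for compactness rather than as shell filters, and the document names and contents are invented:

```python
import re
from collections import defaultdict

# Toy document collection (made-up contents).
docs = {
    "doc1": "Databases store data.",
    "doc2": "Search engines index data.",
}

# Build an inverted index: each term maps to the set of documents
# containing it (tokenize, lowercase, insert).
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in re.findall(r"[a-z]+", text.lower()):
        index[token].add(doc_id)

# Conjunctive query: documents containing both "data" and "index".
hits = index["data"] & index["index"]   # {'doc2'}
```

The same structure falls out of a shell pipeline of the kind the tutorial builds, where tokenizing, sorting, and de-duplicating filters are chained with pipes.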

Keywords: Inverted index, stemming, tf*idf, Unix shell, pipes and filters

Audience: Software developers who want to know more about the software tools provided by their shell.

Prerequisites: Linux and Mac users already have the required tools installed on their computers. Windows users have to install Cygwin (https://www.cygwin.com/).



T3. Indoor Positioning by the Fusion of Wireless Metrics and Sensors
Prof. Dr. Özgür Tamer, Dokuz Eylul University / Nucleo R&D Ltd., Turkey

Indoor positioning has become a hot topic in recent years due to market applications that depend on accurately locating a user or an object in a closed environment, such as a shopping mall or a production plant. Unlike the outdoor environment, indoor applications lack a well-established positioning system like GPS. Indoor positioning systems therefore use sensors and communication infrastructures to locate objects or users in indoor environments. Researchers have proposed different techniques, wireless technologies, sensor applications, and mechanisms to provide indoor positioning services or to improve the services provided to users.

Wireless infrastructures like Wi-Fi, Bluetooth, or ZigBee provide several performance metrics for the proper operation of the infrastructure. Some of these metrics provide time-based information, such as time of arrival (TOA) and time difference of arrival (TDOA), while others provide amplitude-based information, such as the received signal strength indicator (RSSI). Techniques based on these metrics estimate the distance of the object to multiple transceivers and use trilateration to locate the user. Direction-of-arrival (DoA) estimation of the received wireless signals is also employed to improve the performance of these algorithms.
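As an illustration of distance-based positioning, the following sketch recovers a position from three anchor distances by linearizing the circle equations. The anchor coordinates and the true position are made-up values, and the distances are noise-free; real RSSI or TOA distance estimates would be noisy and usually solved by least squares over more anchors:

```python
import math

# Three fixed transceivers (anchors) at known positions, and the distances
# to an unknown position (here computed from a chosen ground truth).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
d = [math.dist(a, true_pos) for a in anchors]   # ideal distances

# Each anchor gives a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting
# the first equation from the other two yields a 2x2 linear system
# A @ (x, y) = b, solved here by Cramer's rule.
(x0, y0), (x1, y1), (x2, y2) = anchors
a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
b1 = d[0]**2 - d[1]**2 + x1**2 + y1**2 - x0**2 - y0**2
b2 = d[0]**2 - d[2]**2 + x2**2 + y2**2 - x0**2 - y0**2
det = a11 * a22 - a12 * a21
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print((x, y))   # recovers (4.0, 3.0) up to floating-point error
```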

The most widely used sensors in indoor localization are inertial sensors. Techniques based on inertial sensors estimate the displacement of an object over successive time intervals and obtain the final displacement by accumulating these estimates. Laser-transceiver-, LIDAR-, and encoder-based sensor applications have also been proposed in the literature.
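The accumulation of per-interval displacements can be sketched minimally as follows. The velocity estimates are invented; a real system would integrate accelerometer and gyroscope readings and correct for drift:

```python
# Dead reckoning: sum per-interval displacements to get the total
# displacement from the starting point.
dt = 0.1   # sampling interval in seconds

# Per-interval velocity estimates (vx, vy), e.g. obtained by integrating
# accelerometer readings (illustrative values).
velocities = [(1.0, 0.0), (1.0, 0.5), (0.5, 0.5)]

x = y = 0.0
for vx, vy in velocities:
    x += vx * dt
    y += vy * dt
print((x, y))   # accumulated displacement, approximately (0.25, 0.1)
```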

Each of the techniques presented above is superior to the others under different environmental conditions. Fusing their results according to environmental fingerprint information therefore yields better accuracy.
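One simple fusion rule consistent with this idea is an inverse-variance weighted average of two position estimates. The estimates and variances below are made up; in practice, the weights could be derived from environmental fingerprint information that tells which technique is more reliable where:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two 2D position estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return tuple((w_a * a + w_b * b) / (w_a + w_b)
                 for a, b in zip(est_a, est_b))

wireless_est = (4.2, 2.8)   # e.g. RSSI trilateration result, variance 4.0
inertial_est = (3.9, 3.1)   # e.g. dead-reckoning result, variance 1.0
fused = fuse(wireless_est, 4.0, inertial_est, 1.0)
print(fused)   # pulled toward the lower-variance inertial estimate
```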

In this presentation, methods covering these techniques will be presented, together with simulation and measurement results from the studies conducted during TUBITAK project 114E659.


Technical Co-Sponsors and Logistic Supporters