
The First International Conference on Neuroscience and Cognitive Brain Information

BRAININFO 2016
November 13 - 17, 2016 - Barcelona, Spain


Tutorials

T1. Mathematical Modeling and Control of Biologically Inspired Uncertain Motion Systems with Adaptive Features
PD Dr.-Ing. habil. Dipl.-Math. Carsten Behn, Ilmenau University of Technology, Germany

T2. Object Detection, Tracking and Recognition in Complex Environmental Conditions
Prof. Dr. Vijayan K. Asari, University of Dayton, USA

T3. Secure V2X Communications
Prof. Dipl.-Ing. Markus Ullmann, Bundesamt für Sicherheit in der Informationstechnik (BSI) - Bonn, Germany

T4. Advanced Computing and Data-Centricity: Saving Value and Taming Complexity
Prof. Dr. Claus-Peter Rückemann, Leibniz Universität Hannover / Westfälische Wilhelms-Universität Münster / North-German Supercomputing Alliance (HLRN), Germany

 

Details

 

T1. Mathematical Modeling and Control of Biologically Inspired Uncertain Motion Systems with Adaptive Features
PD Dr.-Ing. habil. Dipl.-Math. Carsten Behn, Ilmenau University of Technology, Germany

The tutorial is devoted to the analysis and modeling of biologically inspired systems. These biological
systems offer complex and adaptive functionality, which has to be transferred into mathematical
models. The developed models are treated analytically as far as possible and are then studied by means of numerical simulations in order to reproduce the adaptivity of the biological systems in these models.

At first, the development of new control strategies and sensor models is motivated by the analysis of the functional morphology of vibrissal sensor systems. The vibrissa receptors are in a permanent state of adaptation in order to filter the perception of tactile stimuli. Using a simple linear model of a sensory system, its parameters are assumed to be unknown due to the complexity of biological systems. Adaptive controllers are considered which compensate unknown permanent ground excitations. Classical adaptors suffer from a monotonic increase of the control gain parameter, thereby possibly paralyzing the sensor's capability to detect future extraordinary excitations. The existing adaptive controllers from the literature are improved with respect to performance, sensitivity and capabilities, and various modifications are made. The working principle of each controller is demonstrated in numerical simulations, which show that the controllers work successfully and effectively.
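
The gain-adaptation issue described above can be illustrated with a minimal Python sketch, assuming a single-degree-of-freedom oscillator under an unknown harmonic ground excitation and a lambda-tracking-type output feedback; a classical adaptor whose gain can only grow is compared with a modified adaptor whose gain is allowed to decay inside the tracking band. All parameter values and the specific adaptation laws are illustrative assumptions, not the controllers developed in the tutorial.

    import numpy as np

    # Illustrative single-degree-of-freedom sensor model under an unknown ground
    # excitation; the controller only measures the deflection y and adapts its gain.
    m, d, c = 1.0, 0.5, 10.0      # mass, damping, stiffness (unknown to the controller)
    lam = 0.05                    # tracking accuracy band (lambda)
    sigma = 0.2                   # gain-decay rate of the modified adaptor

    def excitation(t):            # unknown permanent ground excitation
        return 0.3 * np.sin(2.0 * np.pi * t)

    def simulate(decay, T=20.0, h=1e-3):
        y, v, k = 0.0, 0.0, 0.0   # deflection, velocity, adaptive gain
        for i in range(int(T / h)):
            t = i * h
            u = -k * y            # adaptive output feedback
            a = (u + excitation(t) - d * v - c * y) / m
            # classical adaptor: the gain only grows while |y| leaves the lambda band
            dk = max(abs(y) - lam, 0.0) * abs(y)
            if decay and abs(y) < lam:
                dk -= sigma * k   # modification: let the gain relax again
            y, v, k = y + h * v, v + h * a, max(k + h * dk, 0.0)
        return abs(y), k

    for decay in (False, True):
        y_end, k_end = simulate(decay)
        name = "modified adaptor " if decay else "classical adaptor"
        print(f"{name}: final |y| = {y_end:.4f}, final gain k = {k_end:.2f}")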

The second part presents the theoretical context needed to examine the mechanical and, in particular, the dynamical characteristics of the biological vibrissa. The theoretical aspects are interpreted with respect to the biological vibrissa as well as with respect to a technical implementation of it. Inspired by this animal sensory system, several types of mechanical models are developed based on findings in the literature. The first focus is on (multi) rigid body systems. The investigations show that adaptive control is promising in application to vibrissa systems: it allows the two main modes of operation of vibrissae, the passive and the active one, to be described. Closer to the biological paradigm, we then investigate a separation of an extra receptor from the vibrissa. This results in the observation that the investigated control torques, which serve as an excitation to the receptor, do not offer further information. Therefore, we switch back to the analysis of a single vibrissa and increase its degrees of freedom. We present three models taking into account the viscoelastic support in the mystacial pad. The muscles (extrinsic and intrinsic) enabling the animals to whisk actively are simulated by adaptive control algorithms. Then, the focus is on modeling the vibrissa as a continuous system: bending vibrations of beams. There, the main focus of the studies lies on examining how the compliance of the tactile hair, its viscoelastic support and its conical form influence the oscillation characteristics of the vibrissa.
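
As a small worked example for the continuous beam models mentioned above, the following Python sketch evaluates the first bending eigenfrequency of a clamped-free Euler-Bernoulli beam with a circular cross-section, a strongly simplified cylindrical stand-in for a vibrissa. Material and geometry values are assumptions chosen only for illustration, and the conical shape and viscoelastic support treated in the tutorial are not modeled.

    import math

    # First bending eigenfrequency of a clamped-free (cantilever) Euler-Bernoulli beam:
    #   f_1 = (beta_1 L)^2 / (2 pi L^2) * sqrt(E I / (rho A)),  with beta_1 L ~= 1.8751
    E   = 3.0e9     # Young's modulus of a keratin-like material [Pa] (assumed)
    rho = 1.1e3     # density [kg/m^3] (assumed)
    L   = 0.04      # beam length [m] (assumed)
    r   = 5.0e-5    # radius of the circular cross-section [m] (assumed)

    A = math.pi * r**2            # cross-sectional area
    I = math.pi * r**4 / 4.0      # second moment of area
    beta1L = 1.8751               # first eigenvalue of the clamped-free beam

    f1 = beta1L**2 / (2.0 * math.pi * L**2) * math.sqrt(E * I / (rho * A))
    print(f"first bending eigenfrequency: {f1:.1f} Hz")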

 

T2. Object Detection, Tracking and Recognition in Complex Environmental Conditions
Prof. Dr. Vijayan K. Asari, University of Dayton, USA

Intelligent visual surveillance is becoming more popular in applications such as human identification, activity recognition, behavior analysis, anomaly detection, alarming, etc. This rapidly growing field involves the acquisition and processing of video captured from long-range and wide-viewing-angle sensors. Aircraft, often unmanned, flying at very high altitudes capture high-resolution data of the ground below. Today's camera technology can capture frames with Giga-pixel resolution at a rate of 60 frames per second. As camera capabilities continue to improve, higher-resolution data will be captured at faster rates. Because the data is captured from high altitudes, a single field of view may cover hundreds of square miles.
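
To put these figures in perspective, a back-of-the-envelope calculation of the raw, uncompressed data rate is sketched below; the assumed three bytes per pixel (8-bit RGB) is an illustrative choice, not a property of any particular sensor.

    pixels_per_frame = 1e9        # "Giga-pixel resolution"
    frames_per_second = 60
    bytes_per_pixel = 3           # assumed 8-bit RGB, uncompressed

    rate = pixels_per_frame * frames_per_second * bytes_per_pixel
    print(f"raw data rate: {rate / 1e9:.0f} GB/s, about {rate * 3600 / 1e12:.0f} TB per hour")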

Detection, tracking, and recognition of objects in a wide area surveillance environment have been an active research area in the past few decades. Object motion analysis and interpretation are integral components for activity monitoring and situational awareness. Real-time performance of these data analysis tasks over a very wide field of view is an important need for monitoring in security and law enforcement applications. Although huge strides have been made in the field of computer vision related to technology development for automatic monitoring systems, there is still a need for robust algorithms that can perform detection of objects and individuals in a surveillance environment. This is mainly because of constraints such as partial occlusions of the body, heavily crowded scenes where objects are very close to each other, etc. We present a robust automated system which can detect and identify people by automated face recognition in a surveillance environment and track their actions and activities by a spatio-temporal feature tracking mechanism.

When processing WAMI (Wide Area Motion Imagery) data, several preprocessing techniques can be applied to significantly improve the visibility of the imagery. The objective of this research is to improve visibility in low/non-uniform lighting conditions for wide area surveillance applications and to enhance features in order to improve the performance of automatic object detection/tracking/recognition algorithms on wide area surveillance data. A non-linear enhancement technique is used to increase the visibility in dark/shadow regions by simultaneously illuminating dark, shadowy regions of the scene and compressing (dimming) over-exposed regions. Visibility improvement, contrast enhancement and feature enhancement of images/video captured in bad weather environments are very useful for many outdoor computer vision applications such as video surveillance, long-range object detection, recognition and tracking, and self-navigating ground- and air-based vision systems.
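
The following Python sketch illustrates the general idea of such a non-linear luminance mapping: dark pixels are brightened with an exponent below one, while bright pixels are compressed with an exponent above one. It is a generic illustration under assumed parameters, not the specific enhancement algorithm presented in the tutorial.

    import numpy as np

    def nonlinear_enhance(img, low_gamma=0.5, high_gamma=1.5):
        """img: grayscale or RGB image as a float array with values in [0, 1]."""
        img = np.clip(img, 0.0, 1.0)
        # per-pixel brightness: luminance for RGB, the image itself for grayscale
        lum = img.mean(axis=-1, keepdims=True) if img.ndim == 3 else img
        # blend the exponent smoothly between brightening (dark) and dimming (bright)
        gamma = low_gamma + (high_gamma - low_gamma) * lum
        return np.power(img, gamma)

    # usage with random data standing in for a captured frame
    enhanced = nonlinear_enhance(np.random.rand(480, 640))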

In bad weather conditions such as haze and fog, the captured scenes usually suffer from poor visibility, low contrast and distorted color. Conventional image and contrast enhancement techniques work well for some scenes, but they are not suitable for images with regions at different depths, because the thickness of haze and fog depends on the depth of the scene. Accurately estimating the thickness of haze or fog from a single image in these bad weather environments is still a challenging task, but the approximate relative thickness of haze or fog can be obtained from the low-frequency information of the scene.
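
A hedged sketch of this idea is given below, using the standard atmospheric scattering model I = J*t + A*(1 - t) and taking the low-frequency (blurred) brightness of the scene as a proxy for the relative haze thickness; the blur width, strength factor and atmospheric light estimate are assumptions for illustration only and do not reproduce the tutorial's method.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dehaze(img, sigma=15, strength=0.8):
        """img: grayscale image as a float array with values in [0, 1]."""
        img = np.clip(img, 0.0, 1.0)
        airlight = img.max()                    # crude atmospheric light estimate
        low_freq = gaussian_filter(img, sigma)  # low-frequency proxy for haze thickness
        t = np.clip(1.0 - strength * low_freq / max(airlight, 1e-6), 0.1, 1.0)
        # invert I = J*t + A*(1 - t) to recover the scene radiance J
        return np.clip((img - airlight * (1.0 - t)) / t, 0.0, 1.0)

    restored = dehaze(np.random.rand(480, 640))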

Rain is a complex dynamic noise that hampers feature detection and extraction from videos. The presence of rain streaks in a particular frame of a video is completely random and cannot be predicted accurately. In this project, a method based on phase congruency is used to remove rain from videos. This method makes use of the spatial, temporal and chromatic properties of the rain streaks in order to detect and remove them. The basic idea is that no pixel is covered by rain in every frame. Also, the presence of rain causes sharp changes in intensity at a particular pixel. The directional property of rain streaks also helps in the proper detection of rain-affected pixels.
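
The temporal part of this idea can be sketched as follows, assuming a short grayscale clip: since no pixel is rained on in every frame, the temporal median serves as a rain-free estimate, and pixels showing a sharp positive spike above it are treated as rain and replaced. The threshold is an assumed value, and the phase congruency, chromatic and directional cues of the actual method are deliberately omitted.

    import numpy as np

    def remove_rain(frames, spike_threshold=0.08):
        """frames: array of shape (T, H, W) with grayscale values in [0, 1]."""
        frames = np.asarray(frames, dtype=float)
        background = np.median(frames, axis=0)          # rain-free estimate per pixel
        rain_mask = (frames - background) > spike_threshold
        cleaned = frames.copy()
        cleaned[rain_mask] = np.broadcast_to(background, frames.shape)[rain_mask]
        return cleaned

    clean = remove_rain(np.random.rand(30, 240, 320))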

Autonomous detection of machinery threats on oil and gas pipeline rights-of-way (ROWs) in wide area imagery is an important research task for protecting pipeline infrastructure. A great amount of effort is required for human analysts to identify threats manually in the thousands of images captured by small aircraft or Unmanned Aerial Vehicles (UAVs). Therefore, there is a need for a full-fledged intrusion detection system to automate this process. In order to provide robust monitoring of threats or intrusions to pipeline ROWs, the technology should be capable of addressing the challenges posed by image resolution, sensor noise, lighting conditions, partial occlusions, and the various heights and viewing angles between the objects and sensors. We present an automatic object detection system that can detect potential threat objects on pipeline ROWs to aid human analysts in threat evaluation and subsequent actions. Our real-time automated airborne monitoring system can detect, recognize, and locate machinery threats such as construction equipment entering the pipeline ROWs.

This tutorial discusses the following specific topics:

  • Nonlinear image enhancement
  • Haze/fog removal
  • Rain removal
  • Object detection
  • Object recognition
  • Object tracking
  • Applications

 

T3. Secure V2X Communications
Prof. Dipl.-Ing. Markus Ullmann, Bundesamt für Sicherheit in der Informationstechnik (BSI) - Bonn, Germany

Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication (consolidated as V2X) has been discussed intensively in recent years. To specify use cases and prepare the necessary standardization for V2X communication in Europe, the Car2Car Communication Consortium was initiated by European vehicle manufacturers, equipment suppliers, research organisations and other partners. The results of the technical discussions are a collection of ETSI (European Telecommunications Standards Institute) standards for V2X in Europe. These standards specify the architecture for Intelligent Transport Systems (ITS), the communication concept, message types, ITS station types (e.g., ITS vehicle station, ITS roadside station, …) as well as a new format for the necessary cryptographic certificates.
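
As a conceptual illustration of certificate-based message protection, the sketch below signs and verifies a placeholder broadcast message with ECDSA over NIST P-256, one of the curves used in the ETSI ITS security standards, using the Python cryptography package; the real ASN.1-encoded ITS message formats and the ETSI TS 103 097 certificate structure are not modeled here.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Pseudonym key pair of a sending ITS station (placeholder for a certified key).
    station_key = ec.generate_private_key(ec.SECP256R1())
    message = b"CAM: position=(48.13, 11.58), speed=13.9 m/s, heading=90"

    signature = station_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # A receiving ITS station verifies the signature with the sender's public key,
    # which in practice is taken from the sender's certificate.
    try:
        station_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature valid - message accepted")
    except InvalidSignature:
        print("signature invalid - message discarded")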

The wireless communication technology for cooperative V2X communication is based on the IEEE 802.11p standard. A frequency spectrum in the 5.9 GHz range has been allocated on a harmonized basis in Europe, in line with similar allocations in the US.

In the meantime, pilot projects are being carried out in Europe, e.g., the C-ITS corridor project Amsterdam-Frankfurt-Vienna, to support the roll-out of the V2X technology.

Outline

  • Technical introduction to the V2V technology according to ETSI
    • History of the V2V communication
    • Role of the V2V communication in the context of automated driving
    • Broadcast technology
    • Message types
    • Specified security and privacy mechanisms
    • Differences between V2V in Europe and the US
  • Secure V2I communication from an infrastructure perspective (ITS roadside station)
    • Attacks on ITS roadside stations
    • Countermeasures
    • Security concept and key management
    • PKI concept for ITS stations
  • Shortcomings of the V2V communication
    • Security
    • Privacy
  • Outlook 

 

T4. Advanced Computing and Data-Centricity: Saving Value and Taming Complexity
Prof. Dr. Claus-Peter Rückemann, Leibniz Universität Hannover / Westfälische Wilhelms-Universität Münster / North-German Supercomputing Alliance (HLRN), Germany

Data is the core reason for computing. Investments in data are multi-layered. Extravagant data may require expensive, specialized solutions. Caring for data and developing data may require sustainable and long-term solutions. The value of data is hidden in multiple layers; e.g., some data have to be created over long periods of time and cannot be created a second time. Moreover, data has more than economic value. Result-oriented solutions can benefit from data complexity. Therefore, complexity can also carry value.

Infrastructures and advanced computing architectures are essential means for handling data, but how they are implemented matters in many ways, e.g., for advanced applications and workflows.

Given that the value of data should be saved and its complexity handled, the focus questions are:

* What does that mean for 'holistic' scenarios?
* Where to target value and where complexity?
* What is the meaning of centricity and are there reasons to think about centricity details?
* What is the discipline/users' view? Are there choices and how?
* What are examples of knowledge creation, discovery, and workflows?
* What are benefits and tradeoffs and how can issues like long-term relevant data, complexity, and portability be handled?
* Which architectures and scenarios can be considered?
* Are there different types of Big Data and can Big Data be data-centric?
* Which cases require high end solutions, and which High Performance Computing architectures are practical?
* What are the consequences of centricity?

It is beneficial to take a closer look at the details of the respective relations and conditions. Centricity, as in "data-centric", "knowledge-centric", and "computing-centric", is a significant aspect for understanding, choosing, and creating advanced solutions.

This tutorial focuses on aspects of data and computing, especially, different types of data and organization, different types of computing and storage architectures, and different methods and goals.

The tutorial presents and discusses real examples of advanced implementations, introduces architectures and operation, and discusses consequences and solutions.

It is intended to have a concluding dialogue with the participants on practical scenarios and experiences.

This tutorial is addressed to all interested users and creators of data from disciplines such as the geosciences, environmental sciences, archaeology, and the social and life sciences, as well as to users of advanced applications and providers of resources and services for High End Computing. No special informatics prerequisites or High End Computing experience is necessary to take part in this tutorial.

 
 
