
The First International Conference on Sensor Device Technologies and Applications

SENSORDEVICES 2010

July 18 - 25, 2010 - Venice/Mestre, Italy


Tutorials

T1:  New Technological Platform for Digital and Smart Sensor and Systems Integration
Prof. Dr. Sergey Yurish, Universitat Oberta de Catalunya - Barcelona / IFSA, Spain

T2:  Dependability in Mobile Computing Systems
Dr. Sarmistha Neogy, Jadavpur University - Kolkata, India

T3: Content-aware Networking: Future Internet Perspective
Prof. Dr. Eugen Borcoci, University Politehnica Bucharest, Romania

T4: Evaluating dependability metrics of critical systems: Monte Carlo techniques for rare event analysis
Dr. Gerardo Rubino, INRIA, France

 

DETAILS

T1:  New Technological Platform for Digital and Smart Sensor and Systems Integration
Prof. Dr. Sergey Yurish, Universitat Oberta de Catalunya - Barcelona / IFSA, Spain

Abstract: This tutorial describes modern developments and trends in the field of smart sensor systems and digital sensor design. Its background is programmable parameter-to-frequency (time) converters as a smart sensor’s core, together with structural-algorithmic methods for data extraction, in order to move from traditional analog-to-digital conversion to alternative frequency (period, duty-cycle, time interval)-to-digital conversion. Working in the frequency-time signal domain simplifies design and obviates some technical and technological problems, due to the properties of frequency as an informative parameter of sensors and transducers.

After a general overview of conversion methods and of modern smart, digital and quasi-digital sensors (with frequency, period, duty-cycle, pulse-width modulated (PWM), phase-shift, pulse-number, etc., output), the details of smart sensor systems are discussed, including: sensors, ADC (frequency-to-digital conversion based on advanced methods for frequency-time parameter measurement with adaptive possibilities), communication and interfacing. A systematic approach towards the practical design of low-cost, high-performance smart sensor systems with self-adaptation and self-identification capabilities is presented. The proposed design approach and technological platform for integration are compatible with MEMS, system-on-chip (SoC) and system-in-package (SiP) implementation. The platform is based on novel integrated circuits such as the series of Universal Frequency-to-Digital Converters and the Universal Sensors and Transducers Interface, and can overcome current hurdles to truly widespread deployment of smart sensors and systems. Different examples of sensor systems will be given and discussed in detail.
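
As a rough illustration of the frequency-to-digital idea outlined above, the following minimal Python sketch (not part of the tutorial material) simulates a reciprocal counting scheme, in which an unknown input frequency is digitized by counting cycles of a fast reference clock over an integer number of input periods; the reference frequency and the number of gated periods are illustrative assumptions.

```python
# Minimal sketch (illustrative only): reciprocal frequency-to-digital
# conversion. An unknown input frequency is measured by counting ticks of
# a fast reference clock over an integer number of input-signal periods.
# F_REF and n_periods are assumed values, not figures from the tutorial.

F_REF = 16_000_000  # reference clock frequency in Hz (assumption)

def measure_frequency(f_in: float, n_periods: int = 100) -> float:
    """Estimate f_in from the quantized reference-clock count."""
    gate_time = n_periods / f_in        # duration of n_periods input cycles
    ticks = int(gate_time * F_REF)      # quantized count of reference ticks
    return n_periods * F_REF / ticks    # reconstructed (digital) frequency

if __name__ == "__main__":
    for f in (50.0, 1_000.0, 100_000.0):
        print(f"input {f:>9.1f} Hz -> measured {measure_frequency(f):>12.3f} Hz")
```

Because the quantization error of such a scheme is roughly one reference tick out of the whole count, its relative error stays nearly constant over a wide frequency range, which is one reason a frequency-time output is attractive as a sensor signal format.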

This tutorial is suitable for engineers and researchers who design and investigate various digital and intelligent sensors, data acquisition, and measurement systems. It should also be useful for sensor manufacturers and for graduate and postgraduate students.

T2:  Dependability in Mobile Computing Systems
Dr. Sarmistha Neogy, Jadavpur University- Kolkata, India

Mobile Computing Systems (MCS) are gaining importance for a variety of reasons, one of the main ones being ease of use. However, this very ease of use is also the source of one of the problems in mobile systems: any mobile device may get attached to a network, thereby becoming a potential source of security threats to the system. Frequent disconnections and related problems are further potential sources of failure. These problems require special attention, yet research works generally propose solutions for them in isolation.

This tutorial will discuss in detail the attributes of distributed computing systems and methods to achieve these attributes to the fullest in distributed mobile computing systems. The characteristics of, and challenges in, designing mobile computing systems will be discussed threadbare.

The attributes of availability, reliability, safety and security together encompass the dependability of a system. Availability and reliability are closely interrelated. Though there are several techniques for achieving reliable software, fault tolerance has been found to be the most effective method for obtaining high reliability in MCS. Checkpointing and recovery are the fault tolerance methodologies generally adopted for MCS, because failures there are most often transient. Reliability models for distributed systems cannot be adopted directly for MCS, since the number of nodes varies. Research shows that the non-homogeneous Poisson process (NHPP) closely resembles the failure behaviour of MCS, and reliability modelling of MCS using NHPP is explored. We have also considered different mobility models and their effects on reliability.
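
As a small, self-contained illustration of NHPP-based reliability prediction (not the speaker's own model), the Python sketch below uses the classical Goel-Okumoto mean-value function m(t) = a(1 - e^(-bt)); the parameters a and b are assumed values chosen only for demonstration.

```python
# Minimal sketch (illustrative): reliability prediction with a
# non-homogeneous Poisson process (NHPP) using the Goel-Okumoto
# mean-value function m(t) = a * (1 - exp(-b * t)).
# A (expected total failures) and B (detection rate) are assumptions.
import math

A = 120.0   # expected total number of failures (assumed)
B = 0.05    # failure detection rate per unit time (assumed)

def mean_failures(t: float) -> float:
    """Expected cumulative number of failures observed by time t."""
    return A * (1.0 - math.exp(-B * t))

def reliability(t: float, x: float) -> float:
    """Probability of observing no failure in (t, t + x]."""
    return math.exp(-(mean_failures(t + x) - mean_failures(t)))

if __name__ == "__main__":
    for t in (0.0, 20.0, 100.0):
        print(f"t = {t:6.1f}   R(x = 10 | t) = {reliability(t, 10.0):.4f}")
```

Because the failure intensity decays over time in this model, the predicted reliability over the next interval improves as the system matures, which is the kind of behaviour an NHPP-based MCS reliability model can capture.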

The components of security include confidentiality, integrity and availability. Security steps required to validate nodes joining the system may extend beyond ordinary password verification. A secure system may be required to provide confidentiality, ensuring non-disclosure of information beyond a certain limit. Techniques for reliable and secure transmission using digital signatures and/or encryption will be discussed in detail.
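
As a minimal sketch of signed transmission (illustrative only; the tutorial does not prescribe a particular algorithm or library), the example below signs a message with an Ed25519 key and verifies it on receipt, using the third-party Python `cryptography` package; the payload is hypothetical.

```python
# Minimal sketch (illustrative): sign a message before transmission and
# verify it on receipt. Uses Ed25519 from the third-party `cryptography`
# package; the choice of algorithm/library and the payload are assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held by the sender
public_key = private_key.public_key()        # distributed to receivers

message = b"checkpoint record from mobile host MH-7"   # hypothetical payload
signature = private_key.sign(message)                   # sender side

try:                                                     # receiver side
    public_key.verify(signature, message)
    print("signature valid: origin and integrity confirmed")
except InvalidSignature:
    print("signature invalid: message rejected")
```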

Existing techniques for achieving fault tolerance (which covers reliability and availability) and techniques adopted for data encryption, to provide confidential and secure transmission over otherwise unreliable channels, are compared and analyzed.

Research contributions of the tutorial speaker and her students in the field of fault tolerance, reliability and security of MCS will also be critically analysed. Other measures for countering the numerous security threats will be discussed.

This tutorial aims to describe the meaning of dependability and the methods for achieving it, the characteristics of mobile computing systems, the problems pertinent to such systems, and the research work proposed to provide solutions that ensure overall system reliability and security.

T3: Content-aware Networking: Future Internet Perspective
Prof. Dr. Eugen Borcoci, University Politehnica Bucharest, Romania

This tutorial will present recent trends in developing Future Internet (FI) technologies related to content awareness in networks. While the debate between an evolutionary and a “clean slate” approach to FI development is still open, it is largely recognized that the FI will be much more user-, service- and content-centric than the current Internet. Many experts and groups agree that “content” will be seen and leveraged as the main entity in the FI, instead of the traditionally used “location”. This will have an impact on the development of virtualization overlays on top of the network infrastructure level, and may even lead to modifications of the main IP routing and forwarding paradigms themselves.

Given the recent significant increase in the need for multimedia communications, new architectures and technologies for converged and scalable networking are necessary to support the delivery of multimedia content and services, optimised dynamically and on a policy basis. The content-aware networking (CAN) and network-aware applications (NAA) paradigms are of high interest in this context. Content-centric networking (CCN) is also a revolutionary new view, complementary and related to the CAN approach. These technologies are expected to enable multiple user roles, such as content producer, user or manager. The new approach should take into account the content and its adaptation needs, as well as the user’s context, requirements and social relational network, for a variety of content and services that may include home management, applications, locations and mobility scenarios. On the other hand, these concepts break the traditional OSI/TCP-IP strict separation between the transport and application/service layers, thus creating many open research issues.

An example of such a complex CAN-oriented architecture is given; it is currently under research and development in the framework of the FP7 IP European project ALICANTE, “MediA Ecosystem Deployment through Ubiquitous Content-Aware Network Environments” (no. 248652, 2010-2013).

T4: Evaluating dependability metrics of critical systems: Monte Carlo techniques for rare event analysis
Dr. Gerardo Rubino, INRIA, France

In highly dependable systems, and in particular in critical systems, the system’s failure is (or should be) a rare event. For instance, a representative specification for the probability of failure of a critical system is that it be less than, say, one in a billion for an electric (e.g. nuclear) plant, or for the control subsystem of an aircraft. With a crude Monte Carlo method (that is, with a “direct” simulation of the system under study) we would need, on average, a billion independent replications to get just one occurrence of the failure, and many more if we want a confidence interval with valid coverage. This is in general impossible, except for pretty simple models. Fortunately, more sophisticated techniques exist that are capable of “accelerating” the occurrence of the rare event, or of somehow putting an “emphasis” on the combination of events that can lead to it. Among them, the two most important families are Importance Sampling and Importance Splitting. The first works with a modified model in which the rare events become common. The second selects the most “promising” trajectories of the model, allowing the rare “area” to be reached more rapidly. This tutorial presents the basic concepts associated with rare event analysis in dependability, and describes through different examples the main features of both families of methods.
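
As a concrete toy illustration of the importance sampling idea (not one of the tutorial's examples), the Python sketch below estimates a tail probability of about 1.4e-11 for an exponential random variable by sampling from a “tilted” distribution under which the event is common and re-weighting by the likelihood ratio; all numeric parameters are assumptions chosen for the demonstration.

```python
# Minimal sketch (illustrative): importance sampling for a rare event.
# We estimate p = P(X > C) for X ~ Exp(LAM), whose exact value exp(-LAM*C)
# is ~1.4e-11, by drawing from Exp(LAM_IS) with a much smaller rate so the
# event becomes common, then re-weighting each hit by the likelihood ratio
# f(x)/g(x). All parameters are assumptions for the demonstration.
import math
import random

LAM = 1.0          # rate of the original model
C = 25.0           # threshold: P(X > C) = exp(-25), a rare event
LAM_IS = 1.0 / C   # rate of the importance-sampling distribution
N = 100_000        # number of replications

def importance_sampling_estimate(seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        x = rng.expovariate(LAM_IS)                 # sample from g
        if x > C:                                   # rare event occurred
            # likelihood ratio f(x)/g(x): original density over tilted density
            total += (LAM * math.exp(-LAM * x)) / (LAM_IS * math.exp(-LAM_IS * x))
    return total / N

if __name__ == "__main__":
    print(f"IS estimate : {importance_sampling_estimate():.3e}")
    print(f"exact value : {math.exp(-LAM * C):.3e}")
    # A crude Monte Carlo run of the same size would almost surely observe
    # zero occurrences of the event and report an estimate of 0.
```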

 
 
