
The Fifth International Conference on Communications, Computation, Networks and Technologies

INNOV 2016
August 21 - 25, 2016 - Rome, Italy


Tutorials

T1. Fog Computing, Mobile Edge Computing, Cloudlets - Which One?
Prof. Dr. Eugen Borcoci, University Politehnica - Bucharest, Romania

T2. How to Tell Apart the Good from the Bad: Setting Thresholds in Software Engineering
Prof. Dr. Luigi Lavazza, Università degli Studi dell’Insubria, Italy

 

Detailed description

 

T1. Fog Computing, Mobile Edge Computing, Cloudlets - Which One?
Prof. Dr. Eugen Borcoci, University Politehnica - Bucharest, Romania

Prerequisites: general knowledge of IP networking architectures and protocols; introductory knowledge of SDN, NFV, cloud computing, and 4G/5G technologies.

Cloud technologies today offer a rich range of services (SaaS, PaaS, IaaS, XaaS) to large communities of users in a flexible and inexpensive way. However, the current approach, i.e., clouds whose processing and storage power is concentrated in large data centers, has limitations in meeting the computing and intelligent networking demands of many new distributed applications, such as smart cities, IoT, V2X, and M2M. Therefore, local/edge computing solutions have recently been investigated, aiming to meet special requirements related to latency, mobility, real-time integration of local multimedia contextual information, processing load reduction, power saving in end points, improved network reliability and resiliency, dynamic resource allocation at the edge of the network, etc. Several complementary and partially similar approaches have been proposed.

Fog/edge distributed computing architectures extend the cloud concept, providing end users with data, compute, storage, and application services based on resources hosted at the network edge, at access points, or on end devices such as set-top boxes and even terminals. Fog computing offers significant advantages, important for 5G networks, such as reduced data movement across the network, which in turn reduces congestion and latency. It also eliminates the bottlenecks of fully centralized computing systems.

The European Telecommunications Standards Institute (ETSI) launched an industry specification group for Mobile Edge Computing (MEC) in September 2014, aiming to bring computational power into the mobile radio access network (RAN) and to promote virtualization of software at the radio edge. MEC enables the edge of the network to run in an environment isolated from the rest of the network and provides access to local resources and data. ETSI is developing a system architecture and standardizing a number of APIs for what is essentially mobile edge cloud computing.

The Cloudlet concept was proposed at Carnegie Mellon University (CMU). A cloudlet can be seen as the middle tier of a 3-tier hierarchy, mobile device – cloudlet – cloud, or, in other words, as a "data center in a box" whose goal is to "bring the cloud closer".
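
As a rough illustration only (not part of the tutorial material), the following Python sketch shows how a device might pick an execution tier in such a hierarchy, preferring the nearby cloudlet and falling back to the distant cloud; all names and latency figures are hypothetical.

# Illustrative sketch of 3-tier offloading: mobile device -> cloudlet -> cloud.
# Endpoint names and latency figures are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionTier:
    name: str
    rtt_ms: float      # estimated round-trip time to the tier
    available: bool    # whether the tier is currently reachable

def choose_tier(cloudlet: ExecutionTier, cloud: ExecutionTier,
                latency_budget_ms: float) -> Optional[ExecutionTier]:
    """Prefer the nearby cloudlet when it is reachable and meets the latency
    budget; otherwise fall back to the distant cloud; otherwise return None,
    meaning the task runs locally on the device."""
    if cloudlet.available and cloudlet.rtt_ms <= latency_budget_ms:
        return cloudlet
    if cloud.available and cloud.rtt_ms <= latency_budget_ms:
        return cloud
    return None

if __name__ == "__main__":
    cloudlet = ExecutionTier("campus-cloudlet", rtt_ms=5.0, available=True)
    cloud = ExecutionTier("central-cloud", rtt_ms=80.0, available=True)
    tier = choose_tier(cloudlet, cloud, latency_budget_ms=20.0)
    print("offload to:", tier.name if tier else "local device")

The point of the sketch is only that the cloudlet sits between the device and the cloud and is chosen first when latency matters.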

While the above three approaches emerged relatively independently of each other, they are complementary and partially overlapping. This tutorial discusses these distributed cloud computing solutions, identifying their similarities and differences, presenting use cases, and pointing out areas requiring further research in order for the Fog, Mobile Edge Computing, and Cloudlet concepts to be adopted by industry as extensions to classical cloud computing.

 

T2. How to Tell Apart the Good from the Bad: Setting Thresholds in Software Engineering
Prof. Dr. Luigi Lavazza, Università degli Studi dell’Insubria, Italy

Estimation plays a fundamental role in software development. For instance, during development managers need to decide whether or not a piece of software should undergo quality improvement activities. Taking correct decisions is crucial: applying quality improvement initiatives to software parts that are already of good quality wastes resources, while not applying them to software parts that need them means releasing software of poor quality, which often entails large correction costs and possibly damage to the company's image.

To take correct decisions, managers need quantitative models that let them assess the quality of software. However, models alone are not sufficient. Managers also need thresholds, so that software modules below the threshold are classified as being of acceptable quality, while modules above the threshold are classified as 'bad' and hence have to undergo some quality improvement initiative.
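
As a minimal sketch of such a decision rule (the module names, the measure, and the threshold value below are invented for illustration), a threshold simply partitions the modules into 'acceptable' and 'bad':

# Minimal sketch of threshold-based classification of software modules.
# Module names, the measure, and the threshold are purely illustrative.
modules = {
    "parser.c": 32,      # e.g., McCabe cyclomatic complexity
    "scheduler.c": 7,
    "ui_utils.c": 14,
}
THRESHOLD = 10  # modules above this value are flagged as 'bad'

for name, measure in modules.items():
    verdict = "needs quality improvement" if measure > THRESHOLD else "acceptable"
    print(f"{name}: {measure} -> {verdict}")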

In the literature, several methods have been proposed to set thresholds on software measures. However, many of these methods fail to connect the measures of internal software properties (such as lines of code, McCabe complexity, or coupling and cohesion measures) to externally perceivable software qualities (such as the faultiness of a software module). As a result, the proposed thresholds are often ineffective.

The proposed tutorial addresses the following topics:
- the ineffectiveness of thresholds that ignore externally perceivable qualities;
- how to build quality models based on easily measurable internal code measures;
- a few techniques to set thresholds (a sketch follows this list), in particular:
a) how to set risk-averse thresholds based on the quality function;
b) how to establish optimistic and pessimistic thresholds, which are able to isolate uncertain cases.
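
As a hedged illustration of how such thresholds might be derived (the tutorial's own techniques may differ), the Python sketch below fits a simple logistic-regression fault-proneness model on synthetic data, uses it as the quality function, and inverts it to obtain a risk-averse threshold as well as optimistic and pessimistic thresholds that bracket the uncertain cases. It assumes numpy and scikit-learn are available, and every number in it is a placeholder.

# Hedged sketch: derive thresholds on an internal measure (lines of code)
# from an estimated fault-proneness model. Logistic regression is used only
# as an example of a quality function; the data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data set: module size (LOC) and whether a fault was observed.
loc = rng.integers(20, 2000, size=300).reshape(-1, 1)
p_true = 1.0 / (1.0 + np.exp(-(loc.ravel() - 800) / 250.0))
faulty = rng.random(300) < p_true

model = LogisticRegression().fit(loc, faulty)
b0, b1 = model.intercept_[0], model.coef_[0][0]

def loc_threshold(accepted_fault_risk: float) -> float:
    """Invert the logistic model: the LOC value at which the estimated
    probability of faultiness equals the accepted risk level."""
    return (np.log(accepted_fault_risk / (1.0 - accepted_fault_risk)) - b0) / b1

risk_averse = loc_threshold(0.10)   # flag a module even at 10% estimated risk
pessimistic = loc_threshold(0.20)   # above this, a module is at least suspect
optimistic = loc_threshold(0.50)    # above this, a module is more likely faulty than not

print(f"risk-averse threshold : {risk_averse:7.1f} LOC")
print(f"pessimistic threshold : {pessimistic:7.1f} LOC")
print(f"optimistic threshold  : {optimistic:7.1f} LOC")
print("modules between the pessimistic and optimistic thresholds are the uncertain cases")

Any real application would of course replace the synthetic data with measures and fault data collected from actual projects.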

Empirical data from real-life projects will be analyzed throughout the tutorial.

 
 
