The Thirteenth International Multi-Conference on Computing in the Global Information Technology

ICCGI 2018

June 24, 2018 to June 28, 2018 - Venice, Italy

Deadlines

Submission

Mar 07, 2018

Notification

Apr 09, 2018

Registration

Apr 23, 2018

Camera ready

May 04, 2018

Deadlines differ for special tracks. Please consult the conference home page for the special tracks' Calls for Papers (if any).

Publication

Published by IARIA Press (operated by Xpert Publishing Services)

Archived in the Open Access IARIA ThinkMind Digital Library

Prints available at Curran Associates, Inc.

Authors of selected papers will be invited to submit extended versions to an IARIA Journal

ICCGI 2018: Tutorials

T1. The Deployment of a New Component in a Mature Distributed Environment
Prof. Dr. George Blankenship, George Washington University, USA

Consider a complex distributed processing environment with N systems performing interrelated computations. K1 of the systems perform independent computations; these computations may use local data and may use data from other systems. K2 of the systems aggregate the data produced by the independent systems for an overall computation. The processing model is built upon the expectation that the N systems create an environment that operates reliably, consistently, and without error. The environment also operates largely autonomously; that is, a large percentage of exceptional conditions are handled without manual intervention. The introduction of a new component must therefore be accomplished with extreme care, so that the existing distributed environment is not perturbed. The integration approach must be based on the assumption that the existing system is operationally correct; this assumption simplifies the integration effort, since any other approach would force a re-analysis of the whole environment.
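As a rough sketch of this processing model (all class and function names below are hypothetical, not from the tutorial), the K1 independent systems and K2 collectors can be modeled as follows:

```python
# Minimal sketch of the N = K1 + K2 topology described above.
# Names and the toy computations are illustrative assumptions.

import random


class ComputeNode:
    """One of the K1 systems performing an independent computation."""

    def __init__(self, node_id):
        self.node_id = node_id

    def compute(self, peer_data=None):
        # Uses local data, optionally augmented by data from other systems.
        local = random.random()
        return local + sum(peer_data or [])


class CollectorNode:
    """One of the K2 systems aggregating results for an overall computation."""

    def __init__(self, sources):
        self.sources = sources

    def overall(self):
        return sum(node.compute() for node in self.sources)


if __name__ == "__main__":
    k1_nodes = [ComputeNode(i) for i in range(5)]             # K1 = 5
    collectors = [CollectorNode(k1_nodes) for _ in range(2)]  # K2 = 2
    for c in collectors:
        print(c.overall())
```

The sketch makes the coupling visible: every collector depends on every independent system, so a misbehaving new component can perturb all overall computations.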

This tutorial will explore the integration of a new system into an existing mature distributed processing environment. The basis for any integration is a proper System Development Life Cycle (SDLC) approach. The SDLC starts with a definition of the new system, which is to provide a new capability to the existing environment. The focus is on the new capability, and the existing environment is often regarded as a distraction from the addition; that view is a trap. The key to a successful addition is the inclusion of the integration requirements in the first step of the SDLC. The integration requirements must be given importance and attention on a par with the new capability requirements: the new system must be treated as a modification of the existing environment.

The first phase of any newly instantiated SDLC is the creation of a requirements document, which is the basis for the functionality specification and the validation plan of the new system. A critical complication lies in the source documentation used to define the current environment: every specification document is stale the minute it is published, because changes are not promptly incorporated into documents. This complication can be mitigated by building a reference engine that verifies the integration requirements and serves as a tool to validate the new system's readiness to move forward through the SDLC phases. The second phase of the SDLC is the testing performed to validate the correctness of the new system. This phase has a critical complication as well: the existing systems can only send correct data and cannot generate exceptional conditions. Once again, the complication can only be resolved by developing a special tool.
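A minimal sketch of such a reference engine, assuming a hypothetical message schema (the REQUIRED_FIELDS table below is illustrative, not drawn from any real specification), is a checker that reports every way a candidate message violates the integration requirements:

```python
# Hedged sketch of a "reference engine": validates candidate messages
# against integration requirements. The schema is a made-up assumption;
# a real engine would be derived from the environment's interface specs.

REQUIRED_FIELDS = {"msg_id": int, "source": str, "payload": dict}


def validate(message: dict) -> list[str]:
    """Return a list of violations; an empty list means compliant."""
    violations = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in message:
            violations.append(f"missing field: {field}")
        elif not isinstance(message[field], ftype):
            violations.append(
                f"wrong type for {field}: {type(message[field]).__name__}"
            )
    return violations


print(validate({"msg_id": 1, "source": "new-system", "payload": {}}))  # []
print(validate({"msg_id": "1", "source": "new-system"}))  # two violations
```

Run continuously against the new system's traffic, such a checker doubles as the readiness gate for advancing through the SDLC phases.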

When adding to an existing system, two truths hold. The first is that an existing lab cannot create special message sequences. The second is that an existing lab cannot create exceptional conditions such as lost messages or naturally occurring errors. The focus of the tutorial is the building of a tool that can be used to validate the new system's ability to join the existing environment. Developing this special tool enables the construction of a test laboratory that can validate the operation of the existing system, the new system's compliance with the requirements, and the new system's ability to deal with exceptional conditions.
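One possible shape for this special tool, sketched under the assumption of simple dictionary-valued messages and illustrative fault rates, is a replay function that injects exactly the conditions the live environment never produces:

```python
# Sketch of the special test tool: replays a scripted message sequence
# while injecting exceptional conditions (lost, duplicated, or corrupted
# messages). All names and rates are illustrative assumptions.

import random


def inject_faults(messages, drop_rate=0.1, dup_rate=0.05,
                  corrupt_rate=0.05, seed=42):
    rng = random.Random(seed)  # seeded so every test run is reproducible
    out = []
    for msg in messages:
        r = rng.random()
        if r < drop_rate:
            continue                              # simulate a lost message
        if r < drop_rate + dup_rate:
            out.extend([msg, msg])                # simulate a duplicate
            continue
        if r < drop_rate + dup_rate + corrupt_rate:
            out.append({**msg, "payload": None})  # simulate corruption
            continue
        out.append(msg)
    return out


script = [{"msg_id": i, "payload": {"v": i}} for i in range(10)]
print(inject_faults(script))
```

Seeding the random generator keeps each injected fault sequence reproducible, so a failed validation run can be replayed exactly.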

T2. How to Represent Causality and Guide Development of Social Analytics Measures
Dr. Dennis J. Folds, Lowell Scientific Enterprises, USA

Logic modeling is a technique used to represent causal relationships among social phenomena of interest. It is of particular interest in the context of program evaluation, in which there is a need to determine the effectiveness of some sort of intervention planned to cause change in those social phenomena. For example, a government agency might plan to provide a training and education program aimed at unemployed workers displaced by economic upheaval. The first step in program evaluation is to specify the logic model in terms of the constructs of interest and how they are causally related. Next the intervention is described in terms of the actions to be taken and the outputs those actions will generate. The heart of the logic model is to specify the causal relationships between those outputs and short-term, medium-term, and long-term changes in the social phenomena of interest. The logic model may be based on empirical research if available, on social and economic theories where applicable, and on rational conjecture. Once the logic model is developed, it is important to specify the measurement model that will guide development of measurement instruments and the timing of their application in data collection for program evaluation purposes. As a general rule, three or four observable indicators will be required for each construct in the model (unless there is a single perfect indicator for a construct.) The set of three or four observable indicators for a given construct collectively allow estimates of the values for the construct based on the correlations among those indicators. Some of these measures may require administration of surveys or access to existing data, such as from census records. It is useful if some of the measures can be obtained through social analytic techniques, such as sentiment analysis of social media content. Once the measurement model is specified and validated, assessments of program effectiveness can begin. In some cases it is possible to use structural equations modeling, confirmatory factor analysis, or similar statistical methods to test the measurement model and to evaluate program effectiveness. In many cases, however, the number of observations is too small for such measures to be appropriate. Alternate methods, such as case tracing and indicator synchrony must be used.