JWE Abstracts 

Vol. 16, No. 3&4, June 1, 2017

New Advances in Adaptability and Rapid Evolution of Technology in the Development of Web Information Systems

Editorial (pp181-182)
       
Francisco J. Domínguez-Mayo, Julián A. García-García, and Laura García Borgoñón

Towards Fast Metamodel Evolution in LIQUIDML (pp183-211)
       
Esteban Robles Luna, Gustavo Rossi, José Matías Rivero, Francisco J. Domínguez-Mayo, Julián A. García-García, and María J. Escalona
The software industry is applying Model-driven development approaches due to a core set of benefits, such as raising the level of abstraction and reducing coding errors. However, their underlying modeling languages tend to be quite static, making their evolution hard, particularly when the corresponding metamodel does not support primitives and/or functionalities required in specific business domains. This paper presents an extension to the LiquidML language that supports fast metamodel evolution by allowing experts to abstract new language concepts from primitives, while supporting automatic tool evolution and zero application downtime. To prove our claims, we evaluate the evolutionary capabilities of existing modeling languages and LiquidML in a real-world language extension.
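As a rough illustration of the idea (not LiquidML's actual API; every class name below is invented for the demo), the sketch shows how a runtime concept registry could let an expert abstract a new language concept from existing primitives, so the tool gains the concept without redeployment:

```python
# Hypothetical sketch: composing a new metamodel concept from existing
# primitives at runtime, so the modeling tool needs no redeployment.
# Primitive, Concept, and MetamodelRegistry are illustrative names only.

class Primitive:
    """A built-in language primitive (e.g., an HTTP call or a logging step)."""
    def __init__(self, name, action):
        self.name = name
        self.action = action

    def run(self, payload):
        return self.action(payload)

class Concept:
    """A higher-level concept abstracted from a pipeline of primitives."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps

    def run(self, payload):
        for step in self.steps:
            payload = step.run(payload)
        return payload

class MetamodelRegistry:
    """Registry that grows at runtime: new concepts appear with zero downtime."""
    def __init__(self):
        self.concepts = {}

    def define(self, name, steps):
        self.concepts[name] = Concept(name, steps)

    def instantiate(self, name):
        return self.concepts[name]

registry = MetamodelRegistry()
fetch = Primitive("fetch", lambda url: {"url": url, "status": 200})
log = Primitive("log", lambda data: (print("LOG:", data), data)[1])
# A domain expert abstracts a new concept from the two primitives:
registry.define("MonitoredFetch", [fetch, log])
print(registry.instantiate("MonitoredFetch").run("http://example.org"))
```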

Applying a Model-Based Methodology to Develop Web-Based Systems of Systems (pp212-227)
       
M.A. Barcelona, Laura García Borgoñón, G. López-Nicolás, I. Ramos, and María J. Escalona
Systems of Systems (SoS) are emerging applications composed of subsystems that interact in a distributed and heterogeneous environment. Web-based technologies are a current trend for achieving SoS user interaction. Model Driven Web Engineering (MDWE) is the application of Model Driven Engineering (MDE) to the Web development domain. This paper presents an MDWE methodology for including Web-based interaction in SoS development. It is composed of ten models and seven model transformations, and it is fully implemented in a support tool for use in practice. Quality aspects, covered through traceability from the requirements to the final code, are also discussed. The feasibility of the approach is validated by its application to a real-world project. A preliminary analysis of potential benefits (reduction of effort, time, and cost; improvement of quality; design-to-code ratio; etc.) is carried out by comparison with another project, as an initial hypothesis for planned future experimental research.

Identifying Functional Requirements Inconsistencies in Multi-Team Projects Framed into a Model-Based Methodology (pp228-251)
       
Julián A. García-García, M. Urbieta, María J. Escalona, Gustavo Rossi, and J.G. Enríquez

REP (Requirements Engineering Process) is one of the most essential processes within the software project life cycle because it allows describing software product requirements. This specification should be as consistent as possible to enable suitable estimation of the effort required to obtain the final product. REP is complex in itself, but this complexity is greatly increased in big, distributed, and heterogeneous projects with multiple analyst teams and high integration among functional modules. This paper presents an approach for the systematic conciliation of functional requirements in big projects that follow a model-based approach. It also explains how this approach may be implemented in the context of the NDT (Navigational Development Techniques) methodology and, finally, it describes a preliminary evaluation of our proposal in the CALIPSOneo project by analyzing the improvements obtained with our approach.

Other Research Articles

An Approach of Web Service Organization Using Bayesian Network Learning (pp252-276)
       
J.X. Liu and Z.H. Xia
How to organize and manage Web services, and how to help users quickly select atomic services or sets of correlated services that meet their functional and non-functional requirements, is a key problem to be solved in the era of services computing. This paper proposes an approach to Web service organization based on Bayesian network learning. First, it uses a three-stage dependency Bayesian network structure-learning method to organize service clusters that realize different functions. Then it uses maximum likelihood estimation and Bayesian estimation methods for parameter learning, obtaining the conditional probability tables (CPTs) of all the nodes. This method can help users quickly and accurately select a set of well-functioning services from the organized services. Finally, the effectiveness of the proposed method is validated through experiments and a case study.
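To make the parameter-learning step concrete, here is a minimal Python sketch (not the paper's implementation) of estimating a CPT from observed state pairs: with alpha=0 it performs maximum likelihood estimation, and with alpha>0 it performs Bayesian estimation under a symmetric Dirichlet prior. The toy service/QoS data is invented for the demo:

```python
from collections import Counter, defaultdict

def learn_cpt(observations, child_states, alpha=0.0):
    """Estimate P(child | parent) from (parent, child) pairs.

    alpha = 0.0  -> maximum likelihood estimation
    alpha > 0.0  -> Bayesian estimation with a symmetric Dirichlet prior
    """
    joint = Counter(observations)                    # counts N(parent, child)
    marginal = Counter(p for p, _ in observations)   # counts N(parent)
    cpt = defaultdict(dict)
    for p in marginal:
        denom = marginal[p] + alpha * len(child_states)
        for c in child_states:
            cpt[p][c] = (joint[(p, c)] + alpha) / denom
    return dict(cpt)

# Toy data: parent = service cluster, child = observed QoS level (invented).
data = [("search", "fast"), ("search", "fast"), ("search", "slow"),
        ("payment", "slow"), ("payment", "fast")]
print(learn_cpt(data, ["fast", "slow"]))             # MLE
print(learn_cpt(data, ["fast", "slow"], alpha=1.0))  # Bayesian (Laplace) estimate
```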

A Complete Privacy Preservation System for Data Mining Using Function Approximation (pp277-292)
       
V. Rajalakshmi, M. Lakshmi, and V. Maria Anu
Data privacy has become a primary concern in the current scenario, as there are many pioneering methods for efficient mining of data. There are many algorithms that preserve privacy and handle the trade-off between privacy and utility. The ultimate goal of these algorithms is to anonymize the data without reducing their utility. A privacy-preserving procedure should have minimal execution time, execution time being the main overhead of clustering algorithms implemented using classical methods. There is also no single procedure that completely handles the trade-off and also updates itself automatically. In this work, anonymization is implemented using a Radial Basis Function (RBF) network, which provides both maximum privacy and utility with a proper tuning parameter specified between privacy and utility. The network also updates itself when the trend of the data changes, by controlling the maximum amount of error with a threshold value.
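The core mechanism, RBF function approximation, can be sketched in a few lines of NumPy. This is only an illustration of the technique, not the paper's system; the centers, width, and data below are assumptions made for the demo (fewer centers or a larger width distorts the reconstruction more, trading utility for privacy):

```python
# Minimal sketch of Radial Basis Function (RBF) approximation with NumPy.
# Illustrates only the core mechanism: representing attribute values as a
# weighted sum of Gaussian basis functions fitted by least squares.
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian basis matrix Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)            # original attribute values (invented)
y = x                                  # target: reproduce the attribute

centers = np.linspace(0, 10, 8)        # fewer centers => coarser, more private
width = 2.0                            # larger width => smoother, more distortion
Phi = rbf_design(x, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # fit weights by least squares

x_anon = Phi @ w                       # reconstructed ("anonymized") values
print("mean absolute distortion:", np.mean(np.abs(x - x_anon)))
```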

Prediction of Defect Density for Open Source Software using Repository Metrics (pp293-310)
       
Dinesh Verma and Shishir Kumar
Open source software refers to software with unrestricted access for use or modification. Many software development organizations are using the open source methodology in their development process. Many developers can work in parallel on an open source project, using the web as a shared resource. The defect density of such projects often needs to be predicted in order to ensure quality standards. Static metrics for defect density prediction require extraction of abstract information from the code. Repository metrics, on the other hand, are easy to extract from repository data sets. In this paper, an analysis has been performed over repository metrics of open source software. Further, defect density is predicted using these metrics both individually and jointly. Sixty-two open source software projects are considered for analysis, using Simple and Multiple Linear Regression as the statistical procedures. The results reveal a statistically significant level of acceptance for prediction of defect density using a few repository metrics, individually and jointly.
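A minimal sketch of the statistical procedure, using NumPy least squares; the metric names and values are invented for the demo and do not reproduce the study's data:

```python
# Illustrative sketch: predicting defect density from repository metrics with
# simple and multiple linear regression via ordinary least squares.
import numpy as np

# Rows: projects; columns: repository metrics (e.g., commits, contributors,
# repository size). All values are invented for the demo.
X = np.array([[120,  5,  30],
              [450, 12, 110],
              [800, 25, 260],
              [300,  8,  75],
              [600, 18, 190]], dtype=float)
y = np.array([0.8, 1.5, 2.9, 1.1, 2.2])   # defects per KLOC (invented)

# Simple linear regression on the first metric alone:
A1 = np.column_stack([np.ones(len(X)), X[:, 0]])
b1, *_ = np.linalg.lstsq(A1, y, rcond=None)
print("simple:   density ~= %.3f + %.5f * commits" % (b1[0], b1[1]))

# Multiple linear regression on all metrics jointly:
A = np.column_stack([np.ones(len(X)), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
print("multiple: intercept and coefficients:", np.round(b, 5))
```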

Load-Time Reduction Techniques for Device-Agnostic Web Sites (pp311-346)
       
Eivind Mjelde and Andreas L. Opdahl
Modern device-agnostic web sites aim to offer web pages that adapt themselves seamlessly to the front-end equipment they are displayed on, whether it is a desktop computer, a mobile device, or another type of equipment. At the same time, mobile devices and other front-end equipment with limited processing power, screen resolution, and network capacity have become common, making front-end performance optimisation in general, and load-time reduction in particular, a central concern. The importance of load-time reduction is exacerbated by the proliferation of multimedia content on the web. This paper therefore reviews, evaluates, and compares available load-time reduction techniques for device-agnostic web sites, grouped into techniques that improve client-server communication, optimise UI graphics, optimise textual resources, and adapt content images to context. We evaluate the techniques on a case web site using both desktop and mobile front-ends, in a variety of settings, and over both HTTP/1.1 and HTTP/2. We show that each technique has its pros and cons, and that many of them are likely to remain useful even as HTTP/2 becomes widespread. Most techniques were clearly beneficial under at least one of the conditions we evaluated, but most of them were also detrimental in certain cases, sometimes drastically so. Hence, load-time reduction techniques for device-agnostic web sites must be selected with care, based on a solid understanding both of the usage context and of the trade-offs between the techniques.
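As a small taste of one technique family from the paper, optimising textual resources, the stdlib-only Python sketch below compares a CSS snippet's size raw, naively minified, and gzip-compressed; the snippet and the whitespace-collapsing "minifier" are simplifications for the demo:

```python
# Rough sketch: size of a textual resource raw, whitespace-stripped
# ("minified", naively), and gzip-compressed, as a server typically serves it.
import gzip
import re

css = """
body {
    margin: 0;
    font-family: sans-serif;
}
.card {
    padding: 16px;
    border-radius: 4px;
}
""" * 50   # repeat the snippet so compression effects are visible

minified = re.sub(r"\s+", " ", css).strip()   # naive minification for the demo

for label, text in [("raw", css), ("minified", minified)]:
    data = text.encode()
    print(f"{label:9s} {len(data):6d} bytes  gzip: {len(gzip.compress(data)):6d} bytes")
```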
