JWE Abstracts 

Vol. 11, No. 4, December 1, 2012

Survey: 

Ontology Learning: Revisited (pp269-289)
       
Ahmad Abdollahzadeh Barforoush and Ali Rahnama
The term "ontology" comes from the field of philosophy that is concerned with the study of being or existence. In general computer science defines ontology as an "explicit specification of a conceptualization," which is, "the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them". Ontologies hold a great importance to modern knowledge based systems. They enable shared knowledge and reuse where information resources can be communicated between human or software agentsand should be machine readable. Manual construction of ontologies is an expensive and time consuming task. An answer to this problem is to provide an automatic or semi- automatic tool for ontology construction. Over the past years, this field of research has not yet reached the goal of fully automating the ontology development process. In this paper we will review the ontology creation process with the help of ontology learning (OL) and extend our previous OL framework. We will examine OL applications with respect to the extensions of our framework. And last we will define a roadmap for future work.

Research Articles: 

WebFDM: a Situational Method for the Development of Web Applications (pp290-316)
       
Adelaide Bianchini, Ascander Suarez, and Carlos A. Perez
Several methodologies have been proposed over the last decade to improve the quality of Web application development. Some proposals provide techniques and mechanisms to specify the product model; others focus on process development models. However, few approaches offer methods adapted to different situations and development circumstances. Moreover, some industrial and academic methods are not flexible enough to adapt to the varying situations and project conditions under which applications are developed. These conditions may include application type and complexity, the models to be created, development team characteristics, and technological resources, among others. This paper presents WebFDM, a method grounded in Situational Method Engineering principles for the development of web applications, together with a CASE tool, Cohesión. The Kanon framework, used to characterize Web development situations, is also described.
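
The abstract does not spell out the attributes Kanon uses to characterize a situation, so the sketch below is only a hypothetical illustration of the general idea: describe the project conditions it mentions (application type, complexity, team characteristics, technological resources) as data, and let a situational method select method fragments from them. All attribute and fragment names are invented.

```typescript
// Hypothetical characterization of a Web development situation. The names are
// illustrative and do not reproduce the actual Kanon framework vocabulary.

interface Situation {
  applicationType: "informational" | "transactional" | "collaborative";
  complexity: "low" | "medium" | "high";
  teamSize: number;
  teamWebExperience: "novice" | "experienced";
  modelsToCreate: string[];         // e.g. data, navigation, presentation
  technologicalResources: string[]; // frameworks, hosting, CASE tools
}

// A situational method then assembles method fragments that fit the situation.
function selectMethodFragments(s: Situation): string[] {
  const fragments = ["requirements elicitation"];
  if (s.modelsToCreate.includes("navigation")) fragments.push("navigation modeling");
  if (s.complexity === "high") fragments.push("architecture design review");
  if (s.teamWebExperience === "novice") fragments.push("guided prototyping");
  return fragments;
}

const project: Situation = {
  applicationType: "transactional",
  complexity: "high",
  teamSize: 5,
  teamWebExperience: "novice",
  modelsToCreate: ["data", "navigation", "presentation"],
  technologicalResources: ["relational DB", "MVC framework"],
};

console.log(selectMethodFragments(project));
```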

The Design of E-Speranto – A Computer Language for Recording Multilingual Texts on the Web (pp317-336)
       
Grega Jakus, Jaka Sodnik, and Saso Tomazic
The present paper describes the design of E-speranto, a formal computer language for recording multilingual texts on the Web. The vocabulary and grammar of E-speranto are based on the international auxiliary language Esperanto, while its syntax is based on XML (eXtensible Markup Language). The latter is one of the key features of E-speranto, as it enables a natural integration of E-speranto documents into web pages. When a user visits such a web page, its content is interpreted and displayed in the user's preferred language. Because E-speranto is a formal language, it is much easier for computers to comprehend documents created in it than texts written in natural languages. Documents in E-speranto can be created directly with the aid of tools designed especially for this purpose. For a practical application of E-speranto, each linguistic group merely needs to develop an E-speranto interpreter for its own language. We designed a proof-of-concept implementation of the multilingual Web based on E-speranto. Testing confirmed the applicability of the concept and indicated guidelines for further development.
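
The abstract does not show the actual E-speranto markup, so the following is a hypothetical sketch of the general idea only: language-neutral content recorded with Esperanto-derived lexemes in XML-like markup, rendered into the reader's preferred language by a per-language interpreter. The element names, attributes, and dictionaries are invented and are not the real E-speranto grammar.

```typescript
// Toy illustration of interpreting a language-neutral document into a chosen
// target language. Markup and vocabulary are invented for this sketch.

type Lexicon = Record<string, string>;

// Per-language dictionaries mapping Esperanto-style lexemes to target words.
const lexicons: Record<string, Lexicon> = {
  en: { hundo: "dog", dormas: "sleeps" },
  sl: { hundo: "pes", dormas: "spi" },
};

// A toy language-neutral fragment (hypothetical XML-like markup).
const fragment = `<sentence><w lex="hundo"/><w lex="dormas"/></sentence>`;

// Extract the lexeme identifiers and render them with the chosen lexicon.
function interpret(xml: string, lang: string): string {
  const lex = lexicons[lang];
  const lexemes = [...xml.matchAll(/lex="([^"]+)"/g)].map(m => m[1]);
  return lexemes.map(l => lex[l] ?? l).join(" ");
}

console.log(interpret(fragment, "en")); // dog sleeps
console.log(interpret(fragment, "sl")); // pes spi
```

In this spirit, each linguistic group would supply its own lexicon and grammar rules, while the recorded document itself stays language-neutral.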

Reuse of JIT Compiled Code in JavaScript Engine (pp337-349)
       
Sanghoon Jeon and Jaeyoung Choi
JavaScript is a core language of web applications. As the most frequently used web language, it appears in more than 90% of web pages around the world, so the performance of JavaScript engines has become an important issue. In order to increase the execution speed of web applications, many JavaScript engines embed a JIT (just-in-time) compiler. However, a JIT compiler has to compile and execute an application at the same time, so this technique has been hard to apply to embedded systems, in which system resources are limited. In this paper, we present a technique for reusing JIT-compiled code in a JavaScript engine to reduce compilation overhead. In order to reuse JIT-compiled code, runtime dependencies in the compiled binary code must be resolved. For the experiment, we applied a direct binary code patching method to the SquirrelFish Extreme (SFX) JavaScript engine of WebKit. The experiment showed that saving compiled code increased the total compilation time of the modified SFX engine by up to 9.4%, while reusing it reduced compilation time by up to 49% (44% on average), depending on the web service.
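
The paper's method patches actual machine code emitted by SFX; the sketch below only illustrates the general caching-and-patching idea at a conceptual level, with invented data structures standing in for binary code and runtime-dependent addresses.

```typescript
// Conceptual sketch (not the actual SFX implementation) of reusing JIT-compiled
// code: compiled output is cached, and the offsets that hold runtime-dependent
// values are recorded so they can be patched when the cached code is reused.

interface CompiledCode {
  code: number[];       // stand-in for machine code bytes
  patchSites: number[]; // offsets that hold runtime-dependent addresses
}

const codeCache = new Map<string, CompiledCode>();

// "Compile" a function body, recording where a runtime address was embedded.
function jitCompile(source: string, runtimeBase: number): CompiledCode {
  const code = Array.from(source, c => c.charCodeAt(0));
  code.push(runtimeBase); // pretend an absolute address is embedded in the code
  return { code, patchSites: [code.length - 1] };
}

// Reuse cached code if available, patching its runtime-dependent slots;
// otherwise compile and cache it.
function getCode(source: string, runtimeBase: number): CompiledCode {
  const cached = codeCache.get(source);
  if (cached) {
    for (const site of cached.patchSites) cached.code[site] = runtimeBase;
    return cached;
  }
  const fresh = jitCompile(source, runtimeBase);
  codeCache.set(source, fresh);
  return fresh;
}

const a = getCode("function f(){return 1}", 0x1000);
console.log(a.code[a.code.length - 1]); // 4096: freshly compiled
const b = getCode("function f(){return 1}", 0x2000);
console.log(b.code[b.code.length - 1]); // 8192: reused and patched
```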

Popularity-Based Relevance Propagation (pp350-364)
       
Ehsan Mousakazemi, Mehdi Agha Sarram, and Ali Mohammad Zareh Bidoki
Information resources on the World Wide Web (WWW) are growing rapidly and at an unpredictable rate. Under these circumstances, web search engines help users find useful information, and ranking the retrieved results is the main challenge of every search engine. There are ranking algorithms based on content and connectivity, such as BM25 and PageRank. Due to the low precision of these algorithms for web ranking, combinational algorithms have been proposed. Recently, relevance propagation methods, one of the salient classes of combinational algorithms, have attracted the attention of many information retrieval (IR) researchers. In these methods, content-based attributes are propagated from one page to another through the web graph. In this paper, we propose a generic method for exploiting the estimated popularity degree of pages (such as their PageRank score) to improve the propagation process. Experimental results on the TREC 2003 and 2004 datasets gathered in the Microsoft LETOR 3.0 benchmark collection show that this idea improves the precision of the corresponding models without any additional online complexity.
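
The exact combination rule is not given in the abstract, so the sketch below only illustrates the general idea of weighting relevance propagation by an estimated popularity score such as PageRank; the graph, weights, and propagation rule are invented for illustration.

```typescript
// A sketch of popularity-weighted relevance propagation over a tiny web graph:
// each page keeps part of its own content-based score and passes the rest to
// the pages it links to, scaled by its estimated popularity.

interface Page {
  relevance: number;  // content-based score, e.g. from BM25
  popularity: number; // estimated popularity, e.g. PageRank
  outLinks: string[];
}

const graph: Record<string, Page> = {
  a: { relevance: 0.9, popularity: 0.5, outLinks: ["b", "c"] },
  b: { relevance: 0.2, popularity: 0.3, outLinks: ["c"] },
  c: { relevance: 0.1, popularity: 0.2, outLinks: [] },
};

// One propagation step with a self-retention factor alpha.
function propagate(g: Record<string, Page>, alpha = 0.7): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const id of Object.keys(g)) scores[id] = alpha * g[id].relevance;
  for (const page of Object.values(g)) {
    for (const target of page.outLinks) {
      scores[target] += (1 - alpha) * page.popularity * page.relevance;
    }
  }
  return scores;
}

console.log(propagate(graph));
```

Because the popularity scores and the web graph are known offline, such weighting adds no extra work at query time, which matches the abstract's claim of no additional online complexity.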
