Journal of
Systemics, Cybernetics and Informatics
 

 ISSN: 1690-4524 (Online)    DOI: 10.54808/JSCI



TABLE OF CONTENTS





A Light-weight Method for Trace Analysis to Support Fault Diagnosis in Concurrent Systems
Andrej Pietschker, Andreas Ulrich
Pages: 1-6
ABSTRACT:
This paper discusses a light-weight, XML-based approach to the analysis of traces of partially ordered events collected during the execution of a concurrent or distributed system. Traces contain information about the creation and termination of threads or objects, the exchange of messages, and other types of communication among them. Traces are transformed according to property patterns, visualised, and analysed to support fault diagnosis of concurrent systems. We present an approach using XML technology and report the findings of an initial industrial project.


A Novel Methodology for Extracting Colon’s Lumen from Colonoscopic Images
Shunren Xia, Shankar M. Krishnan, Marta P. Tjoa, Peter M.Y. Goh
Pages: 7-12
ABSTRACT:
Recently, computer-assisted diagnosis on colonoscopic images has been attracting increasing attention from researchers worldwide, and the colon’s lumen is the most important feature in this process. In this paper, a novel methodology for extracting the colon’s lumen from colonoscopic images is presented. First, in order to eliminate the background outside the colonoscopic image, an effective and simple method similar to the Hough transform is used to detect the preliminary region of interest (pROI). Then the original image is segmented in two steps: a relaxation process and a tightening process. The relaxation process finds all valleys in the histogram of a defined homogeneity function to produce as many homogeneous regions as possible, while the tightening process subsequently merges unnecessary regions according to the color difference between them in the CIE (L* a* b*) color space. After a series of postprocessing procedures, the lumen is successfully extracted. An extensive set of endoscopic images is tested to demonstrate the effectiveness of the proposed approach.
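The tightening process described in the abstract merges regions whose mean colors are close in CIE (L* a* b*) space. A minimal sketch of such a merge step is given below; the CIE76 (Euclidean) color distance, the greedy closest-pair strategy, and the threshold of 10 are illustrative assumptions, not the authors' exact procedure.

```python
import math

def delta_e76(lab1, lab2):
    # CIE76 color difference: Euclidean distance between (L*, a*, b*) triples
    return math.dist(lab1, lab2)

def merge_regions(regions, threshold=10.0):
    # Greedy "tightening" sketch: repeatedly merge the pair of regions whose
    # mean Lab colors are closest, as long as their difference stays under
    # the (assumed) threshold. regions: list of (mean_lab, pixel_count).
    regions = list(regions)
    merged = True
    while merged and len(regions) > 1:
        merged = False
        best = None
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                d = delta_e76(regions[i][0], regions[j][0])
                if d < threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best:
            _, i, j = best
            (c1, n1), (c2, n2) = regions[i], regions[j]
            n = n1 + n2
            # Pixel-count-weighted mean color of the merged region
            mean = tuple((a * n1 + b * n2) / n for a, b in zip(c1, c2))
            del regions[j]
            regions[i] = (mean, n)
            merged = True
    return regions
```

For example, two regions with mean colors (50, 0, 0) and (51, 0, 0) would be merged into one region with mean (50.5, 0, 0), while a region at (90, 0, 0) would stay separate under the assumed threshold.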


A qualitative and statistical analysis of students' perceptions in Internet learning
Edwin K. W. Cheung
Pages: 13-21
ABSTRACT:
As the global economy is increasingly becoming knowledge-based and knowledge-intensive, many experts and professionals predict that there will be a huge demand for educational products. Apparently, a major part of this demand is being met by the emergence of tens of thousands of electronic courses delivered via the Internet by many education entities. Despite the huge number of these Internet courses, few researchers have addressed students' perceptions or experiences of Internet learning. Therefore, a study on students' Internet learning experience, as reported in this paper, is much needed. The results of this study show that both the students' competence in PC skills and their Internet surfing habits are significantly correlated with their usage of e-learning via the Internet. Additionally, the results also show that e-learning usage is significantly correlated with the respondents' feelings of enjoyment in using the Internet learning materials. Alarmingly, the respondents agreed that Internet learning increased their study workloads. Practitioners in the relevant fields can then make use of these findings when developing their e-learning courses.


A Software for the Analysis of Scripted Dialogs Based on Surface Markers
Sylvain Delisle, Mathieu Dugré, Bernard Moulin
Pages: 22-27
ABSTRACT:
Most information systems that deal with natural language texts do not tolerate much deviation from their idealized and simplified model of language. However, spoken dialog is notoriously ungrammatical. Because the MAREDI project focuses in particular on the automatic analysis of scripted dialogs, we needed to develop a robust capacity to analyze transcribed spoken language. This paper presents the main elements of our approach, which is based on exploiting surface markers as the best route to the semantics of the modelled conversation. We highlight the foundations of our particular conversational model and give an overview of the MAREDI system. The latter consists of three key modules: 1) a connectionist network to recognise speech acts, 2) a robust syntactic parser, and 3) a semantic analyzer. These three modules are fully implemented in Prolog and C++ and have been packaged into an integrated software tool.


Comparison of Communication Models for Mobile Agents
Xining Li
Pages: 28-33
ABSTRACT:
An agent is a self-contained process acting on behalf of a user. A Mobile Agent is an agent that roams the Internet to access data and services and carries out its assigned task remotely. This paper focuses on communication models for Mobile Agents. Generally speaking, communication models are concerned with how to name Mobile Agents, how to establish communication relationships, how to trace moving agents, and how to guarantee reliable communication. Some existing MA systems are purely based on RPC-style communication, whereas others adopt asynchronous message passing or event registration/handling. Different communication concepts suitable for Mobile Agents are discussed in detail in [1]. However, we investigate these concepts and existing models from a different point of view: how to track down agents and deliver messages in a dynamic, changing world.


Constructing a User Interface for Cellular Phones Using Equipment and its Relations
Misayo Kitamura, Taizo Kojima
Pages: 34-39
ABSTRACT:
In the domain of SCADA (Supervisory Control And Data Acquisition) systems, it is necessary to obtain information about plants, such as water plants in remote places, using a cellular phone in order to ascertain plant status in case of emergency. To utilize the small screen of a cellular phone and to eliminate the engineering cost of creating definition data to show plant status, a method of constructing a user interface using the equipment in the plant and its relations is proposed. In this method, some equipment is selected from all supervised equipment using the relations between the equipment, and then the content to be displayed is generated dynamically from the selected equipment. The equipment in a plant is organized as a graph structure, which involves the equipment and the relations between the equipment. The relations adopted in this method are both the physical connections between the equipment and their conceptual relationships. The result of the selection depends on the relations and their parameter values, called the context-dependent weight, which changes dynamically with the viewpoint.


Estimation model of training support system for Finite Element Analysis and implementation of training scenario as knowledge patterns
Mitsuhiro Murayama, Yusuke Munakata, Junya Ikeda, Shigeru Nagasawa
Pages: 40-45
ABSTRACT:
This paper presents an arrangement method for FEA modeling knowledge using a design pattern methodology, and deals with an estimation method for FEA training scenarios. The prototype support system and its training knowledge can be arranged and classified into several patterns; the mechanics of a beam structure was chosen as an example training subject. Essential evaluation problems were prepared to check the synthetic achievement, and the effectiveness of the training support system for beginners of the code MARC/MENTAT was investigated. Through simulation with the prototype training support system, an evaluation model for the synthetic achievement test of any training scenario was expressed as a degree vector. Regarding the reusability of training programs, the compression rate of iterated common operations was estimated for the prototype training support system.


Optimal Scale Edge Detection Utilizing Noise within Images
Adnan Khashman
Pages: 46-50
ABSTRACT:
Edge detection techniques share common problems that include poor edge detection in low contrast images, slow recognition, and high computational cost. An efficient solution to the edge detection of objects in low to high contrast images is scale space analysis. However, this approach is time consuming and computationally expensive. These expenses can be marginally reduced if an optimal scale is found in scale space edge detection. This paper presents a new approach to detecting objects within images using the noise within the images. The novel idea is based on selecting one optimal scale for the entire image at which scale space edge detection can be applied. The selection of an ideal scale is based on the hypothesis that "the optimal edge detection scale (ideal scale) depends on the noise within an image". This paper aims at providing experimental evidence on the relationship between the optimal scale and the noise within images.


Towards the Accuracy of Cybernetic Strategy Planning Models: Causal Proof and Function Approximation
Christian A. Hillbrand
Pages: 51-57
ABSTRACT:
All kinds of strategic tasks within an enterprise require a deep understanding of its critical key success factors and their interrelations, as well as an in-depth analysis of relevant environmental influences. Due to the openness of the underlying system, there seems to be an indefinite number of unknown variables influencing strategic goals. Cybernetic or systemic planning techniques try to overcome this intricacy by modeling the most important cause-and-effect relations within such a system. Although it seems obvious that there are specific influences between business variables, it is mostly impossible to identify the functional dependencies underlying such relations. Hence simulation or evaluation techniques based on such hypothetically assumed models deliver inaccurate results or fail completely.
This paper addresses the need for accurate strategy planning models and proposes an approach to prove their cause-and-effect relations by empirical evidence. Based on this foundation, an approach for the approximation of the underlying cause-and-effect function by means of Artificial Neural Networks is developed.



Transforming UML ‘Collaborating’ Statecharts for Verification and Simulation
Patrick O. Bobbie, Yiming Ji, Lusheng Liang
Pages: 58-63
ABSTRACT:
Due to the increasing complexity of real world problems, it is costly and difficult to validate today’s software-intensive systems. The research reported in this paper describes our experiences in developing and applying a set of methodologies for specifying, verifying, and validating system temporal behavior expressed as UML statecharts. The methods combine such techniques/paradigms and technologies as UML, XMI, databases, model checking, and simulation. The toolset we are developing accepts XMI input files as an intermediate metadata format. The metadata is then parsed and transformed into databases and related syntax-driven data structures. From the parsed data, we subsequently generate Promela code, which embodies the behavioral semantics and properties of the statechart elements. Compiling and executing the Promela code automatically invokes SPIN, the underlying temporal logic-based tool for checking the logical consistency of the statecharts’ interactions and properties. We validate and demonstrate our methodology by modeling and simulation using ArgoUML and Rhapsody™, respectively.


A Hybrid DWT-SVD Image-Coding System (HDWTSVD) for Color Images
Humberto Ochoa, K.R. Rao
Pages: 64-69
ABSTRACT:
In this paper, we propose the HDWTSVD system to encode color images. Before encoding, the color components (RGB) are transformed into YCbCr. Cb and Cr components are downsampled by a factor of two, both horizontally and vertically, before sending them through the encoder. A criterion based on the average standard deviation of 8x8 subblocks of the Y component is used to choose DWT or SVD for all the components. Standard test images are compressed based on the proposed algorithm.
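The selection criterion described above, based on the average standard deviation of 8x8 subblocks of the Y component, can be sketched as follows. The BT.601 luma weights and the decision threshold of 10 are assumptions for illustration; the paper does not state its exact threshold.

```python
import numpy as np

def rgb_to_y(rgb):
    # ITU-R BT.601 luma from an RGB image (H x W x 3, values in [0, 255]);
    # an assumed stand-in for the paper's RGB-to-YCbCr conversion.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def avg_block_std(y, block=8):
    # Average standard deviation over non-overlapping block x block subblocks
    h, w = y.shape
    h, w = h - h % block, w - w % block
    blocks = y[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3)).mean()

def choose_transform(y, threshold=10.0):
    # Hypothetical rule: high block activity -> DWT, smooth content -> SVD
    return "DWT" if avg_block_std(y) > threshold else "SVD"

flat = np.full((16, 16), 128.0)   # uniform image: zero block activity
noisy = np.random.default_rng(0).uniform(0, 255, (16, 16))
print(choose_transform(flat), choose_transform(noisy))  # prints: SVD DWT
```

Which transform the criterion favours for smooth versus detailed blocks is an assumption here; the abstract only states that the statistic drives the choice.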


Adaptive Image Restoration and Segmentation Method Using Different Neighborhood Sizes
Chengcheng Li, William J. B. Oldham
Pages: 70-75
ABSTRACT:
Image restoration methods based on the Bayesian framework and Markov random fields (MRF) have been widely used in the image-processing field. The basic idea of all these methods is to use the calculus of variations and mathematical statistics to average or estimate a pixel value from the values of its neighbors. After applying this averaging process to the whole image a number of times, the noisy pixels, which have abnormal values, are filtered out. Based on the Tea-trade model, which states that the closer the neighbor, the more contribution it makes, almost all of these methods use only the nearest four neighbors for calculation. In our previous research [1, 2], we extended the research on the CLRS algorithm (image restoration and segmentation using competitive learning) to enlarge the neighborhood size. The results showed that a longer neighborhood range could either improve or worsen the restoration results. We also found that the autocorrelation coefficient is an important factor in determining the proper neighborhood size. We further observed that the computational complexity increases dramatically as the neighborhood size is enlarged. This paper continues the previous research and discusses the tradeoff between computational complexity and the restoration improvement obtained by using a longer neighborhood range. We used several methods to construct synthetic images with exactly the correlation coefficients we wanted and to determine the corresponding neighborhood size. We constructed an image with a range of correlation coefficients by blending several synthetic images. An adaptive method to find the correlation coefficients of this image was then constructed. We restored the image by applying the CLRS algorithm with different neighborhood sizes to different parts of the image according to its correlation coefficient. Finally, we applied this adaptive method to some real-world images, obtaining better restoration results than with a single neighborhood size.
This method can be extended to virtually all methods based on the MRF framework and results in improved algorithms.
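The neighborhood-averaging idea underlying these MRF-based methods can be sketched as a single smoothing pass with a configurable neighborhood size. This is a generic illustration of the averaging step and its cost growth, not the CLRS competitive-learning algorithm itself.

```python
import numpy as np

def neighborhood_average(img, size=1):
    # One smoothing pass: replace each pixel with the mean of its neighbors
    # within a (2*size+1) x (2*size+1) window, the pixel itself excluded.
    # size=1 covers the classic nearest-neighbor case; larger sizes grow the
    # work roughly quadratically, the trade-off discussed in the abstract.
    padded = np.pad(img.astype(float), size, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    count = 0
    for dy in range(-size, size + 1):
        for dx in range(-size, size + 1):
            if dy == 0 and dx == 0:
                continue  # exclude the center pixel from its own estimate
            acc += padded[size + dy:size + dy + h, size + dx:size + dx + w]
            count += 1
    return acc / count
```

A constant image is left unchanged by the pass, while an isolated impulse (an "abnormal value") is replaced by the mean of its clean neighbors, which is the filtering effect the abstract describes.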


Calibration of an Automatic System Using a Laser Signature
Edward F. Plinski, Antoni Izworski, Jerzy S. Witkowski
Pages: 76-80
ABSTRACT:
A specific phenomenon that appears in tuned CO2 lasers, called a laser signature, is used as a standard for calibration of the servomechanism. The proposed servomechanism can be used for continuous investigations of the laser signatures of different laser media.


Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images
Y. M. Harry Ng, C. P. Kwong
Pages: 81-87
ABSTRACT:
Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes was corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for determining the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
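The computational approach above warps each color channel separately, since chromatic aberration means red, green, and blue are distorted by different amounts. A minimal sketch of one channel's correction is given below; the first-order radial model and nearest-neighbor sampling are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def undistort_channel(channel, k):
    # Inverse-map each output pixel through an assumed first-order radial
    # model r_d = r_u * (1 + k * r_u**2). Using a different k per channel
    # is what lets red, green, and blue be corrected by different amounts.
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    xu, yu = (xx - cx) / cx, (yy - cy) / cy        # normalized output coords
    r2 = xu ** 2 + yu ** 2
    xd, yd = xu * (1 + k * r2), yu * (1 + k * r2)  # where to sample the input
    xs = np.clip(np.round(xd * cx + cx).astype(int), 0, w - 1)
    ys = np.clip(np.round(yd * cy + cy).astype(int), 0, h - 1)
    return channel[ys, xs]                         # nearest-neighbor lookup
```

With k = 0 the mapping is the identity; a full correction would apply this with three channel-specific coefficients and then check how well the channels' centroids coincide, in the spirit of the proposed error function.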


Fault Tolerant Analysis For Holonic Manufacturing Systems Based On Collaborative Petri Nets
Fu-Shiung Hsieh
Pages: 88-95
ABSTRACT:
Uncertainties are significant characteristics of today’s manufacturing systems. Holonic manufacturing systems are a new paradigm for handling uncertainties and changes in manufacturing environments. Among the many sources of uncertainty, failure-prone machines are one of the most important. This paper focuses on handling machine failures in holonic manufacturing systems. A machine failure reduces the number of available resources, so feasibility analysis needs to be conducted to check whether the work in process can be completed. To facilitate feasibility analysis, we characterize feasible conditions for systems with failure-prone machines. This paper combines the flexibility and robustness of multi-agent theory with the modeling and analytical power of Petri nets to adaptively synthesize Petri net agents that control holonic manufacturing systems. The main results include: (1) a collaborative Petri net (CPN) agent model for holonic manufacturing systems, (2) a feasible condition to test whether a certain type of machine failure can be tolerated based on collaborative Petri net agents, and (3) a fault tolerant analysis of the proposed method.
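The kind of feasibility check described above can be illustrated with elementary Petri net semantics: a transition fires only if its input places hold enough tokens, and a machine failure removes a resource token. This toy sketch is not the paper's collaborative Petri net (CPN) model; the place and transition names are hypothetical.

```python
def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Fire a transition: consume tokens from input places, produce tokens
    # in output places; returns a new marking.
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def feasible(marking, sequence):
    # Check whether a firing sequence (list of (pre, post) transitions) can
    # run to completion from the given marking -- a toy stand-in for testing
    # whether work in process can still finish on failure-degraded resources.
    for pre, post in sequence:
        if not enabled(marking, pre):
            return False
        marking = fire(marking, pre, post)
    return True
```

Removing a machine token from the initial marking (modeling a failure) makes the same firing sequence infeasible, which is the effect the feasibility analysis must detect.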


Visual Servoing of a Conventional CNC Machine Using an External Axis Controller
Daniel Hanafi, Guy Rodnay, Michal Tordon, Jayantha Katupitiya
Pages: 96-101
ABSTRACT:
This paper presents the implementation of an external axis control system on a conventional CNC machine so that the machine can be actively controlled in response to sensors such as vision and force. The controller that runs on an external computer has direct access to the CNC controller for machine position sensing. The control signals to the machine are sent through purpose built circuitry via the machine’s manual pulse generator (MPG) inputs. To demonstrate the accuracy and performance of the control system, it was used to visually track the profile of a mandrel used for shear spinning. The implemented system eliminates the parallax error and the need to use an accurate pixel resolution. The raw tracking data is processed by a curvature detection algorithm that detects linear and circular segments and segment transitions. The results show that the visual tracking system provides accurate tracking results that are well within the tolerances used in the industry.


Developing an Internet Oriented Platform for Earthquake Engineering Application and Web-based Virtual Reality Simulation System for Seismic hazards: Towards Disaster Mitigation in Metropolises
Ali Alaghehbandian, Ping Zhu, Masato Abe, Junji Kiyono
Pages: 102-107
ABSTRACT:
This paper reviews the state of the art in risk communication to the public, with an emphasis on the simulation of seismic hazards using VRML. The rapid growth of computer technologies, especially the Internet, provides new means to deal with engineering and social problems that were hard to solve in traditional ways. This paper presents a prototype of an Internet-based application platform using VR (Virtual Reality) for civil engineering, aimed at building an information system for risk communication on seismic hazards, currently applied to the case of bridge structures.