A Novel Method of Estimating Statistically Matched Wavelet: Part 1 - Compactly Supported Wavelet
Anubha Gupta, ShivDutt Joshi, Surendra Prasad
The issue of finding a wavelet matched to a signal has been addressed by various researchers in the past. This paper presents a new method of estimating a wavelet that is matched to a given signal in the statistical sense. The key idea lies in estimating the analysis wavelet filter from a given signal; the filter is similar to a sharpening filter used in image enhancement. The output of the analysis wavelet filter branch after decimation is written in terms of the filter weights and the input signal samples. It is then viewed as equivalent to the difference between the middle sample and a smoothed estimate of it from the neighborhood, which then needs to be minimized. To achieve this, the minimum mean square error (MMSE) criterion is employed using the autocorrelation function of the input signal. Since the wavelet expansion acts like a Karhunen-Loève-type expansion for generalized 1/f processes, the given signal is assumed to be a sample function of an nth-order fractional Brownian motion. Its autocorrelation function is used with the MMSE criterion to estimate the analysis wavelet filter. Next, a method is proposed to design a 2-band FIR perfect reconstruction biorthogonal filter bank. This results in a compactly supported wavelet matched statistically to the given signal. Further, it is shown that a compactly supported wavelet with a desired support can be designed from a given signal. The theory is supported with a number of simulation examples.
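The MMSE step can be sketched for the simplest case: a 3-tap analysis filter whose output is the middle sample minus its best linear estimate from the two neighbours. This is only a minimal illustration, not the authors' method: the 3-tap shape, the closed-form solution, and the random-walk stand-in for a fractional Brownian motion sample path are all assumptions for the sketch.

```python
import random

def autocorr(x, maxlag):
    # Biased sample autocorrelation r[k] = (1/N) * sum_n x[n] * x[n+k]
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n
            for k in range(maxlag + 1)]

def matched_highpass(x):
    """Estimate a 3-tap analysis (highpass) filter h = [-a, 1, -a] whose
    output is the middle sample minus its MMSE linear estimate from the
    two neighbours.  Minimising E[(x[n] - a*x[n-1] - a*x[n+1])^2] over a
    gives, by symmetry of the normal equations, a = r1 / (r0 + r2)."""
    r0, r1, r2 = autocorr(x, 2)
    a = r1 / (r0 + r2)
    return [-a, 1.0, -a]

random.seed(0)
# Crude stand-in for a sample path of a 1/f-type process: a random walk.
walk, s = [], 0.0
for _ in range(4096):
    s += random.gauss(0.0, 1.0)
    walk.append(s)

h = matched_highpass(walk)
# For a strongly correlated signal, the predictor weight -h[0] is near 0.5.
```

For white noise the same formula gives a near 0, i.e. the filter degenerates towards a pure delta, which matches the intuition that there is nothing to smooth away.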
An Experiment of Evaluating Software Understandability
Shinji UCHIDA, Kazuyuki SHIMA
Software understandability is one of the important characteristics of software quality because it can influence the cost and reliability of software evolution in reuse or maintenance. However, it is difficult to evaluate software understandability because understanding is an internal human process. We therefore propose “software overhaul” as a method for externalizing the process of understanding software systems, and we propose a probabilistic model for evaluating software understandability based on it. This paper presents an experiment in evaluating software understandability using this probabilistic model.
Automating XML Markup using Machine Learning Techniques
Shazia Akhtar, Ronan Reilly, John Dunnion
In this paper we present a novel system for automatically marking up text documents in XML. The system uses the Self-Organising Map (SOM) algorithm in conjunction with an inductive learning algorithm, C5.0. The SOM algorithm clusters the XML marked-up documents on a two-dimensional map such that documents with similar content are placed close to each other. The C5.0 algorithm learns and applies markup rules derived from the nearest SOM neighbours of an unmarked document. The system is designed to be adaptive, so that it learns from errors in order to improve the markup of the resulting document. Experiments show that our system provides high accuracy and demonstrate that our approach is practical and feasible.
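The clustering stage can be illustrated with a minimal SOM: grid cells hold weight vectors, and at each step the best-matching unit and its grid neighbours are pulled towards the input, so similar inputs end up at nearby cells. This is a generic textbook sketch, not the paper's system; the toy term-frequency vectors, grid size, and schedule parameters are all assumptions.

```python
import math
import random

def bmu(grid, v):
    """Best-matching unit: the grid cell whose weight vector is closest to v."""
    best, best_d = (0, 0), float("inf")
    for i, row in enumerate(grid):
        for j, w in enumerate(row):
            d = sum((a - b) ** 2 for a, b in zip(w, v))
            if d < best_d:
                best, best_d = (i, j), d
    return best

def train_som(vectors, rows=4, cols=4, iters=400, seed=1):
    """Minimal Self-Organising Map training loop with a decaying learning
    rate and a shrinking Gaussian neighbourhood around the BMU."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    grid = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
            for _ in range(rows)]
    for t in range(iters):
        lr = 0.5 * (1.0 - t / iters)                     # decaying learning rate
        radius = 1.0 + (rows / 2.0) * (1.0 - t / iters)  # shrinking neighbourhood
        v = rng.choice(vectors)
        bi, bj = bmu(grid, v)
        for i in range(rows):
            for j in range(cols):
                d = math.hypot(i - bi, j - bj)
                if d <= radius:
                    g = math.exp(-(d * d) / (2.0 * radius * radius))
                    w = grid[i][j]
                    for k in range(dim):
                        w[k] += lr * g * (v[k] - w[k])
    return grid

# Toy "documents": term-frequency vectors from two distinct topics.
docs = [[1, 1, 0, 0], [1, 0.9, 0.1, 0], [0.9, 1, 0, 0.1],   # topic A
        [0, 0, 1, 1], [0.1, 0, 1, 0.9], [0, 0.1, 0.9, 1]]   # topic B
som = train_som(docs)
cells = [bmu(som, d) for d in docs]
```

After training, documents from the same topic map to the same or adjacent cells, while the two topics land in different regions of the map, which is the property the markup-rule learning stage relies on.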
Optimizing Concurrent M3-Transactions: A Fuzzy Constraint Satisfaction Approach
Peng LI, Zhonghang XIA, I-Ling YEN
Due to their high connectivity and great convenience, many e-commerce application systems have a high transaction volume. Consequently, the system state changes rapidly, and customers are likely to issue transactions based on out-of-date state information. Thus, the potential for transaction abortion increases greatly. To address this problem, we previously proposed the M3-transaction model. An M3-transaction is a generalized transaction in which users can state their preferences in a request by specifying multiple criteria and optional data resources simultaneously within one transaction.
In this paper, we introduce transaction grouping and group evaluation techniques. We consider evaluating together a group of M3-transactions that arrive at the system within a short duration. The system makes optimal decisions in allocating data to transactions to achieve better customer satisfaction and a lower transaction failure rate. We apply a fuzzy constraint satisfaction approach for decision-making. We also conduct experimental studies to evaluate the performance of our approach. The results show that M3-transactions with group evaluation are more resilient to failure and yield much better performance than the traditional transaction model.
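The flavour of fuzzy-constraint group evaluation can be sketched as follows: each transaction rates its acceptable resources with a satisfaction degree in [0, 1], and the system picks the assignment that maximises the minimum degree across the group. This is a generic maximin fuzzy-CSP toy, not the paper's algorithm; the data layout and the exhaustive search are assumptions for illustration only.

```python
from itertools import product

def fuzzy_csp_assign(transactions, resources):
    """Each transaction is a dict {resource: satisfaction degree in [0, 1]}.
    Choose one distinct resource per transaction maximising the minimum
    satisfaction degree (the standard maximin fuzzy-CSP criterion)."""
    best, best_score = None, -1.0
    for combo in product(resources, repeat=len(transactions)):
        if len(set(combo)) < len(combo):
            continue  # each resource can serve only one transaction
        score = min(t.get(r, 0.0) for t, r in zip(transactions, combo))
        if score > best_score:
            best, best_score = combo, score
    return best, best_score

# Two transactions, each with graded preferences over two data resources.
txns = [{"r1": 0.9, "r2": 0.4},
        {"r1": 0.8, "r2": 0.7}]
assignment, score = fuzzy_csp_assign(txns, ["r1", "r2"])
```

Here the maximin choice gives r1 to the first transaction and r2 to the second (worst degree 0.7), even though the second transaction prefers r1, because the alternative would leave the first transaction at 0.4.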
Performance Analysis of Information Services in a Grid Environment
Giovanni Aloisio, Massimo Cafaro, Sandro Fiore, Italo Epicoco, Maria Mirto, Silvia Mocavero
The Information Service is a fundamental component in a grid environment. It has to meet many requirements, such as access to static and dynamic information related to grid resources, efficient and secure access to dynamic data, decentralized maintenance, and fault tolerance, in order to achieve good performance, scalability, security and extensibility. Currently there are two major approaches: one is based on a directory infrastructure, and the other is a novel approach that exploits a relational DBMS. In this paper we present a performance comparison between the Grid Resource Information Service (GRIS) and the Local Dynamic Grid Catalog relational information service (LDGC), also providing information about two projects (iGrid and Grid Relational Catalog) in the grid data management area.
Automatic Feature Interaction Analysis in PacoSuite
Wim Vanderperren, Davy Suvée, Bart Verheecke, María Agustina Cibrán, Viviane Jonckers
In this paper, we build upon previous work that aims at incorporating aspect-oriented ideas into component-based software development. In that research, a composition adapter was proposed to capture crosscutting concerns in the PacoSuite component-based methodology. A composition adapter is visually applied onto a given component composition, and the changes it describes are applied automatically. Stacking multiple composition adapters onto the same component composition can, however, lead to unpredictable and undesired side effects. In this paper, we propose a solution for this issue, widely known as the feature interaction problem. We present a classification of the different interaction levels among composition adapters and the algorithms required to verify them. The proposed algorithms are, however, exponential in nature and depend on both the composition adapters and the component composition as a whole. To enhance the performance of our feature interaction analysis, we present a set of theorems that define the interaction levels solely in terms of the properties of the composition adapters themselves.
Fraction-Integer Method (FIM) for Calculating Multiplicative Inverse
The multiplicative inverse is a crucial operation in public key cryptography, which has given rise to the need to generate a related public/private pair of numbers, each of which is the inverse of the other. One of the best methods for calculating the multiplicative inverse is the Extended Euclidean method. In this paper we propose a new algorithm for calculating the inverse, based on repeatedly adding two fractions until an integer is obtained.
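One plausible reading of the fraction-addition idea (the abstract does not spell out the algorithm, so this sketch is an assumption): to invert a modulo m, keep adding m/a to 1/a; the first sum (1 + k*m)/a that is an integer is the inverse, since a * ((1 + k*m)/a) = k*m + 1 ≡ 1 (mod m).

```python
def fim_inverse(a, m):
    """Sketch of a fraction-addition multiplicative inverse.
    Keep adding m to a running numerator that starts at 1; the first
    total divisible by a gives inverse = total // a, because
    a * (total // a) = k*m + 1 == 1 (mod m).
    Assumes 0 < a < m and gcd(a, m) == 1 (otherwise no inverse exists)."""
    total = 1
    while total % a != 0:
        total += m
    return (total // a) % m

# Example: the inverse of 17 modulo 43.
inv = fim_inverse(17, 43)
# Check the defining property: (17 * inv) % 43 == 1.
```

Note that this naive loop runs up to a - 1 iterations, so unlike the Extended Euclidean method it is not practical at cryptographic sizes; it only illustrates the arithmetic identity behind the approach.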
Using Ontology to Drive an Adaptive Learning Interface
Andrew Crapo, Amy Aragones, Joseph Price, Anil Varma
Intelligent, adaptive interfaces are a prerequisite for elevating computer-based applications to the realm of collaborative decision support in complex, relatively open-ended domains such as logistics and planning. This is because the composition and effective presentation of even the most useful information must be tailored to constantly changing circumstances. Our objective is not only to achieve an adaptive human-machine interface, but also to imbue the software with a significant portion of the responsibility for effectively controlling the adaptation, freeing the user from unnecessary distraction and making the human-machine relationship more collaborative in nature. The foundational concepts of interface adaptation are discussed, and a specific logistics application is described as an example.
Clinical Assessment of Cardiovascular and Autonomic Function
Diego Benitez, Patrick Gaydecki, Amir Zaidi
This paper presents a non-invasive virtual medical instrument for the clinical assessment of cardiovascular and autonomic function. The virtual instrument was developed with the aim of analysing and understanding the physiological changes that occur in the heart and circulation during vasovagal blackout attacks. The automated virtual instrument allows impedance cardiography analysis, time- and frequency-domain heart rate and blood pressure variability analysis, invasive and non-invasive baroreflex sensitivity assessment, and forearm blood flow measurements. Using this virtual instrument, five control subjects (3 male, mean age 30.6 ± 5.4) and five vasovagal syncope sufferers (2 male, mean age 38.6 ± 6.3) were studied to identify differences between the two groups in their responses to tilt-induced syncope. The results obtained suggest that there are fundamental differences in the physiological responses to orthostatic stress between vasovagal patients and controls, which are evident before the onset of major haemodynamic changes.
Collocational Networks Supporting Strategic Planning of Brand Communication: Analysis of Quarterly Reports of Telecommunication Companies
Pentti Järvi, Hannu Vanharanta
This study addresses the analysis of quarterly reports from a brand-theoretical viewpoint. It approaches the issue through a method that combines a quantitative tool based on linguistic theory with the qualitative decisions of the researchers. The research objects of this study are two quarterly reports from each of three telecommunications companies: Ericsson, Motorola and Nokia. The method used is a collocational network. The analyses show that there are differences in communication and message strategies among the investigated companies, as well as changes within each company over a fairly short period.
Robust Color Choice for Small-size League RoboCup Competition
Qiang Zhou, Limin Ma, David Chelberg, David Parrott
In this paper, the problem of choosing a set of maximally separable colors in a given environment is discussed. The proposed method models the process of generating theoretically computed best colors, printing these colors on a color printer, and imaging the printed colors with a camera as an integrated framework. Thus, it provides a feasible way to generate the most separable colors in practice for a given environment with a given set of equipment. A real-world application (robust color choice for the small-size league RoboCup competition) is used as an example to illustrate the proposed method. Experimental results on this example show the competitiveness of the colors learned by our algorithm compared to the colors adopted by other teams, which were chosen via an extensive trial-and-error process using standard color papers.
A Case Study of Work-Integrated Learning
An evaluation study of work-integrated e-learning in the county administration of Sweden is reported and discussed. A web-based prototype (Diabas) for in-house education dealing with the official registers was developed and tested. The MOA-L model was used as a frame of reference for the evaluation study. The model focuses especially on the consequences for the work situation, the work process, and the quality of the service to the client. The situation before and after the learners had taken the course was studied and compared. The initial analysis of the present situation regarding these aspects was seen as very important for the development process of the course. Similarities between the development of work-integrated learning courses and traditional systems development work were analysed and discussed. Cultural aspects and management policies were seen as very important in motivating the learners to attend the course. The learners were on the whole satisfied with the pilot course, and its flexible form was seen as important. The work situation and the work flow need to be adapted in order to facilitate the use of the new knowledge after the course.
Efficient Radio Resource Allocation in a GSM and GPRS Cellular Network
David Vannucci, Peter Chitamu
This paper investigates the effect of various radio resource allocation strategies in a GSM/GPRS cellular network. The most efficient resource allocation is analysed as a function of the proportion of circuit-switched voice and packet-switched data load. The Grade of Service and average packet delay are investigated as functions of the load, packet size and call duration. Additionally, the feasibility of using voice over Internet Protocol instead of circuit-switched voice is investigated as a means to increase subscriber capacity per base station. The work is motivated firstly by the complexity of having both circuit-switched and packet-switched connectivity in a GSM/GPRS mobile cellular system, and secondly by the fact that exclusively packet-based access on GSM/GPRS has the potential to increase the efficiency of resource utilisation by suitably varying the channel allocation to exploit the characteristics of voice and data traffic.
Towards Multimodal Error Management: Experimental Evaluation of User Strategies in the Event of Faulty Application Behavior in Automotive Environments
Gregor McGlaun, Frank Althoff, Manfred Lang, Gerhard Rigoll
In this work, we present the results of a study analyzing the reactions of subjects to simulated errors of a dedicated in-car interface for controlling infotainment and communication services. The test persons could operate the system using different input modalities, such as natural or command speech as well as head and hand gestures, or classical tactile paradigms. In various situational contexts, we scrutinized the interaction patterns the test participants applied to complete different operation tasks. Moreover, we evaluated individual user behavior concerning modality transitions and individual fallback strategies in case of system errors. Two different error types (Hidden System Errors and Apparent System Errors) were provoked. We found that initially, i.e. with the system working properly, most users prefer tactile or speech interaction. In case of Hidden System Errors, mostly changes from speech to tactile interaction, and vice versa, occurred. Concerning Apparent System Errors, 87% of the subjects automatically interrupted or cancelled their input procedure. Of the test persons who continued the interaction once the cause of the faulty system behavior was gone, 73% strictly kept the selected modality. Regarding the given input vocabulary, none of the subjects selected head or hand gesture input as the leading modality.
Generic Simulator Environment for Realistic Simulation - Autonomous Entity Proof and Emotion in Decision Making
Mickaël Camus, Nabil El Kadhi
Simulation is commonly used as an evaluation and testing tool. Many sectors are concerned, such as the European Space Agency or European defence. It is important to make sure that a project is error-free before continuing it. The difficulty lies in developing a realistic environment for the simulation and the execution of a scenario. This paper presents PALOMA, a Generic Simulator Environment. The project is based essentially on Chaos Theory and Complex Systems to create and direct an environment for a simulation. An important point is its generic aspect: PALOMA will be able to create an environment for different sectors (aerospace, biology, mathematics, ...). PALOMA includes six components: the Simulation Engine, the Direction Module, the Environment Generator, the Natural Behavior Restriction, the Communication API and the User API. Three languages are used to develop this simulator: SCHEME for the direction language, C/C++ for the development of modules, and OZ/MOZART for the heart of PALOMA.
Information Technologies for Civilian Bioterrorism Response
Paul Y. Oh, Ruifeng Zhang, Charles Mode, Sherri Jurgens
To improve the level of preparedness against potential bioterrorist incidents, civilian medical communities in the United States have much to do. Developing effective responses hinges on information technologies, namely detection, isolation, communications and education. This paper describes our efforts in integrating these technologies at the National Bioterrorism Civilian Medical Response Center (CiMeRC) at Drexel University. Our particular focus involves scenarios in which biochemical agents are released in the public transportation system of a major metropolitan city.