A Hybrid Data Compression Scheme for Improved VNC
Xiaozheng (Jane) Zhang, Hirofumi Takahashi
Virtual Network Computing (VNC) has emerged as a promising technology in distributed computing environments since its invention in the late nineties. Successful application of VNC requires rapid data transfer from one machine to another over a TCP/IP network connection. However, the transfer of screen data consumes considerable network bandwidth, and current data encoding schemes for VNC are far from ideal. This paper seeks to improve screen data compression techniques to enable VNC over slow connections while providing reasonable speed and image quality.
In this paper, a hybrid technique is proposed for improving coding efficiency. The algorithm first divides a screen image into pre-defined regions and applies an encoding scheme to each area according to the region's characteristics. Second, the correlation of screen data in consecutive frames is exploited by detecting multiple occurrences of similar image content. The improved results are demonstrated in a dynamic environment with various screen image types and desktop manipulations.
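The "divide into regions, then encode each according to its characteristics" idea can be sketched as follows. This is an illustrative toy, not the paper's actual codec: the classification thresholds and the choice of run-length encoding for text-like tiles are assumptions for demonstration.

```python
# Illustrative sketch (not the paper's actual codec): classify fixed-size
# screen tiles by their colour statistics and pick an encoder per region.

def classify_tile(pixels):
    """Heuristic region type from the set of colours in a tile."""
    colours = set(pixels)
    if len(colours) == 1:
        return "solid"      # flat background: cheapest encoding
    if len(colours) <= 8:
        return "text"       # few colours: palette/run-length works well
    return "picture"        # many colours: a lossy coder would be used

def run_length_encode(pixels):
    """Simple RLE: list of [value, run] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def encode_tile(pixels):
    kind = classify_tile(pixels)
    if kind == "solid":
        return (kind, pixels[0])
    if kind == "text":
        return (kind, run_length_encode(pixels))
    return (kind, list(pixels))   # placeholder for a lossy picture coder
```

In a real VNC encoder the per-region choice would be among the actual VNC encodings (raw, RRE, hextile, tight), but the dispatch structure is the same.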
A Method to Generate Freeform Curves from a Hand-drawn Sketch
Tetsuzo Kuragano, Akira Yamaguchi
When designers begin a product design, they create their ideas and expand them. This process is performed on paper, and the designers' hand-drawn lines are called sketches. If designers' hand-drawn sketches could be realized as real curves, it would be effective in shortening the design period. We have developed a method to extract degree-five Bezier curves from a hand-drawn sketch. The basic techniques to detect curves in a hand-drawn sketch are described. First, light intensity transformation, binarization of the hand-drawn sketch, and feature-based erosion and dilation to smooth the edges of the binary sketch image are described. Then, line segment determination using the detected edges is described, followed by the generation of a degree-five Bezier curve from the determined line segments. A curve shape modification algorithm is described to reconstruct a degree-five Bezier curve. Examples of degree-five Bezier curves with fair curvature based on a sketch are given.
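As background for the degree-five Bezier curves the method extracts: a degree-five curve is defined by six control points and can be evaluated with de Casteljau's algorithm. A minimal sketch (the control points below are arbitrary illustrative values):

```python
# Evaluate a Bezier curve of any degree by repeated linear interpolation
# (de Casteljau's algorithm). Six control points give a degree-five curve.

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Six control points define a quintic (degree-five) curve.
quintic = [(0, 0), (1, 2), (2, 3), (3, 3), (4, 2), (5, 0)]
```

The fitting step of the paper would choose the six control points so the curve approximates the detected line segments; evaluation as above is then used to render or fair the result.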
A Non-standard Anisotropic Diffusion for Speckle Noise Removal
Hyeona Lim, Thomas Neil Williams
The main objective of this article is to develop a non-standard partial differential equation-based anisotropic diffusion model for efficient edge-preserving denoising of speckle-noised images. The standard total variation (TV)-based energy functional does not account for the multiplicative nature of speckle noise, which makes it inappropriate for speckle noise removal. Moreover, TV-based models can easily lose fine structures and produce non-physical dissipation during the noise removal process. The principal feature of this article is the introduction of a new coefficient for the non-linear diffusion term of the Euler-Lagrange equation corresponding to the minimization of the energy functional. Combining the new model with a texture-free residual parametrization enables us to overcome the drawbacks arising from use of the standard TV-based model. The numerical results indicate the effectiveness and robustness of the new model.
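The general shape of such an edge-preserving diffusion can be sketched with one explicit time step. Here the classical Perona-Malik conductance stands in for the paper's new diffusion coefficient; the structure of the update (diffuse strongly in flat areas, weakly across edges) is the same.

```python
# One explicit time step of edge-preserving anisotropic diffusion on a
# grayscale image (list of lists). The conductance g = 1/(1 + (d/K)^2)
# is the classical Perona-Malik choice, used here only as a stand-in.

def diffuse_step(img, K=10.0, dt=0.2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            flux = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                d = img[i + di][j + dj] - img[i][j]   # difference to neighbour
                g = 1.0 / (1.0 + (d / K) ** 2)        # small across strong edges
                flux += g * d
            out[i][j] = img[i][j] + dt * flux
    return out
```

Iterating such steps smooths noise while large intensity jumps (edges) diffuse much more slowly.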
A User Authentication Based on Personal History - A User Authentication System Using E-mail History -
Masakatsu Nishigaki, Makoto Koike
This paper proposes a user authentication scheme based on the personal history of each user. Here, authentication is done by answering questions about the history of the user's daily life. Users do not have to memorize any password, since the passwords are what users already know by experience. In addition, everyday-life experience accumulates day by day, so the questions can change on every authentication trial. In this paper, a user authentication system using the user's e-mail history is shown as a prototype of our proposal, and some basic experiments to evaluate the availability of the system are carried out.
A Visual Cryptography Based Watermark Technology for Individual and Group Images
Azzam Sleit, Adel Abusitta
The ease with which digital information can be duplicated and distributed has led to the need for effective copyright protection tools. Various techniques, including watermarking, have been introduced in an attempt to address these growing concerns. Most watermarking algorithms call for a piece of information to be hidden directly in the media content, in such a way that it is imperceptible to a human observer but detectable by a computer. This paper presents an improved cryptographic watermark method based on the Hwang and Naor-Shamir [1, 2] approaches. The technique does not require the watermark pattern to be embedded into the original digital image. Instead, verification information is generated and used to validate the ownership of the image or a group of images. The watermark pattern can be any bitmap image. Experimental results show that the proposed method can recover the watermark pattern from the marked image (or group of images) even if the original digital image, or any member of the image group, undergoes major changes such as rotation, scaling, and distortion.
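The core trick of not embedding the watermark can be sketched with a share-based construction. Note the hash-derived bit pattern below is a simplifying assumption for illustration: a real scheme of this kind derives a share from robust image features so that recovery survives rotation and scaling, which a raw hash does not.

```python
# Sketch of verification-share watermarking: a bit pattern derived from
# the image is XORed with the watermark to form public verification
# information; recombining the two recovers the watermark without ever
# modifying the image itself.

import hashlib

def feature_bits(image_bytes, n):
    """Deterministic bit pattern derived from the image content (n <= 256)."""
    digest = hashlib.sha256(image_bytes).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

def make_verification(watermark_bits, image_bytes):
    share = feature_bits(image_bytes, len(watermark_bits))
    return [w ^ s for w, s in zip(watermark_bits, share)]

def recover_watermark(verification, image_bytes):
    share = feature_bits(image_bytes, len(verification))
    return [v ^ s for v, s in zip(verification, share)]
```

Recovery works because XOR is an involution: applying the same share twice cancels it out.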
Aggregation of Composition States for Markov Estimation in Level 2 Fusion
Stephen Stubberud, Kathleen Kramer
In sensor fusion, the use of composition information can help define and understand relationships between targets. This process, part of the Situational Assessment problem, also referred to as Level 2 fusion, can be quite complex when using standard classification approaches such as the Bayesian taxonomy. Determination of the number and type of elements that comprise a group can vary from report to report based on the type of sensors, the environment, and the behavior of the group. Estimation of group composition that can take these factors into account has been developed using a Markov chain approach. If the number of potential target classes is significant and the various standard group compositions are numerous, the computational complexity becomes unmanageable. This effort investigates a useful and computationally tractable Level 2 composition state estimate based upon the use of state aggregation.
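The aggregation step can be sketched as lumping fine-grained composition states into blocks and aggregating the transition matrix over them. The uniform weighting of states within each block is an assumption for illustration; in general the weights would reflect the current state probabilities.

```python
# Sketch of Markov state aggregation: lump composition states into blocks
# and build a smaller transition matrix over the blocks (uniform weights
# within each source block, an illustrative assumption).

def aggregate(P, partition):
    """P: n x n transition matrix (lists). partition: list of index blocks."""
    m = len(partition)
    Q = [[0.0] * m for _ in range(m)]
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            # average over source states in block a, sum over targets in b
            Q[a][b] = sum(P[i][j] for i in block_a for j in block_b) / len(block_a)
    return Q
```

The aggregated matrix is still row-stochastic, so the Level 2 estimate can run unchanged over the much smaller state space.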
An Algorithm for Filtering Electrocardiograms to Improve Nonlinear Feature Extraction
Mohammad Bahmanyar, Wamadeva Balachandran
This paper introduces an algorithm for removing high-frequency noise components from electrocardiograms (ECGs) based on a Savitzky-Golay finite impulse response (FIR) smoothing filter. The peaks of the R waves and the points at which the Q waves end and the S waves start are detected for all beats. These points are used to separate out the low-amplitude parts of the ECG in each beat, which are most affected by high-frequency noise. The Savitzky-Golay smoothing algorithm is then applied to these parts of the ECG, and the resulting filtered signals are added back to their corresponding QRS parts. The effect of high-frequency noise removal on nonlinear features such as the largest Lyapunov exponent and the minimum embedding dimension is also investigated. The performance of the filter is compared with that of an equiripple low-pass filter and wavelet de-noising.
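Savitzky-Golay smoothing is a convolution whose weights come from a local least-squares polynomial fit. A minimal sketch using the classical 5-point, degree-2 weights (-3, 12, 17, 12, -3)/35 (the paper's window length and polynomial degree are not specified here, so these standard values are illustrative):

```python
# Minimal Savitzky-Golay smoothing with the classical 5-point, degree-2
# convolution weights; such a filter preserves low-degree polynomial
# shape exactly while suppressing high-frequency noise.

SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def savgol5(signal):
    """Smooth the interior samples; endpoints are left unchanged."""
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i + k - 2] for k, c in enumerate(SG5))
    return out
```

In the paper's scheme, such a filter would be applied only to the low-amplitude segments between QRS complexes, then the untouched QRS parts are spliced back in.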
Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network
Guan-Chun Luh, Wei-Wen Liu
This paper proposes a potential field immune network (PFIN) for dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collisions for a robot moving in the time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.
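The potential-field component of such a scheme can be sketched as an attractive pull toward the target plus repulsive pushes from obstacles inside an influence radius. The gains and radius below are illustrative, and the immune-network and Velocity Obstacle layers of the paper are not modeled here.

```python
# Sketch of a classical potential-field step: attractive force toward the
# target, repulsive force from each obstacle within radius rho0.

import math

def potential_step(robot, target, obstacles, ka=1.0, kr=50.0, rho0=2.0):
    fx = ka * (target[0] - robot[0])          # attractive component
    fy = ka * (target[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < rho0:                      # repulsion only inside radius
            mag = kr * (1 / d - 1 / rho0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

The robot then moves along the resulting force vector; the PFIN's contribution is to modulate these behaviors through the immune network when obstacles and targets move.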
Implementation of Hierarchical Authorization For A Web-Based Digital Library
Andreas Geyer-Schulz, Anke Thede
Access control mechanisms are needed in almost every system nowadays to control what kind of access each user has to which resources, and when. On the one hand, access control systems need to be flexible enough to allow the definition of the access rules that are actually needed. On the other hand, they must be easy to administer, to prevent rules from being in place without the administrator realizing it. This is particularly difficult for systems such as a digital library that require fine-grained access rules specifying access control at the document level. We present the implementation and architecture of a system that allows the definition of access rights down to the level of single documents and users. We use hierarchies on users and roles, hierarchies on access rights, and hierarchies on documents and document groups. These hierarchies allow a maximum of flexibility while keeping the system easy to administer. Our access control system supports positive as well as negative permissions.
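The combination of hierarchies with positive and negative permissions can be sketched as follows. The data layout and the "negative overrides positive" rule are illustrative assumptions, not the paper's exact semantics.

```python
# Sketch of hierarchical authorization: rights granted or denied on roles
# and document groups are inherited down both hierarchies, and a negative
# permission overrides a positive one (an assumed conflict-resolution rule).

def ancestors(node, parent):
    """node plus all its ancestors in a parent-map hierarchy."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def allowed(user_role, doc, role_parent, doc_parent, grants, denials):
    pairs = [(r, d) for r in ancestors(user_role, role_parent)
                    for d in ancestors(doc, doc_parent)]
    if any(p in denials for p in pairs):      # negative permission wins
        return False
    return any(p in grants for p in pairs)
```

A single grant at the (role, document-group) level then covers many (user, document) pairs, which is what keeps the rule set small enough to administer.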
Monte Carlo Simulation of an American Option
We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques, we can simultaneously obtain an estimate of the option value together with estimates of the sensitivities of the option value to various parameters of the model. After deriving the gradient estimates, we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using the example of an American call option with a single dividend that is analytically tractable. In particular, we incorporate estimates for the gradient with respect to the early exercise threshold level.
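The overall procedure can be sketched in miniature: Monte Carlo valuation of a call with a single early-exercise threshold, a gradient estimate in the threshold, and a stochastic-approximation update of the threshold. All model parameters below are illustrative, and a common-random-numbers finite difference stands in for the paper's derived gradient estimators.

```python
# Sketch: MC value of a call that is exercised at T/2 if the price
# exceeds a threshold theta, plus one stochastic-approximation step on
# theta using a CRN finite-difference gradient (illustrative only).

import math, random

def value(theta, s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
          n_paths=20000, seed=1):
    """Exercise at T/2 if the price exceeds theta, else hold to T."""
    rng = random.Random(seed)           # common random numbers across theta
    half = T / 2
    total = 0.0
    for _ in range(n_paths):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        s_half = s0 * math.exp((r - sigma ** 2 / 2) * half
                               + sigma * math.sqrt(half) * z1)
        if s_half >= theta:             # early exercise
            total += math.exp(-r * half) * max(s_half - K, 0.0)
        else:
            s_T = s_half * math.exp((r - sigma ** 2 / 2) * half
                                    + sigma * math.sqrt(half) * z2)
            total += math.exp(-r * T) * max(s_T - K, 0.0)
    return total / n_paths

def sa_update(theta, step=5.0, h=1.0):
    grad = (value(theta + h) - value(theta - h)) / (2 * h)
    return theta + step * grad          # ascend toward a better threshold
```

Iterating `sa_update` moves the threshold toward the level that maximizes the option value, which is the stochastic-approximation pricing idea in the abstract (the dividend feature is omitted here for brevity).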
Neural Network for Principal Component Analysis with Applications in Image Compression
Luminita State, Catalina Lucia Cocianu, Vlamos Panayiotis
Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection make it possible to avoid the "curse of dimensionality", improve the generalization ability of classifiers, and significantly reduce the computational requirements of pattern classifiers. During the past decade, a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them being adaptive in nature and well-suited for the many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loeve transform, is a well-known statistical method for feature extraction, data compression, and multivariate data projection, and it has been broadly used in a wide range of signal and image processing, pattern recognition, and data analysis applications.
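The simplest neural realization of PCA is a single linear neuron trained with Oja's rule, whose weight vector converges to the first principal component (the unit eigenvector of the data covariance). A minimal sketch with illustrative 2-D data spread along the (1, 1) direction:

```python
# Single-neuron PCA network trained with Oja's rule:
#   y = w . x,   w <- w + eta * y * (x - y * w)
# The subtraction of y*w keeps ||w|| near 1 and drives w to the
# dominant eigenvector of the (zero-mean) data covariance.

import random

def oja_train(data, eta=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in data[0]]
    for _ in range(epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))      # neuron output
            w = [wi + eta * y * (xi - y * wi)             # Oja's update
                 for wi, xi in zip(w, x)]
    return w

# Zero-mean data spread mostly along (1, 1): w should align with it.
data = [(1.0, 1.0), (-1.0, -1.0), (2.0, 2.1), (-2.0, -1.9),
        (0.5, 0.4), (-0.5, -0.6)]
```

For image compression, a bank of such neurons (e.g. via Sanger's generalized Hebbian algorithm) extracts the top components, and image blocks are stored as their projections onto those components.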
Refactoring Information Systems - A Formal Framework -
Michael Löwe, Harald König, Michael Peters, Christoph Schulz
We introduce a formal framework for the refactoring of complete information systems, i.e., the data model and the data itself. Within this framework, model transformations are uniquely extended to the data level and result in data migrations that protect the information contained in the data. The framework is described using general and abstract notions of category theory. Two concrete instances of this framework show the applicability of the abstract concept to concrete object models. In the first instance, we only handle addition, renaming, and removal of model objects. The second instance can also handle folding and unfolding within object compositions. Finally, we discuss what an instance of the framework that is also able to handle inheritance structures should look like.
Using Genetic Algorithm for Eye Detection and Tracking in Video Sequence
Takuya Akashi, Yuji Wakasa, Kanya Tanaka, Stephen Karungaru, Minoru Fukumi
We propose a high-speed, size- and orientation-invariant eye tracking method, which can acquire numerical parameters representing the size and orientation of the eye. In this paper, we address the high tolerance to human head movement and the real-time processing that are needed for many applications, such as eye gaze tracking. The generality of the method is also important. We use template matching with a genetic algorithm in order to overcome these problems. A high-speed, accurate tracking scheme using Evolutionary Video Processing for eye detection and tracking is proposed. Although a genetic algorithm is usually unsuitable for real-time processing, we achieve real-time processing here. The generality of the proposed method is provided by the artificial iris template used. In our simulations, eye tracking accuracy is 97.9%, with an average processing time of 28 milliseconds per frame.
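The evolutionary search idea can be sketched with a toy genetic algorithm: each chromosome encodes template position and scale, and the fitness would be the template-matching score against the image. The quadratic `score` below is a hypothetical stand-in for that image correlation, and all GA parameters are illustrative.

```python
# Toy GA over (x, y, scale): elitism, averaging crossover, Gaussian
# mutation. In the real method, fitness is the iris-template match score.

import random

def ga_search(fitness, bounds, pop_size=40, gens=120, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.2)   # crossover + mutation
                     for x, y in zip(a, b)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical "match score": best when the window is at (40, 25), scale 1.
def score(chrom):
    x, y, s = chrom
    return -((x - 40) ** 2 + (y - 25) ** 2 + 10 * (s - 1) ** 2)

best = ga_search(score, [(0, 64), (0, 48), (0.5, 2.0)])
```

Seeding each frame's population near the previous frame's solution is what makes such a search fast enough for real-time tracking.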
Universal Robot Hand Equipped with Tactile and Joint Torque Sensors: Development and Experiments on Stiffness Control and Object Recognition
Hiroyuki NAKAMOTO, Futoshi KOBAYASHI, Nobuaki IMAMURA, Hidenori SHIRASAWA, Fumio KOJIMA
Various humanoid robots have been developed, and multifunction robot hands that can be attached to those robots, like a human hand, are needed. However, a useful robot hand has not yet been developed, because there are many problems, such as the control of many degrees of freedom and the processing of enormous sensor outputs. To realize such a robot hand, we have developed a five-fingered robot hand. In this paper, the detailed structure of the developed robot hand is described. The robot hand has five multi-joint fingers equipped with joint torque sensors and tactile sensors. We report experimental results of stiffness control with the developed robot hand. Those results show that it is possible to change the stiffness of the joints. Moreover, we propose an object recognition method using the tactile sensors. The validity of that method is confirmed by experimental results.
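The joint-level stiffness control can be sketched with a simple PD law whose proportional gain sets the apparent joint stiffness; the gains and values below are illustrative, not the hand's actual controller.

```python
# Sketch of joint stiffness control: commanded torque from a PD law.
# Raising k_stiff makes the joint resist displacement more strongly,
# which is how the apparent stiffness is changed in software.

def stiffness_torque(q, q_des, q_dot, k_stiff, k_damp=0.1):
    """Commanded joint torque; larger k_stiff means a stiffer joint."""
    return k_stiff * (q_des - q) - k_damp * q_dot

# The same displacement produces more restoring torque at higher stiffness.
soft = stiffness_torque(q=0.2, q_des=0.0, q_dot=0.0, k_stiff=1.0)
stiff = stiffness_torque(q=0.2, q_des=0.0, q_dot=0.0, k_stiff=10.0)
```

The joint torque sensors make it possible to verify that the commanded and measured torques agree, which is what the stiffness-control experiments check.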
Unveiling the Domain Conflict – FOSS vs. IPR
Dr. Anurika Vaish, Dr. Shveta Singh, Mr. Abhishek Vaish
In the present knowledge-driven economic system, two of the most talked-about issues are Free and Open Source Software (FOSS) and Intellectual Property Rights (IPR), which exist at opposite poles. The prevailing question is about the relevance of each and the dominance of either one.
The paper probes issues of general and specific relevance of FOSS and IPR as suppliers of a certain set of utilities and benefits to the user. It also checks the validity of the claims of FOSS and its licensing procedure, comparing it with the usership obligations of IPR-protected products and services.
The premise of the paper is that both FOSS and IPR have to exist and complement each other, ensuring a strong presence of resources in both the public and private domains.
The paper works to validate whether a conflict between FOSS and IPR actually exists, or whether it is a mere false alarm raised by groups pursuing either cause.
Finally, the paper proposes to demarcate the areas of dominance of FOSS and IPR and to demonstrate their utility on the socio-economic front.