3D Interpolation Method for CT Images of the Lung Noriaki Asada, Mayumi Ogura Pages: 1-6
ABSTRACT: A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected while the heart is beating. The motion of the heart is therefore a special factor that must be taken into consideration during reconstruction. As an elastic body, the lung undergoes a repeating deformation synchronized to the heartbeat. If no special techniques are used when taking the CT images, the heartbeat causes discontinuities among neighboring images. A 3-D heart image is reconstructed from the numerous CT images in which both the heart and the lung appear. Although the outline of this reconstructed 3-D heart is quite unnatural, its envelope is fitted to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are placed at the same positions as the CT images. The CT images are thus geometrically transformed into the images that best fit the standard heart. Since correct transformation of the images is required, an area-oriented interpolation method we have proposed is used to interpolate the transformed images. An attempt to reconstruct a discontinuity-free 3-D lung image by this series of operations is shown. Applying the same geometric transformation to the original projection images is also proposed as a more advanced method.
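As a rough illustration of the pipeline described in this abstract, the minimal sketch below aligns each slice with an affine transform and interpolates an intermediate slice between aligned neighbours. The affine parameters, synthetic slices and plain intensity interpolation are assumptions standing in for the paper's envelope fitting and area-oriented interpolation.

```python
# Minimal sketch, not the authors' method: geometric alignment of a slice
# followed by interpolation between neighbouring aligned slices.
import numpy as np
from scipy.ndimage import affine_transform

def align_slice(slice_img, matrix, offset):
    """Geometrically transform one slice; matrix/offset are assumed to come
    from fitting the heart envelope in this slice to the standard heart."""
    return affine_transform(slice_img, matrix, offset=offset, order=1)

def interpolate_between(slice_a, slice_b, t=0.5):
    """Plain intensity interpolation between two aligned neighbouring slices
    (the paper's area-oriented interpolation is not reproduced here)."""
    return (1.0 - t) * slice_a + t * slice_b

# Toy usage with synthetic slices
a, b = np.random.rand(64, 64), np.random.rand(64, 64)
aligned_a = align_slice(a, np.eye(2), offset=(0.0, 0.0))
mid = interpolate_between(aligned_a, b)
```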
3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks Emmanouil Piperakis, Itsuo Kumazawa Pages: 7-11
ABSTRACT: In this paper, an experiment is conducted which shows that multi-layer feed-forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed encodes the vertex coordinates, the connectivity and the normal information in one compact form, converting the discrete, surface-based polygon representation into an analytic, solid one. Furthermore, the 3D object in its compressed neural form can be used directly for rendering, without decompression. The neural compression/representation is robust under 3D transformations without the need for any anti-aliasing techniques: transformations do not disrupt the accuracy of the geometry. Our method does not suffer from any scaling problem and was tested with objects of 300 to 10^7 polygons, such as the David of Michelangelo, achieving in all cases on the order of O(b^3) fewer bits for the representation than any other commonly known compression method. The simplicity of our algorithm and the established mathematical background of neural networks, combined with their aptness for hardware implementation, can establish this method as a good solution for polygon compression and, if further investigated, a novel approach for 3D collision, animation and morphing.
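A minimal sketch of the underlying idea, not the authors' network: a small feed-forward net is fitted to map a per-vertex parameter to the vertex coordinates, so the trained weights act as the compressed, analytic representation. The mesh, inputs and network size below are illustrative assumptions.

```python
# Sketch: "compress" a mesh by training a small MLP that reproduces its
# vertex positions from a low-dimensional parametric input.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical mesh: vertices sampled on a sphere, parameterized by (u, v).
n = 2000
u = np.random.rand(n, 2)
theta, phi = u[:, 0] * np.pi, u[:, 1] * 2 * np.pi
verts = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=1)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, tol=1e-6)
net.fit(u, verts)                 # training = compression; weights are stored

decoded = net.predict(u)          # any vertex can be decoded on demand
print("mean reconstruction error:", np.abs(decoded - verts).mean())
```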
A Full Scope Nuclear Power Plant Training Simulator: Design and Implementation Experiences Pedro A. Corcuera Pages: 12-17
ABSTRACT: This paper describes the development of a full scope training simulator for a Spanish nuclear power plant. The simulator is based on a client/server architecture that allows distributed execution in a network, with many users participating in the same simulation. The interface was designed to support the interaction of the operators with the simulator through virtual panels supported by touch screens with high-fidelity graphic displays. The simulation environment is described, including the extension added to facilitate easy operation by instructors. The graphical interface has been developed using component software technology. The appropriate selection of hardware for visualization and interaction, in terms of cost and performance, resulted in a facility much less expensive than the classic hard-panel replica simulators and, at the same time, able to fulfill most of the training requirements. The main features of the simulator are the distributed execution control of the models and the flexibility of design and maintenance of the interface. The benefits of the virtual panels approach include automatic switch repositioning and tagging, configuration flexibility, low maintenance requirements, and the capability to support multiple users distributed across the corporate intranet. After exhaustive validation and testing, training sessions are being conducted successfully.
A Program Recognition and Auto-Testing Approach Wen C. Pai, Chin-Ang Wu Pages: 18-23
ABSTRACT: The goals of software testing are to assess and improve the quality of the software. An important problem in software testing is to determine whether a program has been tested enough with respect to a testing criterion. A technique that reconstructs the program structure and generates test data automatically will help software developers improve software quality efficiently. Program recognition and transformation is a technology that can help maintainers recover a program's structure and consequently carry out software testing properly. In this paper, a methodology that follows the logic of a program and transforms it into the original program graph is proposed. An approach that automatically derives testing paths for a program, so that every block of the program is tested, is provided. A real example is presented to illustrate and show that the methodology is practicable. The proposed methodology allows developers to recover a program's design and supports proper software maintenance.
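The following minimal sketch illustrates the path-derivation step on a recovered program graph: entry-to-exit paths are enumerated by depth-first search until every basic block is covered. The graph, node names and coverage rule are illustrative, not the paper's algorithm.

```python
# Sketch: derive a set of test paths that together cover every block of a
# hypothetical control-flow graph.
def all_paths(graph, entry, exit_node):
    stack = [(entry, [entry])]
    while stack:
        node, path = stack.pop()
        if node == exit_node:
            yield path
            continue
        for succ in graph.get(node, []):
            if succ not in path:          # avoid re-entering loops
                stack.append((succ, path + [succ]))

def covering_paths(graph, entry, exit_node):
    uncovered = {n for n in graph} | {exit_node}
    chosen = []
    for path in all_paths(graph, entry, exit_node):
        if uncovered & set(path):         # keep paths that add new blocks
            chosen.append(path)
            uncovered -= set(path)
        if not uncovered:
            break
    return chosen

cfg = {"B0": ["B1"], "B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
print(covering_paths(cfg, "B0", "B4"))
```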
Conflicts Analysis for Inter-Enterprise Business Process Model Wei Ding, Zhong Tian, Jian Wang, Jun Zhu, Haiqi Liang, Lei Zhang Pages: 24-29
ABSTRACT: Business process (BP) management systems facilitate the understanding and execution of business processes, which tend to change frequently due to both internal and external changes in an enterprise. Therefore, the need for analysis methods to verify the correctness of business process models is becoming more prominent. One key element of such a business process is its control flow. We show how a flow specification may contain certain structural conflicts that could compromise its correct execution. In general, identification of such conflicts is a computationally complex problem and requires the development of effective algorithms specific to the target system language. We present a verification approach and algorithm that employs a condition reachable matrix to identify structural conflicts in inter-enterprise business process models. The main contribution of the paper is a new technique for identifying structural conflicts and satisfying well-defined correctness criteria in inter-enterprise business process models.
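As a minimal illustration of a reachability-matrix check, the sketch below computes transitive reachability over a process graph with Warshall's algorithm and flags two basic structural problems. The example graph and checks are assumptions; the paper's full condition-reachable-matrix criteria are not reproduced.

```python
# Sketch: reachability matrix for a toy process graph and simple structural
# checks (tasks unreachable from the start, tasks that cannot reach the end).
import numpy as np

def reachability(adj):
    r = adj.astype(bool).copy()
    n = len(r)
    for k in range(n):
        for i in range(n):
            if r[i, k]:
                r[i] |= r[k]          # Warshall transitive closure
    return r

# Hypothetical process: node 0 = start, node 4 = end
adj = np.array([[0, 1, 1, 0, 0],
                [0, 0, 0, 1, 0],
                [0, 0, 0, 0, 0],      # task 2 is a dead end
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0]])
r = reachability(adj)
print("unreachable from start:", [i for i in range(5) if i != 0 and not r[0, i]])
print("cannot reach end:", [i for i in range(4) if not r[i, 4]])
```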
Glasses Removal from Facial Images with Recursive PCA Reconstruction You Hwa Oh, Sang Chul Ahn, Ig-Jae Kim, Yong-Moo Kwon, Hyoung-Gon Kim Pages: 30-34
ABSTRACT: This paper proposes a new glasses removal method that takes a color frontal facial image and generates a gray-scale glassless facial image. The proposed method is based on recursive PCA reconstruction. To generate glassless images, the region occluded by the glasses must be found, and a good reconstructed image to compensate it with must be obtained. The recursive PCA reconstruction provides both simultaneously and finally produces glassless facial images. This paper shows the effectiveness of the proposed method through experimental results.
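A minimal sketch of the recursive idea, under assumptions not taken from the paper (toy basis, fixed threshold and iteration count): project the face onto a PCA basis learned from glassless faces, mark pixels the model cannot explain as the occluded region, replace them with the reconstruction, and iterate.

```python
# Sketch: recursive PCA reconstruction for occlusion compensation.
import numpy as np

def recursive_pca_reconstruct(face, mean, basis, n_iter=10, thresh=0.15):
    """face: flattened gray image; basis: (k, d) orthonormal PCA components."""
    x = face.copy()
    occluded = np.zeros_like(face, dtype=bool)
    for _ in range(n_iter):
        coeffs = basis @ (x - mean)
        recon = mean + basis.T @ coeffs
        residual = np.abs(face - recon)
        occluded = residual > thresh           # likely glasses pixels
        x = np.where(occluded, recon, face)    # compensate occluded region only
    return x, occluded

# Toy usage (a real system would learn mean/basis from a glassless face set)
d, k = 64 * 64, 20
mean = np.zeros(d)
basis = np.linalg.qr(np.random.randn(d, k))[0].T
face = np.random.rand(d)
glassless, mask = recursive_pca_reconstruct(face, mean, basis)
```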
Optimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform Torsten Palfner, Alexander Mali, Erika Müller Pages: 35-39
ABSTRACT: In this paper, a compression algorithm is introduced which allows the efficient storage and transmission of stereo images. The coder uses a block-based disparity estimation/compensation technique to decorrelate the image pair. To code both images progressively, we have adapted the well-known SPIHT coder to stereo images. The results presented in this paper are better than any other results published so far.
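The sketch below shows block-based disparity estimation in its simplest form: for each block of one view, search horizontally in the other view for the best SAD match. Block size, search range and the toy image pair are assumptions; the disparity-compensated residual is what a SPIHT-style wavelet coder would then encode.

```python
# Sketch: block-based disparity estimation between a stereo image pair.
import numpy as np

def estimate_disparity(ref, tgt, block=8, search=16):
    """For each block of tgt, find the best horizontal SAD match in ref."""
    h, w = tgt.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = tgt[by:by + block, bx:bx + block]
            best_sad, best_d = np.inf, 0
            for d in range(0, min(search, w - block - bx) + 1):
                cand = ref[by:by + block, bx + d:bx + d + block]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by // block, bx // block] = best_d
    return disp

left = np.random.rand(64, 64)
right = np.roll(left, -4, axis=1)   # toy pair with roughly 4-pixel disparity
print(estimate_disparity(left, right))
```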
Seamlessly and Coherently Locating Interesting Mirrors on the Web Neilze Dorta, Pierre Sens, Christian Khoury Pages: 40-44
ABSTRACT: Nowadays, the World Wide Web is used mostly as a common medium for information sharing. Therefore, locating an object on this large-scale dynamic medium tends to be more and more difficult. Content Distribution Networks, e.g. Akamai, and global naming services, e.g. Globe, do more or less than what is required by most users. In this paper, we are interested in discovering, advertising, and transparently locating mirrors of interest to a group of users. Our solution, AR, is user-centric; it uses cooperation among organizations to discover, publicize and coherently locate new mirrors that are of interest to them. Access transparency is achieved through a naming service that manages the different aliases for the same replica. Each user is given the consistency guarantee that no document delivered will be older than one viewed before. The system scales geographically due to the epidemic and asynchronous nature of the cooperation protocol. We propose a methodology for creating homogeneous groups with common interests using collected Web traces, and then give a glimpse of the potential benefits of using AR. This opens a path towards making mirroring ubiquitous, hence fostering a better use of the Internet and its resources. A prototype has been implemented in Java and will be used in future real-world tests for more accurate and realistic results.
The Content-Driven Preprocessor of Images for MPEG-7 Descriptions Jiann-Jone Chen, Cheng-Yi Liu, Feng-Cheng Chang Pages: 45-50
ABSTRACT: A content-driven preprocessor (CDP) of images is proposed to activate the right MPEG-7 description tools for the feature contents recognized in an image. It determines automatically whether certain feature contents, such as color, texture or shape features, are present in an image and then performs processing to generate the corresponding descriptors. The CDP's most distinctive characteristic is that there are no redundant computations from the image content categorization down to the descriptor generation. Experiments show that the proposed CDP framework effectively categorizes images with accuracy of up to 99%. We also propose a practical content-based image retrieval (CBIR) system which integrates the CDP framework with a user-friendly MPEG-7 testbed. Simulations of CDP-based CBIR demonstrate that the CDP helps considerably in improving subjective retrieval performance. This CBIR framework provides excellent flexibility, so it can easily be adapted to meet specific application requirements.
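A minimal sketch of the content-driven idea: cheap checks decide which descriptor generators to invoke, so only the relevant tools run. The checks, thresholds and stand-in "descriptors" are assumptions, not the paper's categorization rules or real MPEG-7 tools.

```python
# Sketch: categorize an image's feature content, then run only the
# corresponding descriptor generators (no redundant computation).
import numpy as np

def categorize(img):
    gray = img.mean(axis=2)
    gx, gy = np.gradient(gray)
    edge_density = (np.hypot(gx, gy) > 0.1).mean()
    color_spread = img.reshape(-1, 3).std(axis=0).mean()
    return {
        "color":   color_spread > 0.05,   # worth a color descriptor
        "texture": edge_density > 0.10,   # worth a texture descriptor
        "shape":   edge_density > 0.25,   # strong contours -> shape descriptor
    }

def describe(img, generators):
    flags = categorize(img)
    return {k: gen(img) for k, gen in generators.items() if flags.get(k)}

img = np.random.rand(128, 128, 3)
out = describe(img, {"color": lambda im: im.mean(axis=(0, 1)),
                     "texture": lambda im: im.std(),
                     "shape": lambda im: None})
print(sorted(out))
```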
The Theoretical Framework of Agent-Based Monitoring for Use in the Production of Gellan Gum in a Microbial Fermentation System Eleni Mangina, Ioannis Giavasis Pages: 51-56
ABSTRACT: This paper introduces the application of an agent-based software system for monitoring the process of gellan gum production. Gellan gum (a biopolymer) is produced at industrial scale in bioreactors (sealed vessels) where the microbial culture is grown in a liquid fermentation medium under controlled environmental conditions (temperature, pH, aeration and agitation). The multi-agent system views the monitoring problem as the interaction of simple independent software entities, for effective use of the available data. The outcome of this agent-based solution includes automatic on-line data acquisition and correlation of the most important parameters. Within such a dynamic process as gellan gum production, certain parameters (such as biomass, gellan and glucose concentration) change continuously and have to be measured and controlled. Automatic knowledge derivation from past cases through the multi-agent software system can also be of future benefit.
Tweek: Merging 2D and 3D Interaction in Immersive Environments Patrick L Hartling, Allen D Bierbaum, Carolina Cruz-Neira Pages: 57-61
ABSTRACT: Developers of virtual environments (VEs) face an often difficult problem: users must have some way to interact with the virtual world. The application designers must determine how to map available inputs (button presses, hand gestures, etc.) to actions within the VE. As a result, interaction within a VE is perhaps the most limiting factor for the development of complex virtual reality (VR) applications. For example, interactions with large amounts of data, alphanumeric information, or abstract operations may not map well to current VR interaction methods, which are primarily spatial. Instead, two-dimensional (2D) interaction could be more effective. Current practices often involve the development of customized interfaces for each application. The custom interfaces try to match the capabilities of the available input devices. To address these issues, we have developed a middleware tool called Tweek. Tweek presents users with an extensible 2D Java graphical user interface (GUI) that communicates with VR applications. Using this tool, developers are free to create a GUI that provides extended capabilities for interacting with a VE. This paper covers in detail the design of Tweek and its use with VR Juggler, an open source virtual reality development tool.
The Use of Multicriteria Decision Methods in Planning and Design Cüneyt Elker Pages: 62-67
ABSTRACT: Fields associated with design and physical planning are appropriate domains for the use of multicriteria decision methods. Various methods are compared, and the “weighted summation” technique is put forward as the most suitable method for the needs of design and planning. The case of city planning is used to illustrate the methodology. The phases of “design of alternatives”, “determination of objectives and criteria” and “evaluation” are described with the help of examples. The paper concludes with principles and problems in the use of multicriteria decision methods in design and planning.
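The weighted summation technique itself is straightforward, and the minimal sketch below shows it on made-up data: normalize each criterion score, multiply by the criterion weight, and sum per alternative. The alternatives, criteria and weights are illustrative only.

```python
# Sketch: weighted summation over normalized criteria for ranking plans.
import numpy as np

scores = np.array([[3.0, 40.0, 0.7],     # plan A: cost, capacity, quality
                   [5.0, 55.0, 0.9],     # plan B
                   [4.0, 70.0, 0.6]])    # plan C
weights = np.array([0.5, 0.3, 0.2])      # criterion weights, sum to 1
benefit = np.array([False, True, True])  # cost is a "lower is better" criterion

norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]   # flip cost-type criteria
totals = norm @ weights
print("ranking (best first):", np.argsort(-totals))
```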
Trend Estimation of Blood Glucose Level Fluctuations Based on Data Mining Masaki Yamaguchi, Shigenori Kambe, Karin Wårdell, Katsuya Yamazaki, Masashi Kobayashi, Nobuaki Honda, Hiroaki Tsutsui, Chosei Kaseda Pages: 68-73
ABSTRACT: We have developed calorie-calculating software that calculates and records the total calorific food intake when a meal menu is chosen with a computer mouse. The purpose of this software was to simplify data collection throughout a person’s normal life, even for inexperienced computer operators. Three portable commercial devices were also prepared: a blood glucose monitor, a metabolic rate monitor and a mobile computer, all linked to the calorie-calculating software. Time-course changes in blood glucose level, metabolic rate and food intake were measured with these devices over a 3-month period. Based on the data collected in this study, we could predict the next morning's blood glucose level (FBG) by modeling with data mining. Although a large error rate was found for predicting the absolute value, conditions could be found that improved the accuracy of predicting trends in blood glucose level fluctuations to up to 90%. However, in order to further improve the accuracy of estimation, it was necessary to obtain further details about the patients’ lifestyle or to optimise the input variables for each patient, rather than to collect data over longer periods.
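A minimal sketch of the prediction step, on synthetic data and with a plain regression model standing in for the paper's data-mining procedure: map today's intake, metabolic rate and glucose level to the next morning's value, then score how often the predicted trend (rise or fall) is correct.

```python
# Sketch: predict next-morning blood glucose and evaluate trend accuracy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = 90
intake = rng.normal(2000, 300, days)        # kcal/day (synthetic)
metabolic = rng.normal(1800, 200, days)     # kcal/day (synthetic)
fbg = 100 + 0.02 * (intake - metabolic) + rng.normal(0, 5, days)

X = np.column_stack([intake[:-1], metabolic[:-1], fbg[:-1]])
y = fbg[1:]                                 # next-morning value

model = LinearRegression().fit(X[:60], y[:60])
pred = model.predict(X[60:])
trend_ok = np.sign(pred - fbg[60:-1]) == np.sign(y[60:] - fbg[60:-1])
print("trend accuracy:", trend_ok.mean())
```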
Internet Speech Recognition Server Miroslav Holada Pages: 74-77
ABSTRACT: The goal of this article is to describe the design of an Internet speech recognition server. The motivation for building this server is that the communication speed of intranets and the Internet is growing rapidly, so the speech recognition process can be divided into client and server parts. Such a solution would allow wider use of speech recognition technologies, because all users, including those with relatively obsolete hardware incapable of speech recognition, would be served with speech recognition by our server. The article discusses net data flow reduction and the client-server structure, and presents two demo applications.
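The sketch below illustrates the client/server split and the data-flow reduction it enables: the client condenses raw audio into compact feature frames before transmission, and the server runs recognition on those frames. The feature scheme and the toy "recognizer" are assumptions, not the article's actual components.

```python
# Sketch: client-side feature extraction reduces what crosses the network;
# the server only ever sees the compact frames.
import numpy as np

def client_extract(audio, frame=400, n_coeffs=13):
    frames = audio[:len(audio) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    feats = np.log1p(spectra[:, :n_coeffs])          # crude spectral features
    return feats.astype(np.float32)                  # payload sent to server

def server_recognize(feats):
    return "speech" if feats.mean() > 0.5 else "silence"   # stand-in recognizer

audio = np.random.randn(16000)                       # 1 s of audio at 16 kHz
feats = client_extract(audio)
print("bytes sent:", feats.nbytes, "vs raw:", audio.astype(np.int16).nbytes)
print(server_recognize(feats))
```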
Home Automated Telemanagement (HAT) System to Facilitate Self-Care of Patients with Chronic Diseases Joseph Finkelstein, Rajesh Khare, Deepal Vora Pages: 78-82
ABSTRACT: Successful patient self-management requires a multidisciplinary approach that includes regular patient assessment, disease-specific education, control of medication adherence, implementation of health behavior change models and social support. Existing systems for computer-assisted disease management do not provide this multidisciplinary patient support and do not address treatment compliance issues. We developed the Home Automated Telemanagement (HAT) system for patients with different chronic health conditions to facilitate their self-care. The HAT system consists of a home unit, a HAT server, and clinician units. Patients at home regularly use a palmtop or a laptop connected to a disease monitor. Each HAT session consists of self-testing, feedback, and educational components. The self-reported symptom data and the objective results obtained from disease-specific sensors are automatically sent from patient homes to the HAT server in the hospital. Any web-enabled device can serve as a clinician unit to review patient results. The HAT system monitors self-testing results and patient compliance. The HAT system has been implemented and tested in patients receiving anticoagulation therapy and in patients with asthma, COPD and other health conditions. Evaluation results indicated a high level of acceptance of the HAT system by the patients and a positive impact on main clinical outcomes and patient satisfaction with medical care.
Using Simulation to Increase Yields in Chemical Engineering William C. Conley Pages: 83-86
ABSTRACT: Trying to increase the yield, profit or efficiency (less pollution) of chemical processes is a central goal of the chemical engineer in theory and practice. Certainly, sound training in chemistry, business and pollution control helps the engineer set up optimal chemical processes. However, the ever-changing demands of customers and business conditions, plus the multivariate complexity of the chemical business, can make optimization challenging.
Mathematical tools such as statistics and linear programming have certainly been useful to chemical engineers in their pursuit of optimal efficiency. However, some processes can be modeled linearly and some cannot. Therefore, presented here is an industrial chemical process with potentially five variables affecting the yield. Data from over one hundred runs of the process have been collected, but it is not known initially whether the yield relationship is linear or nonlinear. Therefore, the CTSP multivariate correlation coefficient will be calculated for the data to see if a relationship exists among the variables.
Then once it is proven that there is a statistically significant relationship, an appropriate linear or nonlinear equation can be fitted to the data, and it can be optimized for use in the chemical plant.
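A minimal sketch of that workflow on synthetic run data, with ordinary multiple correlation and a quadratic response surface standing in for the CTSP coefficient and whatever model the data ultimately support: test for a relationship, fit a model, then optimize it over the operating range.

```python
# Sketch: relationship check, nonlinear fit, and optimization of the fitted
# yield surface for a five-variable process.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(120, 5))                 # 120 recorded runs
yield_ = 80 - 30 * (X[:, 0] - 0.6) ** 2 + 5 * X[:, 1] + rng.normal(0, 1, 120)

def quad_features(x):
    x = np.atleast_2d(x)
    return np.hstack([np.ones((len(x), 1)), x, x ** 2])

coef, *_ = np.linalg.lstsq(quad_features(X), yield_, rcond=None)
pred = quad_features(X) @ coef
r2 = 1 - ((yield_ - pred) ** 2).sum() / ((yield_ - yield_.mean()) ** 2).sum()
print("multiple R^2:", round(r2, 3))                 # is there a relationship?

best = minimize(lambda x: -(quad_features(x) @ coef)[0],
                x0=np.full(5, 0.5), bounds=[(0, 1)] * 5)
print("suggested settings:", best.x.round(2))
```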
Fingerprint and Face Identification for Large User Population Teddy Ko, Rama Krishnan Pages: 87-92
ABSTRACT: The main objective of this paper is to present the state of the art of current biometric (fingerprint and face) technology, the lessons learned during the investigative analysis performed to ascertain the benefits of using combined fingerprint and facial technologies, and recommendations for the use of currently available fingerprint and face identification technologies for optimum identification performance in applications with large user populations. Prior fingerprint and face identification test studies have shown that identification accuracy is strongly dependent on the image quality of the biometric inputs. Recommended methodologies for ensuring the capture of acceptable-quality fingerprint and facial images of subjects are also presented in this paper.
An Adaptive Method For Texture Characterization In Medical Images Implemented on a Parallel Virtual Machine Socrates A. Mylonas Pages: 93-99
ABSTRACT: This paper describes the application of a new texture characterization algorithm for the segmentation of medical ultrasound images. The morphology of these images poses significant problems for the application of traditional image processing techniques and their analysis has been the subject of research for several years. The basis of the algorithm is an optimum signal modelling algorithm (Least Mean Squares-based), which estimates a set of parameters from small image regions. The algorithm has been converted to a structure suitable for implementation on a Parallel Virtual Machine (PVM) consisting of a Network of Workstations (NoW), to improve processing speed. Tests were initially carried out on standard textured images. This paper describes preliminary results of the application of the algorithm in texture discrimination and segmentation of medical ultrasound images. The images examined are primarily used in the diagnosis of carotid plaques, which are linked to the risk of stroke.
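A minimal sketch of the feature-extraction idea described here: an LMS adaptive predictor is run over each small image region, predicting a pixel from a few causal neighbours, and the converged weights serve as texture parameters for that region. The neighbourhood, step size and region size are illustrative assumptions, and the parallel (PVM) distribution of regions is not shown.

```python
# Sketch: per-region LMS prediction weights as texture parameters.
import numpy as np

def lms_texture_params(region, mu=0.01):
    h, w = region.shape
    weights = np.zeros(3)                      # left, up, up-left neighbours
    for y in range(1, h):
        for x in range(1, w):
            nb = np.array([region[y, x - 1], region[y - 1, x], region[y - 1, x - 1]])
            err = region[y, x] - weights @ nb
            weights += mu * err * nb           # LMS update
    return weights

img = np.random.rand(64, 64)
features = [lms_texture_params(img[y:y + 16, x:x + 16])
            for y in range(0, 64, 16) for x in range(0, 64, 16)]
print(np.round(features[0], 3))                # one feature vector per region
```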
Multigradient for Neural Networks for Equalizers Chulhee Lee, Jinwook Go, Heeyoung Kim Pages: 100-104
ABSTRACT: Recently, a new training algorithm, multigradient, has been published for neural networks, and it has been reported that the multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as equalizers in communications, they can be viewed as classifiers. In this paper, we apply the multigradient algorithm to train neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.
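The sketch below illustrates the equalizer-as-classifier setup on a toy channel: BPSK symbols pass through a short FIR channel with noise, and a small network is trained on windows of received samples to recover the transmitted symbol. Standard backpropagation is used here as a stand-in, since the multigradient update rule is not detailed in the abstract; the channel and window length are assumptions.

```python
# Sketch: a neural-network equalizer treated as a symbol classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
symbols = rng.choice([-1, 1], size=5000)
received = np.convolve(symbols, [1.0, 0.5, 0.2], mode="same")
received += rng.normal(0, 0.3, size=received.shape)

win = 5
X = np.array([received[i:i + win] for i in range(len(received) - win)])
y = symbols[win // 2: win // 2 + len(X)]       # symbol at the window centre

eq = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X[:4000], y[:4000])
print("symbol accuracy:", eq.score(X[4000:], y[4000:]))
```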