A database of PIXE data accumulated at NMCC has been constructed. To fill gaps in the database, new data are obtained, as far as possible, for sample types that are poorly represented. In addition, data under different measuring conditions are obtained for several samples. Because the number of γ-ray spectra obtained with an HPGe detector for the analysis of light elements such as fluorine is overwhelmingly small compared with that of ordinary PIXE spectra, γ-ray spectra and fluorine concentrations are obtained, as far as possible, for food, environmental and hair samples. In addition, data taken with an in-air PIXE system have been obtained for various samples. As a result, a database covering various research fields has been constructed, and it is expected to be useful for researchers who make use of analytical techniques. It is hoped that this work will encourage many researchers to participate in the database and to cross-calibrate with one another in order to establish reliable analytical techniques. Moreover, the final goal of the database is to establish control concentration values for typical samples. As a first step toward establishing such control values, average elemental concentrations and their standard deviations in hair samples taken from 405 healthy Japanese subjects are obtained and tabulated according to sex and age.
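As a hedged illustration of the tabulation step described above, the following Python sketch groups hypothetical hair-sample measurements by sex and age band and reports mean concentrations with standard deviations. The column layout and values are invented for illustration, not taken from the NMCC database.

```python
import statistics
from collections import defaultdict

# Hypothetical records: (sex, age, element, concentration in ppm).
# Values are illustrative only, not NMCC measurements.
samples = [
    ("F", 34, "Zn", 185.0), ("F", 36, "Zn", 192.5),
    ("M", 52, "Zn", 170.1), ("M", 55, "Zn", 165.4),
    ("M", 53, "Ca", 410.0), ("M", 58, "Ca", 395.2),
]

def age_band(age, width=10):
    """Bucket an age into a decade band such as '50-59'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

groups = defaultdict(list)
for sex, age, element, conc in samples:
    groups[(sex, age_band(age), element)].append(conc)

for (sex, band, element), values in sorted(groups.items()):
    mean = statistics.mean(values)
    sd = statistics.stdev(values) if len(values) > 1 else 0.0
    print(f"{sex} {band} {element}: {mean:.1f} +/- {sd:.1f} ppm (n={len(values)})")
```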
A commercial application of World-Wide Web concepts is described. It is shown how a real customer problem can be solved rapidly and cost-effectively by means of the WWW framework. The application was developed at the Library Center of the University of Bologna (CIB).
Let A and B be two groups of up to n elements distributed on the first row of an n × n reconfigurable mesh, and let CA,B be a subset of the Cartesian product A × B satisfying some unknown condition C. Only one broadcasting step is needed to compute CA,B's elements. However, the problem of moving CA,B's elements to the first row in optimal time (so that they can be further processed) is not trivial. The conditional Cartesian product (CCP) problem is to move CA,B's elements to the first row in as few steps as possible. This requires organizing the Cartesian product operation so that CA,B's elements are optimally scattered in the mesh, allowing O(n) elements to be retrieved in a single step rather than the far fewer elements retrievable per step if the Cartesian product is not optimized. We give a deterministic algorithm that solves this problem for any A, B and C, and an "adaptive" randomized algorithm whose optimality is verified by experimental results. Note that the CCP is a case where we overcome an inherent limitation of the reconfigurable mesh, namely the inability to perform fast routing of packets located in a small area.
We also present the model of production systems, in which computation is realized by executing Cartesian products of subsets in a common element space. Production systems are useful for database applications and expert systems, and can even serve as a general parallel programming language. Solving the CCP problem allows us to devise an efficient implementation of production systems on the reconfigurable mesh. In this way the reconfigurable mesh is shown to be an attractive architecture both for database machines and for parallel programming.
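The following Python sketch illustrates the CCP setting on a simulated n × n mesh: the pairs of A × B satisfying a condition C are identified, and the point of the optimization is to scatter them so that each mesh row holds O(n) of them and the result can be drained to the first row one row per step. The round-robin scattering rule here is a simplification for illustration, not the paper's deterministic or randomized algorithm.

```python
# Simulated n x n mesh: pairs of A x B satisfying C are scattered
# round-robin across rows, so each nonempty row can be moved to the
# first row in one step and the result drains in about |C_AB|/n steps.
# This scattering rule is an illustrative simplification only.

def conditional_cartesian_product(A, B, C, n):
    mesh_rows = [[] for _ in range(n)]   # row buffers of the mesh
    r = 0
    for a in A:
        for b in B:
            if C(a, b):                  # condition C decides membership
                mesh_rows[r].append((a, b))
                r = (r + 1) % n          # scatter evenly over the n rows
    # Drain phase: one nonempty row reaches the first row per step.
    steps = sum(1 for row in mesh_rows if row)
    result = [pair for row in mesh_rows for pair in row]
    return result, steps

pairs, steps = conditional_cartesian_product(
    range(8), range(8), lambda a, b: (a + b) % 3 == 0, n=8)
print(len(pairs), "pairs collected in", steps, "steps")
```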
We present a simple systolic algorithm for implementing a dictionary machine in VLSI technology. Our design uses a dynamic, global tree rebalancing scheme to attain high system throughput. The scheme is simple to implement and requires little sophistication in the design of the processing nodes. Results from analysis and simulation show that our algorithm has optimal response time and achieves an average latency close to 1. This represents a significant improvement over many previous designs. Unlike most parallel dictionary machines reported in the literature, our approach requires no compression operations.
HEP collaborations are deploying grid technologies to address petabyte-scale data processing challenges. In addition to file-based event data, HEP data processing requires access to terabytes of non-event data (detector conditions, calibrations, etc.) stored in relational databases. Existing database access control technologies for grid computing are limited to encrypted message transfers and are inadequate for delivering non-event data in these amounts. To overcome these database access limitations one must go beyond the existing grid infrastructure. We propose a hyperinfrastructure of distributed database services that implements efficient, secure data access methods, and we introduce several technologies laying its foundation. We present efficient secure data transfer methods and secure grid query engine technologies federating heterogeneous databases. Lessons learned in the production environment of the ATLAS Data Challenges are presented.
This paper investigates the utility of texture and color for iris recognition systems. It improves system accuracy with a reduced feature vector of size just 1 × 3 and lowers both the false acceptance rate (FAR) and the false rejection rate (FRR). It avoids the iris normalization process traditionally used in iris recognition systems. The proposed method is compared with existing methods. Experimental results indicate that the proposed method using only color achieves an accuracy of 99.9993, a FAR of 0.0160, and an FRR of 0.0813, with a computation time of 947.7 ms.
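A minimal sketch of a 1 × 3 color feature of the kind the abstract describes: the per-channel mean of an already-segmented iris region, matched by nearest neighbor. The segmentation step, the choice of RGB, and the distance threshold are assumptions made for illustration, not the paper's exact method.

```python
import numpy as np

def color_feature(iris_pixels):
    """1 x 3 feature: per-channel mean of an RGB iris region.

    `iris_pixels` is an (N, 3) array of pixels already segmented
    from the eye image; segmentation itself is out of scope here.
    """
    return iris_pixels.mean(axis=0)

def match(feature, gallery, threshold=12.0):
    """Nearest-neighbor matching; the threshold trades FAR against FRR."""
    ids = list(gallery)
    dists = [np.linalg.norm(feature - gallery[i]) for i in ids]
    best = int(np.argmin(dists))
    return ids[best] if dists[best] <= threshold else None

gallery = {"subject_a": np.array([120.0, 80.0, 60.0]),
           "subject_b": np.array([90.0, 95.0, 70.0])}
probe = color_feature(np.array([[121, 79, 61], [119, 82, 58]], dtype=float))
print(match(probe, gallery))   # -> subject_a
```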
Single sample face recognition (SSFR) is a challenging research problem in which only one face image per person is available for training. Moreover, the face image may vary in pose, expression, illumination, occlusion, etc., rendering the problem more complex. Several methods for solving SSFR have been suggested in the literature. Here, we provide a comprehensive review of the methods proposed in the last decade for solving the SSFR problem and introduce a novel taxonomy for them. We divide SSFR methods broadly into five categories, viz. (i) feature-based, (ii) virtual sample generation-based, (iii) generic database-based, (iv) hybrid, and (v) other methods. We also briefly review the face databases used for evaluating single sample face recognition methods. Furthermore, the performance of the methods is analyzed in terms of the classification accuracy reported in the literature. Finally, we suggest some future directions for researchers and practitioners working in this fascinating research area.
Providing an efficient mining algorithm to discover recent frequent XML user query patterns is crucial, as many applications use XML to represent data over the Internet. These recent frequent XML query patterns can be used to design an index mechanism or a cache, and thus enhance XML query performance. Several XML query pattern stream mining algorithms have been proposed that record user queries in the system and discover the recent frequent XML query patterns over a stream. In this paper, user queries are modeled as a stream of XML queries, and the recent frequent XML query patterns are mined over that stream. Data-stream mining differs from traditional data mining in that its input is a data stream rather than a static database. To facilitate the one-pass mining process, novel schemes (XstreamCode and XstreamList) are devised within the proposed mining algorithm, X2StreamMiner. X2StreamMiner not only reduces the memory space required but also improves mining performance; simulation results show that it is both efficient and scalable. The paper makes two major contributions. First, the novel schemes encode and store the information of user queries in an XML query stream. Second, based on these two schemes, the efficient stream mining algorithm X2StreamMiner discovers the recent frequent XML query patterns.
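As a hedged sketch of the one-pass idea (not the XstreamCode/XstreamList structures themselves, which the paper defines), the following Python fragment counts query-pattern occurrences over a stream with exponential decay so that recent patterns dominate; each query-pattern tree is assumed to be serialized to a canonical string key beforehand.

```python
from collections import defaultdict

class RecentPatternCounter:
    """One-pass, decayed counting of serialized XML query patterns.

    A simplified stand-in for the paper's XstreamCode/XstreamList
    structures: recency is modeled by decaying old counts, and small
    counts are pruned to bound memory.
    """

    def __init__(self, decay=0.9, min_support=2.0):
        self.decay = decay
        self.min_support = min_support
        self.counts = defaultdict(float)

    def observe(self, pattern_key):
        # Age all existing counts, then credit the new arrival.
        for key in list(self.counts):
            self.counts[key] *= self.decay
            if self.counts[key] < 0.01:        # prune to bound memory
                del self.counts[key]
        self.counts[pattern_key] += 1.0

    def recent_frequent(self):
        return {k: c for k, c in self.counts.items() if c >= self.min_support}

miner = RecentPatternCounter()
for q in ["/book/title", "/book/title", "/book/author", "/book/title"]:
    miner.observe(q)
print(miner.recent_frequent())   # -> {'/book/title': ...}
```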
In database applications, access control security layers are mostly built from tools provided by database management system vendors and deployed on the same servers that contain the data to be protected. This solution has several drawbacks, among which we emphasize: (1) if policies are complex, their enforcement can degrade the performance of database servers; (2) when modifications to the established policies imply modifications to the business logic (usually deployed at the client side), there is no option but to modify the business logic in advance; and (3) malicious users can issue CRUD expressions systematically against the DBMS hoping to identify a security gap. To overcome these drawbacks, in this paper we propose an access control stack characterized by the following: most of the mechanisms are deployed at the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This new approach to enforcing access control in database applications is expected to contribute positively to the state of the art in the field.
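A minimal sketch of the client-side idea, under stated assumptions: the application calls named, vetted operations instead of composing CRUD expressions, and the set of allowed operations can be swapped at runtime when policies change. The policy format, operation names, and schema below are hypothetical, not the paper's implementation.

```python
import sqlite3

# Hypothetical client-side access layer: the application never builds
# CRUD expressions itself; it invokes named operations whose SQL is
# fixed and parameterized, and the operation table can be replaced at
# runtime when the security policy evolves.
POLICY_V1 = {
    "list_orders": "SELECT id, total FROM orders WHERE customer = ?",
}
POLICY_V2 = dict(POLICY_V1,
                 close_order="UPDATE orders SET status = 'closed' WHERE id = ?")

class AccessLayer:
    def __init__(self, conn, policy):
        self.conn, self.policy = conn, policy

    def update_policy(self, policy):          # runtime policy refresh
        self.policy = policy

    def call(self, op, *args):
        if op not in self.policy:
            raise PermissionError(f"operation {op!r} not permitted")
        return self.conn.execute(self.policy[op], args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'acme', 9.5, 'open')")
layer = AccessLayer(conn, POLICY_V1)
print(layer.call("list_orders", "acme"))
layer.update_policy(POLICY_V2)                # policy evolves, client code unchanged
layer.call("close_order", 1)
```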
Hierarchical metric-space clustering methods have been commonly used to organize proteomes into taxonomies. Consequently, it is often anticipated that hierarchical clustering can be leveraged as a basis for scalable database index structures capable of managing the hyper-exponential growth of sequence data. The M-tree is one such data structure, specialized for managing large data sets on disk.
We explore the application of M-trees to the storage and retrieval of peptide sequence data. Exploiting a technique first suggested by Myers, we organize the database as records of fixed-length substrings. Empirical results are promising. However, metric-space indexes are subject to "the curse of dimensionality", and the ultimate performance of an index is sensitive to the quality of its initial construction. We introduce a new hierarchical bulk-load algorithm that alternates between top-down and bottom-up clustering to initialize the index. On the yeast proteomes, the bi-directional bulk load produces a more effective index than the existing M-tree initialization algorithms.
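The substring-record organization attributed to Myers can be sketched independently of the M-tree itself: each protein sequence is cut into fixed-length, overlapping windows that become the records to be indexed in the metric space. The window length and stride below are illustrative choices, not values from the paper.

```python
def substring_records(sequences, k=8, stride=1):
    """Cut each sequence into fixed-length windows (the indexed records).

    Overlapping windows (stride < k) let local matches be found even
    when they straddle window boundaries; k and stride are tunable.
    """
    records = []
    for seq_id, seq in sequences.items():
        for i in range(0, max(len(seq) - k + 1, 1), stride):
            records.append((seq_id, i, seq[i:i + k]))
    return records

proteins = {"P1": "MKTAYIAKQRQISFVKSHFSRQ", "P2": "MKLVINGKTLKGE"}
for rec in substring_records(proteins, k=8, stride=4)[:4]:
    print(rec)   # (sequence id, offset, fixed-length window)
```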
This paper discusses significant issues in the selection of a standardized set of the “best” software metrics to support a software reuse program. The discussion illustrates the difficulty of selecting a standardized set of reuse metrics, because the “best” reuse metrics are determined by the unique characteristics of each reuse application. An example of the selection of a single set of reuse metrics for a specific management situation is also presented.
In the domain of law, various real situations are expressed as relations and/or combinations of legal knowledge items (legal concepts, articles of law, etc.). Such knowledge items (legal facts, events) cannot be precisely defined, and legal judgement is performed based on the resemblance between legal knowledge and facts. In our system, vague legal knowledge is stored in a fuzzy relational database, and legal inference is realized as fuzzy inference. The target law for the system is the United Nations Convention on Contracts for the International Sale of Goods (CISG).
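A hedged miniature of the fuzzy-matching step: stored legal knowledge carries membership grades, and a query fact is matched by max-min fuzzy composition rather than exact equality. The relation, concepts, and grades below are invented for illustration, not drawn from the CISG system.

```python
# Toy fuzzy relation: how strongly each stored legal concept applies
# to each kind of fact. Grades in [0, 1] are invented for illustration.
relation = {
    ("late_delivery", "fundamental_breach"): 0.4,
    ("non_delivery",  "fundamental_breach"): 0.9,
    ("late_delivery", "right_to_avoid"):     0.3,
}

def fuzzy_infer(fact_grades, relation):
    """Max-min composition: degree(concept) = max over facts of
    min(degree of fact, grade of (fact, concept) in the relation)."""
    out = {}
    for (fact, concept), grade in relation.items():
        if fact in fact_grades:
            score = min(fact_grades[fact], grade)
            out[concept] = max(out.get(concept, 0.0), score)
    return out

# A case where delivery is judged 0.8 'late' and 0.2 'non-delivered'.
print(fuzzy_infer({"late_delivery": 0.8, "non_delivery": 0.2}, relation))
```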
Response time and processing time have recently become major problems in various online and batch systems. In particular, the access efficiency of the database largely governs the processing speed of database accesses. Because records are inserted far from their logical position, and the storage of deleted records leaves scattered unused space in the database, performance degrades over time. To cover this weakness, the database is reorganized at suitable times to meet the performance requirements of the application. Database reorganization has two purposes: to optimize the database storage and to improve the database structure. However, as database access usually exhibits locality, the data structure may deteriorate only in limited parts of the storage space. Thus, we adopt partial reorganization, which reorganizes only the locally, structurally deteriorated space in the database while recovering structural efficiency comparably to a full reorganization. This paper considers two kinds of structural deterioration that increase with time and occur independently. When the amount of deterioration is estimated at periodic times or at a specified time, the expected cost rates are obtained using the cumulative damage model, and optimal policies that minimize them are discussed analytically. We compute the optimal policies for the two models and compare them numerically.
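The cost trade-off can be made concrete with a small Monte-Carlo sketch; this is an illustration only, since the paper derives the expected cost rates analytically from the cumulative damage model. Here deterioration arrives as random shocks, checks occur periodically, and a partial reorganization is triggered when estimated damage crosses a threshold; the shock law and cost constants are invented.

```python
import random

def simulate_cost_rate(check_interval, threshold, horizon=10_000,
                       c_check=1.0, c_reorg=50.0, seed=1):
    """Monte-Carlo sketch of the expected cost rate per unit time.

    Damage accumulates as random shocks; at each periodic check the
    database is partially reorganized if estimated damage exceeds the
    threshold. Shock law and costs are illustrative, not the paper's.
    """
    rng = random.Random(seed)
    damage, cost = 0.0, 0.0
    for t in range(1, horizon + 1):
        damage += rng.expovariate(1.0)        # one shock per unit time
        if t % check_interval == 0:
            cost += c_check                   # inspection cost
            if damage > threshold:
                cost += c_reorg               # partial reorganization
                damage = 0.0
    return cost / horizon

for T in (5, 10, 20, 40):
    print(f"check every {T:>2}: cost rate {simulate_cost_rate(T, 30.0):.3f}")
```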
In order to support the interaction between coexisting traditional short transactions and long cooperative transactions, we propose a novel timestamp ordering approach. With this method, short transactions can be processed in the traditional way, as if there were no cooperative transactions; therefore they are never blocked by cooperative transactions. Cooperative transactions are not aborted when they conflict with short transactions; rather, they incorporate the recent updates into their own processing. Serializability is preserved both among short transactions and between a cooperative transaction (group) and other short transactions.
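A schematic of the conflict rule described above, hedged as a simplification of the paper's timestamp-ordering method: short transactions write as usual, and a cooperative transaction that finds a newer committed write merges it into its own result instead of aborting. The delta-based merge is an illustrative choice.

```python
class Item:
    def __init__(self, value):
        self.value, self.write_ts = value, 0

def short_write(item, value, ts):
    """Short transactions follow traditional timestamp ordering and
    are never blocked by cooperative transactions."""
    if ts >= item.write_ts:
        item.value, item.write_ts = value, ts

class CooperativeTxn:
    """Long transaction that records its change as a delta; on commit
    it re-applies the delta to whatever value short transactions have
    meanwhile installed, instead of aborting. A simplification of the
    paper's scheme, for illustration only."""

    def __init__(self, item):
        self.item, self.delta = item, 0

    def work(self, delta):
        self.delta += delta           # long-running cooperative edits

    def commit(self, ts):
        # Incorporate any recent short-transaction updates.
        self.item.value += self.delta
        self.item.write_ts = ts

acct = Item(100)
coop = CooperativeTxn(acct)
coop.work(+5)                 # cooperative transaction in progress
short_write(acct, 120, ts=1)  # short transaction commits meanwhile
coop.commit(ts=2)
print(acct.value)             # 125: both effects preserved
```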
Research Facility at Macquarie University Joins International Proteomic Alliance.
Public Databases have Errors and no Quick Fix.
Genetic Viability of Australian Grasslands and Heathlands.
China Drafts Biotechnology Accord with Brazil.
China Calls for Closer Cooperation with US in Biomedical Research.
Opportunities in China's Biomedicine Industry.
China Aims to Be Leading Pharmacy Processing Center.
Geneticist Tsui Lap Chee Appointed as Vice Chancellor of University of Hong Kong.
India to Start Growing Pest-resistant Cotton.
Health Ministry Confirms Fourth Mad Cow Case.
Importance of Japanese Funding to Rice Genomics.
Bio Expo Korea 2002 Scheduled for September.
Korea Establishes an Arctic Science Base.
The Sciences not Popular among Korean Students.
Soya Bean the Only GM Food in the Malaysian Market.
Fear Prevents AIDS Patients from Getting Cheaper Treatment.
Biotech Industry Receives Big Funding.
Genome Institute of Singapore Moves to New Facility.
Scientist Lays out his Vision for the Life Sciences in Singapore.
Industry Conference on Lab Design and Management.
Singapore and University of Washington form Bioengineering Alliance.
Taiwan Hsinchu Science Park's March Trade Highest in One Year.
New Drug for ED Going on Sale Soon.
Traders Urged to Market Higher Quality Rice.
Hybrid Rice Meeting in Vietnam.
This article covers rice functional genomics. It details the generation of T-DNA insertional mutant lines in rice, the use of insertional lines for the isolation of mutants, how a T-DNA vector can be used for entrapping tissue-specific genes and promoters, and the generation of a database of T-DNA flanking sequences.
Bioinformatics plays an important role in the research and development of the life sciences and biotechnology. This paper gives an overview of the bioinformatics service, research and education activities at the Center of Bioinformatics, Peking University, the national node of the European Molecular Biology Network and of the Asia Pacific Bioinformatics Network.
The Oracle relational database management system, with object-oriented extensions and numerous application-driven enhancements, plays a critical role worldwide in managing the exploding volumes of bioinformatics data. Many features of the Oracle product already support the bioinformatics community directly, and several could be exploited more thoroughly by users, service vendors, and Oracle itself to extend that support. This paper presents an overview of Oracle features that support the storage of bioinformatics data and discusses extensibility features that give the product room to grow. Some attention is given to Oracle's own efforts to use that extensibility to address the emerging standardization of many of the complex data and computation requirements of the life sciences.
Palm-Sized PCR Device for Rapid Real-Time Detection of Viruses.
Scientists Uncover New Mechanism for Diabetic Neuropathy.
Chi-Med Initiates a Phase I/II Clinical Trial of Novel FGFR Inhibitor HMPL-453 in China.
Database Boosts Shanghai’s Technology Aim.
Experts Emphasize Scientific and Technological Innovations in Agriculture.
China Enlists AI to Diagnose Breast Cancer.
Study Offers Clue to Memory Formation in the Brain.
China Signed Science Cooperation Agreement with Bolivia.
Biotechnology in China Hits 4 Trillion RMB in 2016.
A Novel Pathway: Adult Hippocampal Neurogenesis Linked to Depression Caused by Inflammation.
BGI Genomics Announces Pricing of Initial Public Offering.
This article addresses the problem of standard Romanization of Arabic names using undiacritized Arabic forms and their corresponding non-standard Romanizations. The Romanization of Arabic names has long been studied and standardized, yet huge non-standard databases of Romanized Arabic names are in use in many private and government agencies; examples include passport holder name databases, phone directories, and geographic name databases. Dealing with such databases can be inefficient and can produce inconsistent results, and converting them into their standard Romanization can help solve these problems.
In this paper, we present an efficient algorithmic software implementation that produces the standard Romanization of Arabic names by utilizing the hints present in the existing non-standard Romanized databases. The results of the software implementation have proven very promising.
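A hedged miniature of the conversion step: each undiacritized Arabic letter maps to a standard Roman form, and the legacy non-standard Romanization is consulted only to disambiguate letters with several candidates. The letter table below covers a few letters only, and the hint rule is an invented simplification of the paper's algorithm.

```python
# Partial letter table; real standards (e.g. for passport names) are
# far larger. Letters with several candidates are disambiguated using
# the legacy non-standard Romanization as a hint -- an invented rule.
STANDARD = {
    "م": ["m"], "ح": ["h"], "د": ["d"],
    "ق": ["q", "k"],        # ambiguous without diacritics
}

def standard_romanization(arabic, legacy_hint):
    out = []
    for ch in arabic:
        candidates = STANDARD.get(ch, ["?"])
        # Prefer the candidate that also appears in the legacy form.
        pick = next((c for c in candidates if c in legacy_hint.lower()),
                    candidates[0])
        out.append(pick)
    return "".join(out)

# Undiacritized input yields consonant-only output: 'mhmd'.
print(standard_romanization("محمد", legacy_hint="Mohammed"))
```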