Database System (database + system)

Selected Abstracts


A 3-D Graphical Database System for Landfill Operations Using GPS

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2002
H. Ping Tserng
Landfill space is an important commodity for landfill companies. It is desirable to develop an efficient tool to assist space management and monitor space consumption. When recyclable wastes or particular waste materials need to be retrieved from the landfill site, the excavation operations become more difficult without an efficient tool to provide waste information (i.e., location and type). In this paper, a methodology and several algorithms are proposed to develop a 3-D graphical database system (GDS) for landfill operations. A 3-D GDS not only monitors the space consumption of a landfill site, but can also provide exact locations and types of compacted waste that would later benefit the landfill excavation operations or recycling programs after the waste is covered. [source]
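
The abstract above describes the core idea of the GDS: recording where each lift of waste is compacted so that space consumption and waste locations can be queried later. The sketch below is purely illustrative and is not the authors' system; it assumes a fixed cell size and simple easting/northing/elevation coordinates, and all class and field names are invented.

```python
# Illustrative sketch (not the authors' implementation): a minimal 3-D grid
# that records the waste type placed in each compacted cell, keyed by
# GPS-derived easting/northing/elevation rounded to a cell size.
from dataclasses import dataclass

CELL_SIZE = 2.0  # metres per grid cell -- an assumed resolution

@dataclass(frozen=True)
class Cell:
    ix: int
    iy: int
    iz: int

def to_cell(easting: float, northing: float, elevation: float) -> Cell:
    """Map a GPS-derived position to a discrete grid cell."""
    return Cell(int(easting // CELL_SIZE),
                int(northing // CELL_SIZE),
                int(elevation // CELL_SIZE))

class LandfillGDS:
    """Toy 3-D graphical database: tracks space consumption and waste type."""
    def __init__(self):
        self.cells: dict[Cell, str] = {}

    def record_compaction(self, easting, northing, elevation, waste_type: str):
        self.cells[to_cell(easting, northing, elevation)] = waste_type

    def consumed_volume(self) -> float:
        return len(self.cells) * CELL_SIZE ** 3

    def locate(self, waste_type: str) -> list[Cell]:
        """Cells holding a given waste type -- useful for later excavation."""
        return [c for c, t in self.cells.items() if t == waste_type]

gds = LandfillGDS()
gds.record_compaction(351204.3, 2767810.1, 104.2, "recyclable-metal")
gds.record_compaction(351206.8, 2767811.0, 104.9, "municipal")
print(gds.consumed_volume(), gds.locate("recyclable-metal"))
```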


HATODAS II – heavy-atom database system with potentiality scoring

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2009
Michihiro Sugahara
HATODAS II is the second version of HATODAS (the Heavy-Atom Database System), which suggests potential heavy-atom reagents for the derivatization of protein crystals. The present expanded database contains 3103 heavy-atom binding sites, four times as many as the previous version. HATODAS II has three new criteria to evaluate the feasibility of the search results: (1) potentiality scoring for the predicted heavy-atom reagents, (2) exclusion of disordered amino acid residues based on secondary structure prediction and (3) consideration of the solvent accessibility of amino acid residues from a homology model. In the point mutation option, HATODAS II suggests possible mutation sites into reactive amino acid residues such as Met, Cys and His, on the basis of multiple sequence alignments of homologous proteins. These new features allow the user to make a well-informed decision about possible heavy-atom derivatization experiments on protein crystals. [source]
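
As a rough illustration of how the three new criteria could combine into a single figure of merit, the following sketch scores hypothetical candidate sites. The weighting scheme, the input values, and the function name are assumptions for illustration only; the actual HATODAS II scoring formula is not reproduced here.

```python
# Illustrative sketch only: the real HATODAS II scoring scheme is not
# reproduced. This shows one way a "potentiality score" could combine the
# three criteria named in the abstract -- database hit counts, predicted
# disorder, and solvent accessibility (all inputs below are hypothetical).
def potentiality_score(site_hits: int,
                       residue_disordered: bool,
                       solvent_accessibility: float) -> float:
    """Return a heuristic score for one candidate heavy-atom binding site.

    site_hits             -- matching sites in the binding-site database
    residue_disordered    -- True if secondary-structure prediction flags disorder
    solvent_accessibility -- relative accessibility (0.0 buried .. 1.0 exposed)
    """
    if residue_disordered:                 # criterion (2): exclude disordered residues
        return 0.0
    base = float(site_hits)               # criterion (1): more known sites, higher score
    return base * solvent_accessibility   # criterion (3): weight by accessibility

candidates = {
    "Met85 + thimerosal": potentiality_score(42, False, 0.7),
    "Cys12 + K2PtCl4":    potentiality_score(15, True,  0.9),   # disordered -> 0
    "His200 + KAu(CN)2":  potentiality_score(30, False, 0.3),
}
print(max(candidates, key=candidates.get))
```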


A Web-Based Interactive Database System for a Transcranial Doppler Ultrasound Laboratory

JOURNAL OF NEUROIMAGING, Issue 1 2006
Mark J. Gorman MD
ABSTRACT Background. Variations in transcranial Doppler (TCD) examination performance techniques and interpretive paradigms between individual laboratories are a common challenge in the practice of TCD. Demand for rapid access to patient ultrasound examination data and report for use in intensive care settings has necessitated a more flexible approach to data management. Both of these issues may benefit from a computerized approach. Methods. We describe the application of a World Wide Web-based database system for use in an ultrasound laboratory. Results. Databasing information while generating a TCD report is efficient. Web accessibility allows rapid and flexible communication of time-sensitive report information and interpretation for more expeditious clinical decision making. Conclusions. Web-based applications can extend the reach and efficiency of traditionally structured medical laboratories. [source]


What is the impact of missing Indigenous status on mortality estimates?

AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 4 2009
An assessment using record linkage in Western Australia
Abstract Background: The analysis aimed to assess the Indigenous status of an increasing number of deaths not coded with a usable Indigenous status from 1997 to 2002 and its impact on reported recent gains in Indigenous mortality. Methods: The Indigenous status of WA death records with a missing Indigenous status was determined through data linkage to three other data sources (Hospital Morbidity Database System, Mental Health Information System and Midwives Notification System). Results: Overall, the majority of un-coded cases were assigned an Indigenous status, with 5.9% identified as Indigenous from the M1 series and 7.5% from the M2 series. The significant increase in Indigenous male life expectancy (LE) of 5.4 years from 1997 to 2002 decreased to 4.0 and 3.6 years using the M1 and M2 series, respectively, but remained significant. For Indigenous females, the non-significant increase in LE of 1.8 years from 1997 to 2002 decreased to 1.0 and 0.6 years. Furthermore, annual all-cause mortality rates were higher than in the original data for both genders, but the significant decline for males remained. Conclusion: Through data linkage, the increasing proportion of deaths not coded with a usable Indigenous status was shown to affect Indigenous mortality statistics in Western Australia, leading to an overestimate of improvements in life expectancy. Greater attention needs to be given to better identification and recording of Indigenous identifiers if real improvements in health status are to be demonstrated. A system that captures an individual's Indigenous status once and reflects it in all health and administrative data systems needs consideration within Australia. [source]
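
The linkage step described in the Methods can be pictured with the following toy sketch, which back-fills a missing status from other collections sharing a common key. It is a simplification under assumed data and a fixed source priority; the study itself used record linkage across the WA health collections, not this toy merge.

```python
# A minimal sketch of the linkage idea, assuming records from each collection
# share a linkage key; all data below are hypothetical.
deaths = [  # death registrations; None means Indigenous status was not coded
    {"id": 1, "indigenous": None},
    {"id": 2, "indigenous": True},
    {"id": 3, "indigenous": None},
]
# Status recorded for the same people in linked collections (hypothetical data)
hospital = {1: False}
mental   = {3: True}
midwives = {}

def assign_status(record):
    """Fill a missing status from the linked collections, in a fixed priority order."""
    if record["indigenous"] is not None:
        return record["indigenous"]
    for source in (hospital, mental, midwives):
        if record["id"] in source:
            return source[record["id"]]
    return None  # still unresolved

resolved = [{**r, "indigenous": assign_status(r)} for r in deaths]
unresolved = sum(1 for r in resolved if r["indigenous"] is None)
print(resolved, f"{unresolved} record(s) remain uncoded")
```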


Semantic knowledge facilities for a web-based recipe database system supporting personalization

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2008
Liping Wang
Abstract The recent explosive proliferation of interesting and useful data over the Web, such as various recipes, while providing people with readily available information, raises the challenging issue of how to manage such non-conventional data effectively. To respond to this challenge, we have been developing a Web-based recipe database system called Dish_Master to manage recipes in a novel way, which not only covers the static recipe attributes but also elucidates the dynamic cooking behaviors. In this paper, we present several semantic knowledge facilities devised in Dish_Master, including a set of semantic modeling and knowledge constructs to effectively represent recipe data, rules and constraints, and user profile aspects. With such a rich set of semantic knowledge facilities, Dish_Master lays a solid foundation for providing users with personalized services such as adaptation and recommendation. Users can benefit from the system's real-time consultation and automatic summarization of cuisine knowledge. The usefulness and elegance of Dish_Master are demonstrated through an experimental prototype system. Copyright © 2007 John Wiley & Sons, Ltd. [source]
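
To make the distinction between static recipe attributes and dynamic cooking behaviors more concrete, here is a minimal sketch of one possible representation together with a simple personalization rule. All class names, fields, and the filtering rule are assumptions for illustration and do not reflect the Dish_Master schema.

```python
# Illustrative sketch, not the Dish_Master model: static attributes plus an
# ordered list of cooking steps, and a user-profile rule used for
# personalised recommendation. All names are assumed.
from dataclasses import dataclass, field

@dataclass
class Step:                    # dynamic cooking behaviour
    action: str
    minutes: int

@dataclass
class Recipe:                  # static attributes + ordered behaviour
    name: str
    cuisine: str
    ingredients: set[str]
    steps: list[Step] = field(default_factory=list)

@dataclass
class UserProfile:
    dislikes: set[str]
    max_minutes: int

def recommend(recipes: list[Recipe], profile: UserProfile) -> list[Recipe]:
    """Rule-based personalisation: drop disliked ingredients and long recipes."""
    ok = []
    for r in recipes:
        total = sum(s.minutes for s in r.steps)
        if not (r.ingredients & profile.dislikes) and total <= profile.max_minutes:
            ok.append(r)
    return ok

mapo = Recipe("Mapo tofu", "Sichuan", {"tofu", "chili bean paste", "pork"},
              [Step("fry pork", 5), Step("simmer tofu", 10)])
print([r.name for r in recommend([mapo], UserProfile({"peanut"}, 30))])
```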


Measuring and modelling the performance of a parallel ODMG compliant object database server

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2006
Sandra de F. Mendes Sampaio
Abstract Object database management systems (ODBMSs) are now established as the database management technology of choice for a range of challenging data intensive applications. Furthermore, the applications associated with object databases typically have stringent performance requirements, and some are associated with very large data sets. An important feature for the performance of object databases is the speed at which relationships can be explored. In queries, this depends on the effectiveness of different join algorithms into which queries that follow relationships can be compiled. This paper presents a performance evaluation of the Polar parallel object database system, focusing in particular on the performance of parallel join algorithms. Polar is a parallel, shared-nothing implementation of the Object Database Management Group (ODMG) standard for object databases. The paper presents an empirical evaluation of queries expressed in the ODMG Query Language (OQL), as well as a cost model for the parallel algebra that is used to evaluate OQL queries. The cost model is validated against the empirical results for a collection of queries using four different join algorithms, one that is value based and three that are pointer based. Copyright © 2005 John Wiley & Sons, Ltd. [source]
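
The difference between the value-based and pointer-based join families evaluated in the paper can be sketched on toy in-memory objects as follows; Polar's parallel operators and cost model are far more involved, and the data and function names below are invented.

```python
# A compact sketch of the two join families: a value-based hash join matches
# on attribute values, while a pointer-based join simply follows stored
# object identifiers (OIDs). Illustrative only.
orders    = [{"oid": 1, "cust_oid": 10, "total": 99.0},
             {"oid": 2, "cust_oid": 11, "total": 15.5}]
customers = {10: {"oid": 10, "name": "Ada"},
             11: {"oid": 11, "name": "Bob"}}

def hash_join_by_value(orders, customers):
    """Value-based: build a hash table on the join attribute, then probe it."""
    table = {c["oid"]: c for c in customers.values()}   # build phase
    return [(o, table[o["cust_oid"]]) for o in orders   # probe phase
            if o["cust_oid"] in table]

def pointer_join(orders, customers):
    """Pointer-based: dereference the stored OID directly (no hash build)."""
    return [(o, customers[o["cust_oid"]]) for o in orders]

assert hash_join_by_value(orders, customers) == pointer_join(orders, customers)
```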


Applying XBRL in an Accounting Information System Design Using the REA Approach: An Instructional Case

ACCOUNTING PERSPECTIVES, Issue 1 2010
JACOB PENG
Abstract The Church in Somewhere (CIS) is a small community church which uses an Excel spreadsheet to keep its financial records. The church administrator is considering moving from a spreadsheet accounting system to a relational database system that can easily be expanded to include more information in the future. In this paper we examine the transformation process in this hypothetical case by following a resource-event-agent (REA) modeling paradigm to create a database. We then link the REA model to financial reporting using Microsoft Access. In addition, using the financial report in the database, students prepare and validate an eXtensible Business Reporting Language (XBRL) document for CIS. Instead of applying the complex U.S. Generally Accepted Accounting Principles (GAAP) Taxonomies, Release 2009, the case uses a dedicated CIS Taxonomy to complete the mapping and tagging processes. [source]
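
For readers unfamiliar with REA modeling, the following sketch shows the flavor of the resource-event-agent tables the case asks students to build, using SQLite in place of Microsoft Access. The table layout, column names, and sample data are assumptions, not the case solution.

```python
# A minimal sketch of an REA-style relational schema (table and column names
# are assumptions), using SQLite in place of Microsoft Access.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT);          -- e.g. Cash
CREATE TABLE agent    (id INTEGER PRIMARY KEY, name TEXT, role TEXT);
CREATE TABLE event    (id INTEGER PRIMARY KEY, kind TEXT, date TEXT,
                       amount REAL,
                       resource_id INTEGER REFERENCES resource(id),
                       agent_id    INTEGER REFERENCES agent(id));
""")
con.execute("INSERT INTO resource VALUES (1, 'Cash')")
con.execute("INSERT INTO agent VALUES (1, 'Member A', 'donor')")
con.execute("INSERT INTO event VALUES (1, 'Donation received', '2010-01-03', 250.0, 1, 1)")

# A simple financial-report query, the kind of figure later tagged into an XBRL instance.
total = con.execute(
    "SELECT SUM(amount) FROM event WHERE kind = 'Donation received'").fetchone()[0]
print(f"Total donations: {total}")
```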


Handling indefinite and maybe information in logical fuzzy relational databases

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 3 2004
Nan-Chen Hsieh
In this article, fuzzy set theory is used to extend the classical logical relational database model. A logical fuzzy relational database model was developed with the aim of manipulating imprecise information and adding deduction capabilities to the database system. The essence of this work is the detailed discussion of fuzzy definite, fuzzy indefinite, and fuzzy maybe information and the development of an information-theoretic approach to query evaluation on the logical fuzzy relational database. We define redundancies among fuzzy tuples and an operator for their removal. A complete set of fuzzy relational operations in relational algebra and the calculus of linguistically quantified propositions is also included. © 2004 Wiley Periodicals, Inc. [source]
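
A minimal sketch of fuzzy tuples and redundancy removal is given below, assuming the common convention that duplicate tuples are merged by keeping the maximum membership degree; the article's formal operator may differ in its details.

```python
# Illustrative sketch of fuzzy tuples and redundancy removal; the merge rule
# (max of membership degrees) is an assumed convention.
def remove_redundancy(fuzzy_relation):
    """fuzzy_relation: iterable of (attribute_tuple, membership_degree)."""
    merged = {}
    for attrs, mu in fuzzy_relation:
        merged[attrs] = max(mu, merged.get(attrs, 0.0))
    return [(attrs, mu) for attrs, mu in merged.items()]

r = [(("Lee", "tall"),   0.7),
     (("Lee", "tall"),   0.4),   # redundant: same attribute values
     (("Kim", "medium"), 0.9)]
print(remove_redundancy(r))      # [(('Lee', 'tall'), 0.7), (('Kim', 'medium'), 0.9)]
```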


A strategy for adding fuzzy types to an object-oriented database system

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2001
N. Marín
Fuzzy types have been developed as a new way of managing fuzzy structures. With types of this kind, properties are ordered on different levels of precision or amplitude, according to their relationship with the concept represented by the type. In order to implement this new tool, two different strategies can be followed. On the one hand, a new system incorporating fuzzy types as an intrinsic capability can be developed. On the other hand, a new layer that implements fuzzy types can be added to an existing object-oriented database system (OODB). This paper shows how the typical classes of an OODB can be used to represent a fuzzy type and how the mechanisms of instantiation and inheritance can be modeled using this new kind of type in an OODB. © 2001 John Wiley & Sons, Inc. [source]
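
The layered idea behind fuzzy types can be sketched as follows: properties are grouped into levels of precision, and an instance need only populate the levels up to its own degree of definition. The class structure and names are assumptions for illustration, not the paper's formal model.

```python
# A rough sketch of the layered idea behind fuzzy types; names and structure
# are assumptions, not the paper's formal model.
class FuzzyType:
    def __init__(self, name, levels):
        self.name = name
        self.levels = levels          # list[set[str]], most basic level first

    def required_properties(self, level: int) -> set[str]:
        """Properties an instance defined up to `level` must provide."""
        props = set()
        for lvl in self.levels[: level + 1]:
            props |= lvl
        return props

building = FuzzyType("Building", [
    {"address"},                      # level 0: core, always present
    {"floors", "height"},             # level 1: more precise description
    {"facade_material"},              # level 2: most specific
])
print(building.required_properties(1))   # {'address', 'floors', 'height'}
```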


Pharmacoepidemiologic study of potential drug interactions in outpatients of a university hospital in Thailand

JOURNAL OF CLINICAL PHARMACY & THERAPEUTICS, Issue 1 2005
B. Janchawee PhD
Summary Background: Drug–drug interaction is a potential cause of adverse drug reactions. The incidence of such drug interactions in university hospitals in Thailand is unknown. Purpose: To estimate the rate of potential drug–drug interactions in outpatients of a typical Thai university hospital, and to identify risk factors for such interactions in Thai patients. Methods: One year of outpatients' prescription data were retrieved from the hospital computer records. Potential drug interactions were identified using the existing drug-interaction database system. Potential interactions within a specific prescription, and involving drugs prescribed 1, 3 and 7 days earlier, were searched for. Possible associations between occurrence of an interaction and a patient's age and gender and the number of items on the prescription were explored. Results: The overall rate of potential drug interactions was 27·9%, with a maximal value of 57·8% at the Department of Psychiatry. The rate of the most potentially significant interactions was 2·6%, being highest in the Department of Medicine (6·0%), with isoniazid vs. rifampin as the most common interacting combination. The rate increased with the patient's age and prescription size (P = 0·000). The odds ratio of having at least one potential drug interaction was 1·8 (64·2%) when age increased by 20 years (P = 0·000) and 2·8 (165·7%) when another drug was added (P = 0·000). The rate of potential drug interactions was the same for both genders. The rate of potential drug interactions detected across prescriptions was higher than within prescriptions and was dependent on the time interval between prescriptions. Conclusions: Potential drug interactions were common in our sample of patients. The rate of such interactions increased with the number of drugs prescribed and the patient's age. [source]
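
The screening step such a drug-interaction database system performs can be pictured with the toy sketch below, which checks each prescription against earlier prescriptions for the same patient within a look-back window. The interaction table, records, and window handling are illustrative assumptions, not the hospital's actual system.

```python
# A simplified sketch of interaction screening; drug pairs, records and the
# window handling are illustrative assumptions only.
from datetime import date, timedelta

INTERACTIONS = {frozenset({"isoniazid", "rifampin"}),
                frozenset({"warfarin", "aspirin"})}

prescriptions = [  # (patient_id, date, drug) -- hypothetical records
    ("P1", date(2004, 3, 1), "isoniazid"),
    ("P1", date(2004, 3, 5), "rifampin"),
    ("P2", date(2004, 3, 2), "aspirin"),
]

def potential_interactions(rows, lookback_days=7):
    hits = []
    for i, (pid, d, drug) in enumerate(rows):
        for pid2, d2, drug2 in rows[:i]:
            same_patient = pid == pid2
            in_window = timedelta(0) <= d - d2 <= timedelta(days=lookback_days)
            if same_patient and in_window and frozenset({drug, drug2}) in INTERACTIONS:
                hits.append((pid, drug2, drug, (d - d2).days))
    return hits

print(potential_interactions(prescriptions))  # [('P1', 'isoniazid', 'rifampin', 4)]
```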


BrainProfileDB – a platform for integration of functional genomics data

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2008
Johannes Schuchhardt Dr.
Abstract BrainProfileDB is a database system for integrating large sets of high-throughput functional genomics data of the Human Brain Proteome Project (HBPP). Within HBPP (http://www.smp-proteomics.de/) the molecular pathology of neurodegenerative diseases is investigated, using complementary methods from transcriptomics, proteomics, toponomics and interaction measurements. The aim of the database system is to provide the broad spectrum of scientific users joined in the consortium with a practical, integrated view of their data. Employing appropriate mapping techniques and levels of data representation, the system relieves the user of the technical details of gene identification or experimental measurement technique. [source]


Systematic interpretation of cyclic nucleotide binding studies using KinetXBase

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2008
Sonja Schweinsberg
Abstract Functional proteomics aims to describe cellular protein networks in depth based on the quantification of molecular interactions. In order to study the interaction of adenosine-3′,5′-cyclic monophosphate (cAMP), a general second messenger involved in several intracellular signalling networks, with one of its respective target proteins, the regulatory (R) subunit of cAMP-dependent protein kinase (PKA), a number of different methods were employed. These include fluorescence polarisation (FP), isothermal titration calorimetry (ITC), surface plasmon resonance (SPR), amplified luminescence proximity homogeneous assay (ALPHA-screen), radioligand binding or activity-based assays. Kinetic, thermodynamic and equilibrium binding data of a variety of cAMP derivatives to several cAMP binding domains were integrated in a single database system, which we call KinetXBase, allowing for very distinct data formats. KinetXBase is a practical data handling system for molecular interaction data of any kind, providing a synopsis of data derived from different technologies. This supports ongoing efforts in the bioinformatics community to devise formal concepts for a unified representation of interaction data, in order to enable their exchange and easy comparison. KinetXBase was applied here to analyse complex cAMP binding data, and highly site-specific cAMP analogues could be identified. The software package is free for download by academic users. [source]


SPLASH: Systematic proteomics laboratory analysis and storage hub

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2006
Siaw Ling Lo
Abstract In the field of proteomics, the increasing difficulty of unifying the data format, due to the different platforms/instrumentation and laboratory documentation systems, greatly hinders experimental data verification, exchange, and comparison. Therefore, it is essential to establish standard formats for every necessary aspect of proteomics data. One of the recently published data models is the proteomics experiment data repository [Taylor, C. F., Paton, N. W., Garwood, K. L., Kirby, P. D. et al., Nat. Biotechnol. 2003, 21, 247–254]. Compliant with this format, we developed the systematic proteomics laboratory analysis and storage hub (SPLASH) database system as an informatics infrastructure to support proteomics studies. It consists of three modules and provides proteomics researchers with a common platform to store, manage, search, analyze, and exchange their data. (i) Data maintenance includes experimental data entry and update, uploading of experimental results in batch mode, and data exchange in the original PEDRo format. (ii) The data search module provides several means to search the database and to view either the protein information or the differential expression display by clicking on a gel image. (iii) The data mining module contains tools that perform biochemical pathway, statistics-associated gene ontology, and other comparative analyses for all the sample sets to interpret their biological meaning. These features make SPLASH a practical and powerful tool for the proteomics community. [source]


An automated image-collection system for crystallization experiments using SBS standard microplates

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 2 2007
Erik Brostromer
As part of a structural genomics platform in a university laboratory, a low-cost, in-house-developed automated imaging system for SBS microplate experiments has been designed and constructed. The imaging system can scan a 96-well microplate in 2–6 min, depending on the plate layout and scanning options. A web-based crystallization database system has been developed, enabling users to follow their crystallization experiments from a web browser. As the system has been designed and built by students and crystallographers using commercially available parts, this report is intended to serve as a do-it-yourself example for laboratory robotics. [source]


Outcome of term breech births: 10-year experience at a district general hospital

BJOG : AN INTERNATIONAL JOURNAL OF OBSTETRICS & GYNAECOLOGY, Issue 2 2005
Poonam Pradhan
Objective To review the short- and long-term outcomes among singleton infants with breech presentation at term delivered in a geographically defined population over a 10-year period. Design Retrospective, cohort study. Setting District General Hospital. Population 1433 term breech infants alive at the onset of labour and born between January 1991 and December 2000. Methods Data abstracted from birth registers, neonatal discharge summaries and the child health database system were used to compare the short- and long-term outcomes of singleton term breech infants born by two different modes of delivery (prelabour caesarean section and vaginal or caesarean section in labour). Fisher's exact test was used to compare the categorical variables. Main outcome measures Short-term outcomes: perinatal mortality, Apgar scores, admission to the neonatal unit, birth trauma and neonatal convulsions. Long-term outcomes: deaths during infancy, cerebral palsy, long-term morbidity (development of special needs and special educational needs). Results Of 1433 singleton term infants in breech presentation at onset of labour, 881 (61.5%) were delivered vaginally or by caesarean section in labour and 552 (38.5%) were born by prelabour caesarean section. There were three (0.3%) non-malformed perinatal deaths among infants born by vaginal delivery or caesarean section in labour compared with none in the prelabour caesarean section cohort. Compared with infants born by prelabour caesarean section, those delivered vaginally or by caesarean section in labour were significantly more likely to have low 5-minute Apgar scores (0.9% vs 5.9%, P < 0.0001) and require admission to the neonatal unit (1.6% vs 4%, P = 0.0119). However, there was no significant difference in the long-term morbidity between the two groups (5.3% in the vaginal/caesarean section in labour group vs 3.8% in the prelabour caesarean group, P = 0.26); no difference in rates of cerebral palsy; and none of the eight infant deaths were related to the mode of delivery. Conclusions Vaginal breech delivery or caesarean section in labour was associated with a small but unequivocal increase in short-term mortality and morbidity. However, the long-term outcome was not influenced by the mode of delivery. [source]


A set-oriented method definition language for object databases and its semantics

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2003
Elisa Bertino
Abstract In this paper we propose a set-oriented rule-based method definition language for object-oriented databases. Most existing object-oriented database systems exploit a general-purpose imperative object-oriented programming language as the method definition language. Because methods are written in a general-purpose imperative language, it is difficult to analyze their properties and to optimize them. Optimization is important when dealing with a large amount of objects as in databases. We therefore believe that the use of an ad hoc, set-oriented language can offer some advantages, at least at the specification level. In particular, such a language can offer an appropriate framework to reason about method properties. In this paper, besides defining a set-oriented rule-based language for method definition, we formally define its semantics, addressing the problems of inconsistency and non-determinism in set-oriented updates. Moreover, we characterize some relevant properties of methods, such as conflicts among method specifications in sibling classes and behavioral refinement in subclasses. Copyright © 2003 John Wiley & Sons, Ltd. [source]
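
The contrast between object-at-a-time imperative methods and a set-oriented rule can be sketched as follows; the rule syntax is invented for illustration, and the paper's formal semantics for inconsistency and non-determinism is not reproduced.

```python
# A small sketch contrasting the two styles. The rule is just a
# (condition, update) pair evaluated over a whole set of objects.
import copy

employees = [{"name": "Ann", "dept": "R&D", "salary": 100},
             {"name": "Bo",  "dept": "HR",  "salary": 90}]

# Imperative, object-at-a-time style (what a general-purpose OO language gives you):
imperative = copy.deepcopy(employees)
for e in imperative:
    if e["dept"] == "R&D":
        e["salary"] += 10

# Set-oriented style: the condition is evaluated against the old state of the
# whole set, then the update is applied to every selected object at once.
def apply_rule(objects, condition, update):
    selected = [o for o in objects if condition(o)]
    for o in selected:
        update(o)
    return len(selected)

set_oriented = copy.deepcopy(employees)
apply_rule(set_oriented,
           lambda e: e["dept"] == "R&D",
           lambda e: e.update(salary=e["salary"] + 10))
assert imperative == set_oriented
```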


Modeling Network Latency and Parallel Processing in Distributed Database Design

DECISION SCIENCES, Issue 4 2003
Jesper M. Johansson
ABSTRACT The design of responsive distributed database systems is a key concern for information systems managers. In high-bandwidth networks, latency and local processing are the most significant factors in query and update response time. Parallel processing can be used to minimize their effects, particularly if it is considered at design time. It is the judicious replication and placement of data within a network that enable parallelism to be used effectively. However, latency and parallel processing have largely been ignored in previous distributed database design approaches. We present a comprehensive approach to distributed database design that develops efficient combinations of data allocation and query processing strategies that take full advantage of parallelism. We use a genetic algorithm to enable the simultaneous optimization of data allocation and query processing strategies. We demonstrate that ignoring the effects of latency and parallelism at design time can result in the selection of unresponsive distributed database designs. [source]
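
A toy version of the search idea, a chromosome assigning each fragment to a site and a fitness function that rewards parallel local processing while penalizing latency for remote accesses, might look like the sketch below. The cost model, data, and GA parameters are assumptions, not the paper's formulation.

```python
# Toy genetic-algorithm sketch for data allocation; cost numbers and GA
# parameters are assumptions, not the paper's model.
import random

SITES, FRAGMENTS = 3, 6
LATENCY = 5.0                      # fixed cost per remote site touched, arbitrary units
QUERIES = [(0, [0, 1, 2]), (1, [3, 4]), (2, [4, 5])]   # (originating site, fragments used)

def response_time(alloc):
    total = 0.0
    for origin, frags in QUERIES:
        per_site = {}
        for f in frags:
            per_site[alloc[f]] = per_site.get(alloc[f], 0) + 1.0   # local work per site
        remote = sum(LATENCY for s in per_site if s != origin)
        total += max(per_site.values()) + remote   # parallel sites overlap; latency adds
    return total

def evolve(pop_size=30, generations=200):
    pop = [[random.randrange(SITES) for _ in range(FRAGMENTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=response_time)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, FRAGMENTS)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.1:                  # mutation
                child[random.randrange(FRAGMENTS)] = random.randrange(SITES)
            children.append(child)
        pop = parents + children
    return min(pop, key=response_time)

best = evolve()
print(best, response_time(best))
```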


Toward a Continuous Quality Improvement Paradigm for Hemodialysis Providers with Preliminary Suggestions for Clinical Practice Monitoring and Measurement

HEMODIALYSIS INTERNATIONAL, Issue 1 2003
Edmund G. Lowrie
Background: Consensus processes using the clinical literature as the primary source for information generally drive projects to draft clinical practice guidelines (CPGs). Most such literature citations describe special projects that are not part of an organized quality management initiative, and the publication/review/consensus process tends to be long. This project describes an initiative to develop and explore a flexible and dedicated data-driven paradigm for deciding new CPGs that could be rapidly responsive to changing medical knowledge and practice. Methods: Candidate Clinical Practice Monitoring Measures (CPMM) were selected using a large, national database according to the natures and strengths of their associations with mortality risk among patients during 1994. Thresholds above or below which risk of death increased were evaluated for each CPMM using risk profile charts and spline functions. The fractions of patients outside of those thresholds in each dialysis unit (the %Var) were determined for the years 1993, 1994, and 1995. A standardized mortality ratio (SMR) was also determined for each year for each facility. The associations between the %Var and SMR were evaluated in several single-variable and multivariable statistical models. Results: Eleven CPMM were selected and evaluated based on their associations with death risk. These included the urea clearance x dialysis time product (Kt); the concentrations of albumin, potassium, phosphate, bicarbonate, hemoglobin, neutrophils, and lymphocytes in the blood; the body weight/height ratio; diastolic blood pressure; and vascular access type. Even though the CPMM were strongly associated with death risk among patients, the %Var were weakly and inconsistently associated with SMR among facilities. Conclusions: The paradigm was flexible, easy to implement, quickly executed, and potentially able to accommodate evolving medical practice assuming the availability of large database systems such as this. The primary associates of death risk were easily identified and the thresholds easily adopted. The SMR and %Var from the CPMM were only weakly associated, however, suggesting that one cannot be reliably predicted from the other. As such, quality management programs should likely monitor both the processes and outcomes of care among dialysis facilities. [source]
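
The two facility-level measures can be illustrated with a short sketch that computes %Var against one assumed CPMM threshold and a crude SMR, then correlates them across facilities. The threshold, the sample data, and the use of a plain Pearson correlation are illustrative assumptions only.

```python
# Schematic sketch of %Var and SMR per facility; threshold, data and the
# Pearson correlation are illustrative assumptions. Requires Python 3.10+.
from statistics import correlation

ALBUMIN_THRESHOLD = 3.5   # g/dL; patients below this count toward %Var (assumed cut-off)

facilities = {   # facility -> (albumin values, observed deaths, expected deaths)
    "A": ([3.9, 3.2, 4.0, 3.1], 6, 5.0),
    "B": ([3.8, 3.7, 3.6, 3.9], 4, 5.5),
    "C": ([2.9, 3.0, 3.3, 3.8], 9, 6.0),
}

pct_var, smr = [], []
for albumins, observed, expected in facilities.values():
    pct_var.append(sum(a < ALBUMIN_THRESHOLD for a in albumins) / len(albumins) * 100)
    smr.append(observed / expected)

print(pct_var, smr, round(correlation(pct_var, smr), 2))
```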


Fuzzy extensions for relationships in a generalized object model

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2001
Valerie V. Cross
Numerous approaches for introducing and managing uncertainty in object-oriented models have been proposed. This paper examines various semantics of uncertainty and their interaction with three kinds of relationships inherent to object models: instance-of, a-kind-of, and category. A generalized object model incorporating the perspectives of semantic data modeling, artificial intelligence, and database systems is the basis for the recommendations for fuzzy extensions to these three kinds of relationships. © 2001 John Wiley & Sons, Inc. [source]
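
One way to attach membership degrees to the instance-of, a-kind-of, and category relationships is sketched below; the min-based composition rule is a common fuzzy convention and is assumed here rather than taken from the paper.

```python
# Illustrative sketch of membership degrees on object-model relationships;
# the min composition along the hierarchy is an assumed convention.
INSTANCE_OF = {("sparrow_42", "Sparrow"): 1.0,
               ("penguin_7",  "FlyingBird"): 0.2}   # weak instance-of
A_KIND_OF   = {("Sparrow", "Bird"): 1.0,
               ("FlyingBird", "Bird"): 0.9}

def degree_instance_of(obj, target_class):
    """Degree that obj is an instance of target_class, possibly via a-kind-of."""
    best = INSTANCE_OF.get((obj, target_class), 0.0)
    for (cls, sup), d_kind in A_KIND_OF.items():
        if sup == target_class:
            d_inst = INSTANCE_OF.get((obj, cls), 0.0)
            best = max(best, min(d_inst, d_kind))   # compose along the hierarchy
    return best

print(degree_instance_of("sparrow_42", "Bird"))   # 1.0
print(degree_instance_of("penguin_7", "Bird"))    # 0.2
```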