Complex Data (complex + data)
Terms modified by Complex Data

Selected Abstracts

Nonparametric harmonic regression for estuarine water quality data
ENVIRONMETRICS, Issue 6 2010
Melanie A. Autin

Abstract
Periodicity is omnipresent in environmental time series data. For modeling estuarine water quality variables, harmonic regression analysis has long been the standard for dealing with periodicity. Generalized additive models (GAMs) allow more flexibility in the response function: they permit parametric, semiparametric, and nonparametric regression functions of the predictor variables. We compare harmonic regression, GAMs with cubic regression splines, and GAMs with cyclic regression splines in simulations and using water quality data collected from the National Estuarine Research Reserve System (NERRS). While the classical harmonic regression model works well for clean, near-sinusoidal data, the GAMs are competitive and are very promising for more complex data. The generalized additive models are also more adaptive and require less intervention. Copyright © 2009 John Wiley & Sons, Ltd.

A new data model for XML databases
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2002
Richard Ho

Abstract
The widespread activity involving the Internet and the Web causes large amounts of electronic data to be generated every day. This includes, in particular, semi-structured textual data such as electronic documents, computer programs, log files, transaction records, literature citations, and emails. Storing and manipulating the data thus produced has proven difficult. As conventional DBMSs are not suitable for handling semi-structured data, there is a strong demand for systems that are capable of handling large volumes of complex data in an efficient and reliable way. The Extensible Markup Language (XML) provides such a solution.
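The kind of path-based flattening that makes semi-structured XML storable in a relational engine can be sketched in a few lines. This is a hypothetical illustration only; the paper's own 'vertical view model' is not specified here, and the `flatten` helper and the `xml_rows` table schema are assumptions.

```python
import sqlite3
import xml.etree.ElementTree as ET

def flatten(elem, path="", rows=None):
    """Recursively emit (path, value) rows for every element and attribute."""
    if rows is None:
        rows = []
    path = f"{path}/{elem.tag}"
    if elem.text and elem.text.strip():
        rows.append((path, elem.text.strip()))
    for key, value in elem.attrib.items():
        rows.append((f"{path}/@{key}", value))  # attributes get an @-suffixed path
    for child in elem:
        flatten(child, path, rows)
    return rows

doc = ET.fromstring("<order id='7'><item>widget</item><qty>3</qty></order>")
rows = flatten(doc)

# Store the flattened rows in a single relational table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xml_rows (path TEXT, value TEXT)")
con.executemany("INSERT INTO xml_rows VALUES (?, ?)", rows)
```

Every document, whatever its schema, lands in the same two-column table, which is what makes this style of mapping attractive for heterogeneous XML.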
In this paper, we present the concept of a 'vertical view model' and its uses as a mapping mechanism for converting complex XML data to relational database tables, and as a standalone data model for storing complex XML data. Copyright © 2003 John Wiley & Sons, Ltd.

Transcoding media for bandwidth constrained mobile devices
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 2 2005
Kevin Curran

Abstract
Bandwidth is an important consideration when dealing with streaming media. More bandwidth is required for complex data such as video than for a simple audio file. When delivering streaming media, sufficient bandwidth is required to achieve an acceptable level of performance. If the information streamed exceeds the bandwidth capacity of the client, the result will be 'choppy' and incomplete, with possible loss of transmission. Transcoding typically refers to the adaptation of streaming content. Typical transcoding scenarios exploit content negotiation between different formats to obtain the most favourable combination of requested quality and available resources. Media can be transcoded to a lesser quality or size upon encountering adverse bandwidth conditions, without the need to encode multiple versions of the same file at differing quality levels. This study investigates the capability of transcoding to cope with restrictions in client devices. In addition, the properties of transcoded media files are examined and evaluated to determine their applicability for streaming across a broad range of device types capable of receiving streaming media. Copyright © 2005 John Wiley & Sons, Ltd.
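The content-negotiation step described above, choosing an output quality that fits the client's measured bandwidth, can be sketched as a simple profile selection. The profile table, bitrates, and `select_profile` helper below are hypothetical, not taken from the study.

```python
# Hypothetical transcoding profiles: (label, video_kbps, audio_kbps)
PROFILES = [
    ("high",       1500, 128),
    ("medium",      700,  96),
    ("low",         300,  64),
    ("audio-only",    0,  48),
]

def select_profile(client_kbps, headroom=0.8):
    """Pick the richest profile whose total bitrate fits within a
    fraction (headroom) of the client's measured bandwidth."""
    budget = client_kbps * headroom
    for label, video_kbps, audio_kbps in PROFILES:
        if video_kbps + audio_kbps <= budget:
            return label
    return "audio-only"  # last-resort fallback when nothing fits
```

A server applying this logic transcodes once, on demand, to whichever profile the current connection can sustain, rather than pre-encoding every quality level.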
Comparison between two PCR-based bacterial identification methods through artificial neural network data analysis
JOURNAL OF CLINICAL LABORATORY ANALYSIS, Issue 1 2008
Jie Wen

Abstract
The 16S ribosomal ribonucleic acid (rRNA) and 16S-23S rRNA spacer region genes are commonly used as taxonomic and phylogenetic tools. In this study, two pairs of fluorescent-labeled primers for 16S rRNA genes and one pair of primers for 16S-23S rRNA spacer region genes were selected to amplify target sequences of 317 isolates from positive blood cultures. The polymerase chain reaction (PCR) products of both were then subjected to restriction fragment length polymorphism (RFLP) analysis by capillary electrophoresis after incomplete digestion by HaeIII. For products of 16S rRNA genes, single-strand conformation polymorphism (SSCP) analysis was also performed directly. When the data were processed by an artificial neural network (ANN), the accuracy of prediction based on 16S-23S rRNA spacer region gene RFLP data was much higher than that of prediction based on 16S rRNA gene SSCP data (98.0% vs. 79.6%). This study proved that using an ANN as a pattern recognition method is a valuable strategy for simplifying bacterial identification when relatively complex data are encountered. J. Clin. Lab. Anal. 22:14-20, 2008. © 2008 Wiley-Liss, Inc.

Magnitude image CSPAMM reconstruction (MICSR)
MAGNETIC RESONANCE IN MEDICINE, Issue 2 2003
Moriel NessAiver

Abstract
Image reconstruction of tagged cardiac MR images using complementary spatial modulation of magnetization (CSPAMM) requires the subtraction of two complex datasets to remove the untagged signal. Although the resultant images typically have sharper and more persistent tags than images formed without complementary tagging pulses, handling the complex data is problematic and tag contrast still degrades significantly during diastole.
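The complementary subtraction that CSPAMM relies on can be illustrated with a one-dimensional toy signal: two acquisitions carry the tag pattern with opposite sign, so their complex difference cancels the untagged component. The signal model, modulation depth, and receiver phase below are illustrative assumptions, not the paper's acquisition parameters.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 256)
untagged = 1.0                       # relaxed (untagged) signal component
tag = 0.4 * np.cos(4 * x)            # spatial tagging modulation
phase = np.exp(1j * 0.3)             # arbitrary receiver phase

# Complementary acquisitions: the tag pattern is inverted between them
s_a = (untagged + tag) * phase
s_b = (untagged - tag) * phase

# Complex subtraction cancels the untagged signal, isolating the tags
cspamm = s_a - s_b                   # equals 2 * tag * phase
```

The catch motivating the article is visible here: the isolated tag signal still carries an unknown phase, so working directly with the complex difference requires phase handling that magnitude-only pipelines avoid.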
This article presents a magnitude image CSPAMM reconstruction (MICSR) method that is simple to implement and produces images with improved contrast and tag persistence. The MICSR method uses only magnitude images (i.e., no complex data) but yields tags with zero-mean, sinusoidal profiles. A trinary display of MICSR images emphasizes their long tag persistence and demonstrates a novel way to visualize myocardial deformation. MICSR contrast and contrast-to-noise ratios (CNR) were evaluated using simulations, a phantom, and two normal volunteers. Tag contrast 1000 msec after the R-wave trigger was 3.0 times better with MICSR than with traditional CSPAMM reconstruction techniques, while CNRs were 2.0 times better. Magn Reson Med 50:331-342, 2003. © 2003 Wiley-Liss, Inc.

Bayesian methods for proteomics
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 16 2007
Gil Alterovitz

Abstract
Biological and medical data have been growing exponentially over the past several years [1, 2]. In particular, proteomics has seen automation dramatically change the rate at which data are generated [3]. Analysis that systematically incorporates prior information is becoming essential to making inferences about the myriad, complex data [4-6]. A Bayesian approach can help capture such information and incorporate it seamlessly through a rigorous, probabilistic framework. This paper starts with a review of the background mathematics behind the Bayesian methodology, from parameter estimation to Bayesian networks. The article then discusses how emerging Bayesian approaches have already been successfully applied to research across proteomics, a field for which Bayesian methods are particularly well suited [7-9]. After reviewing the literature on Bayesian methods in biological contexts, the article discusses some of the recent applications in proteomics and emerging directions in the field.
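As a minimal example of the parameter estimation such a review begins with, a conjugate Beta-binomial update shows how a prior belief is revised by observed data. The `beta_binomial_posterior` helper and the uniform prior are illustrative assumptions, not material from the paper.

```python
def beta_binomial_posterior(successes, trials, a_prior=1.0, b_prior=1.0):
    """Conjugate Bayesian update: a Beta(a, b) prior combined with
    binomial data yields a Beta(a + successes, b + failures) posterior."""
    a_post = a_prior + successes
    b_post = b_prior + (trials - successes)
    posterior_mean = a_post / (a_post + b_post)
    return a_post, b_post, posterior_mean

# E.g., 7 detections in 10 assays under a uniform Beta(1, 1) prior
a, b, mean = beta_binomial_posterior(7, 10)
```

With a uniform prior, 7 successes in 10 trials give a Beta(8, 4) posterior with mean 8/12, slightly shrunk toward 0.5 relative to the raw 0.7 proportion; this shrinkage is the "prior information" at work.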
Metabolomics: Current technologies and future trends
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 17 2006
Katherine Hollywood

Abstract
The ability to sequence whole genomes has taught us that our knowledge of gene function is rather limited, with typically 30-40% of open reading frames having no known function. Thus, within the life sciences there is a need to determine the biological function of these so-called orphan genes, some of which may be molecular targets for therapeutic intervention. The search for specific mRNAs, proteins, or metabolites that can serve as diagnostic markers has also increased, as these biomarkers may be useful in following and predicting disease progression or response to therapy. Functional analyses have become increasingly popular. They include investigations at the level of gene expression (transcriptomics), protein translation (proteomics), and, more recently, the metabolite network (metabolomics). This article provides an overview of metabolomics and discusses its complementary role with transcriptomics and proteomics, and within systems biology. It highlights how metabolome analyses are conducted and how the highly complex data that are generated are analysed. Non-invasive footprinting analysis is also discussed, as this has many applications to in vitro cell systems. Finally, for studying biotic or abiotic stresses on animals, plants, or microbes, we believe that metabolomics could easily be applied to large populations, because this approach tends to be of higher throughput and generally lower cost than transcriptomics and proteomics, whilst also indicating which areas of metabolism may be affected by external perturbation.
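Analysis of such highly complex metabolite data typically starts with an unsupervised projection to reveal sample groupings. Below is a minimal PCA sketch on a synthetic metabolite-intensity matrix; the matrix dimensions, the injected group effect, and the variable names are all assumptions for illustration, not data from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical intensity matrix: 20 samples x 50 metabolites
X = rng.normal(size=(20, 50))
X[:10, :5] += 3.0                    # first 10 samples elevated in 5 metabolites

# Mean-centre each metabolite, then compute PCA via the SVD
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # sample coordinates on the principal components
explained = s**2 / np.sum(s**2)      # variance fraction captured per component
```

Plotting the first two columns of `scores` would separate the two simulated groups, which is the usual first look at whether a perturbation has a metabolome-wide signature.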