Simulated


Terms modified by Simulated

  • simulated annealing
  • simulated annealing algorithm
  • simulated body fluid
  • simulated canal
  • simulated condition
  • simulated data
  • simulated data set
  • simulated distribution
  • simulated driving
  • simulated example
  • simulated grazing
  • simulated image
  • simulated maximum likelihood
  • simulated moving
  • simulated patient
  • simulated population
  • simulated precipitation
  • simulated radiation
  • simulated rainfall
  • simulated response
  • simulated result
  • simulated runoff
  • simulated scenario
  • simulated setting
  • simulated sunlight
  • simulated value

  Selected Abstracts


    Palaeomorphology: fossils and the inference of cladistic relationships

    ACTA ZOOLOGICA, Issue 1 2010
    Gregory D. Edgecombe
    Abstract Edgecombe, G.D. 2010. Palaeomorphology: fossils and the inference of cladistic relationships. Acta Zoologica (Stockholm) 91: 72–80. Twenty years have passed since it was empirically demonstrated that inclusion of extinct taxa could overturn a phylogenetic hypothesis formulated upon extant taxa alone, challenging Colin Patterson's bold conjecture that this phenomenon 'may be non-existent'. Suppositions and misconceptions about missing data, often couched in terms of 'wildcard taxa' and 'the missing data problem', continue to cloud the literature on the topic of fossils and phylogenetics. Comparisons of real data sets show that no a priori (or indeed a posteriori) decisions can be made about amounts of missing data and most properties of cladograms, and both simulated and real data sets demonstrate that even highly incomplete taxa can impact on relationships. The exclusion of fossils from phylogenetic analyses is neither theoretically nor empirically defensible. [source]


    Close range digital photogrammetric analysis of experimental drainage basin evolution

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 3 2003
    J. Brasington
    Abstract Despite the difficulties of establishing formal hydraulic and geometric similarity, small-scale models of drainage basins have often been used to investigate the evolution and dynamics of larger-scale landforms. Historically, this analysis has been restricted to planform basin characteristics and only in the last decade has the topographic similarity of experimental landscapes been explored through explicitly three-dimensional parameters such as the distributions of cumulative drainage area, area–slope and catchment elevation. The current emphasis on three-dimensional morphometry reflects a growing awareness of the descriptive paucity of planform data and the need for more robust analysis of spatial scaling relationships. This paradigm shift has been significantly facilitated by technological developments in topographic survey and digital elevation modelling (DEM) which now present the opportunity to acquire and analyse high-resolution, distributed elevation data. Few studies have, however, attempted to use topographic modelling to provide information on the changing pattern and rate of sediment transport through an evolving landscape directly by using multitemporal DEM differencing techniques. This paper reports a laboratory study in which digital photogrammetry was employed to derive high-resolution DEMs of a simulated landscape in declining equilibrium at 15 minute frequency through a 240 minute simulation. Detailed evaluation of the DEMs revealed a vertical precision of 1·2 mm and threshold level of change detection between surfaces of ±3 mm at the 95 per cent confidence level. This quality assurance set the limits for determining the volumetric change between surfaces, which was used to recover the sediment budget through the experiment and to examine local- and basin-scale rates of sediment transport. A comparison of directly observed and morphometric estimates of sediment yield at the basin outlet was used to quantify the closure of the sediment budget over the simulation, and revealed an encouragingly small 6·2 per cent error. The application of this dynamic morphological approach has the potential to offer new insights into the controls on landform development, as demonstrated here by an analysis of the changing pattern of the basin sediment delivery ratio during network growth. Copyright © 2003 John Wiley & Sons, Ltd. [source]
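
The core morphometric step described above, differencing successive DEMs and integrating only the changes that exceed the detection threshold, can be illustrated with a short sketch. The grid size, cell size and synthetic surfaces below are illustrative assumptions, not the study's data; only the ±3 mm threshold is taken from the abstract.

```python
# A minimal sketch (not the authors' code) of DEM differencing for a sediment budget:
# subtract successive DEMs, mask changes below the detection threshold, and integrate
# the remaining cells to erosion and deposition volumes.
import numpy as np

cell_area = 0.01 * 0.01          # assumed 1 cm grid spacing, in m^2
lod = 0.003                      # +/- 3 mm threshold of change detection (from the abstract)

rng = np.random.default_rng(0)
dem_t0 = rng.normal(0.50, 0.02, size=(200, 200))               # synthetic surface, metres
dem_t1 = dem_t0 - rng.normal(0.002, 0.003, size=dem_t0.shape)  # mostly lowered surface

dod = dem_t1 - dem_t0                                # DEM of difference (positive = deposition)
dod_masked = np.where(np.abs(dod) > lod, dod, 0.0)   # keep only detectable change

erosion_volume = -dod_masked[dod_masked < 0].sum() * cell_area
deposition_volume = dod_masked[dod_masked > 0].sum() * cell_area
net_export = erosion_volume - deposition_volume      # sediment leaving the basin

print(f"erosion {erosion_volume:.4f} m^3, deposition {deposition_volume:.4f} m^3, "
      f"net export {net_export:.4f} m^3")
```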


    Semi-empirical model for site effects on acceleration time histories at soft-soil sites. Part 2: calibration

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 13 2004
    Abstract A previously developed simplified model of ground motion amplification is applied to the simulation of acceleration time histories at several soft-soil sites in the Valley of Mexico, on the basis of the corresponding records on firm ground. The main objective is to assess the ability of the model to reproduce characteristics such as effective duration, frequency content and instantaneous intensity. The model is based on the identification of a number of parameters that characterize the complex firm-ground to soft-soil transfer function, and on the adjustment of these parameters in order to account for non-linear soil behavior. Once the adjusted model parameters are introduced, the statistical properties of the simulated and the recorded ground motions agree reasonably well. For the sites and for the seismic events considered in this study, it is concluded that non-linear soil behavior may have a significant effect on the amplification of ground motion. The non-linear soil behavior significantly affects the effective ground motion duration for the components with the higher intensities, but it does not have any noticeable influence on the lengthening of the dominant ground period. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Variable smoothing in Bayesian intrinsic autoregressions

    ENVIRONMETRICS, Issue 8 2007
    Mark J. Brewer
    Abstract We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Finding starting points for Markov chain Monte Carlo analysis of genetic data from large and complex pedigrees

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    Yuqun Luo
    Abstract Genetic data from founder populations are advantageous for studies of complex traits that are often plagued by the problem of genetic heterogeneity. However, the desire to analyze large and complex pedigrees that often arise from such populations, coupled with the need to handle many linked and highly polymorphic loci simultaneously, poses challenges to current standard approaches. A viable alternative to solving such problems is via Markov chain Monte Carlo (MCMC) procedures, where a Markov chain, defined on the state space of a latent variable (e.g., genotypic configuration or inheritance vector), is constructed. However, finding starting points for the Markov chains is a difficult problem when the pedigree is not single-locus peelable; methods proposed in the literature have not yielded completely satisfactory solutions. We propose a generalization of the heated Gibbs sampler with relaxed penetrances (HGRP) of Lin et al. ([1993] IMA J. Math. Appl. Med. Biol. 10:1–17) to search for starting points. HGRP guarantees that a starting point will be found if there is no error in the data, but the chain usually needs to be run for a long time if the pedigree is extremely large and complex. By introducing a forcing step, the current algorithm substantially reduces the state space, and hence effectively speeds up the process of finding a starting point. Our algorithm also has a built-in preprocessing procedure for Mendelian error detection. The algorithm has been applied to both simulated and real data on two large and complex Hutterite pedigrees under many settings, and good results are obtained. The algorithm has been implemented in a user-friendly package called START. Genet Epidemiol 25:14–24, 2003. © 2003 Wiley-Liss, Inc. [source]


    A Surface-Based Approach to Measuring Spatial Segregation

    GEOGRAPHICAL ANALYSIS, Issue 2 2007
    David O'Sullivan
    Quantitative indices of residential segregation have been with us for half a century, but suffer significant limitations. While useful for comparison among regions, summary indices fail to reveal spatial aspects of segregation. Such measures generally consider only the population mix within zones, not between them. Zone boundaries are treated as impenetrable barriers to interaction between population subgroups, so that measurement of segregation is constrained by the zoning system, which bears no necessary relation to interaction among population subgroups. A segregation measurement approach less constrained by the chosen zoning system, which enables visualization of segregation levels at the local scale and accounts for the spatial dimension of segregation, is required. We propose a kernel density estimation approach to model spatial aspects of segregation. This provides an explicitly geographical framework for modeling and visualizing local spatial segregation. The density estimation approach lends itself to development of an index of spatial segregation with the advantage of functional compatibility with the most widely used index of segregation (the dissimilarity index D). We provide a short review of the literature on measuring segregation, briefly describe the kernel density estimation method, and illustrate how the method can be used for measuring segregation. Examples using a simulated landscape and two empirical cases in Washington, DC and Philadelphia, PA are presented. [source]
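
A rough sketch of the kernel-density idea follows: smooth each group's point locations into a density surface, normalise the surfaces, and take half the summed absolute difference as a spatial analogue of the dissimilarity index D. The point patterns, grid and bandwidth choices here are assumptions for illustration only, not the authors' implementation.

```python
# A minimal sketch of a kernel-density analogue of the dissimilarity index D.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
group_a = rng.normal([2.0, 2.0], 0.8, size=(300, 2)).T   # (x, y) points, shape (2, n)
group_b = rng.normal([4.0, 4.0], 0.8, size=(300, 2)).T

xs, ys = np.meshgrid(np.linspace(0, 6, 80), np.linspace(0, 6, 80))
grid = np.vstack([xs.ravel(), ys.ravel()])

dens_a = gaussian_kde(group_a)(grid)     # smoothed density surface for group A
dens_b = gaussian_kde(group_b)(grid)
p_a = dens_a / dens_a.sum()              # each surface normalised to sum to 1
p_b = dens_b / dens_b.sum()

spatial_d = 0.5 * np.abs(p_a - p_b).sum()   # 0 = identical surfaces, 1 = complete segregation
print(f"kernel-based dissimilarity index: {spatial_d:.3f}")
```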


    The design of an optimal filter for monthly GRACE gravity models

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2008
    R. Klees
    SUMMARY Most applications of the publicly released Gravity Recovery and Climate Experiment monthly gravity field models require the application of a spatial filter to help suppress noise and other systematic errors present in the data. The most common approach makes use of a simple Gaussian averaging process, which is often combined with a 'destriping' technique in which coefficient correlations within a given degree are removed. As brute force methods, neither of these techniques takes into consideration the statistical information from the gravity solution itself and, while they perform well overall, they can often end up removing more signal than necessary. Other optimal filters have been proposed in the literature; however, none have attempted to make full use of all information available from the monthly solutions. By examining the underlying principles of filter design, a filter has been developed that incorporates the noise and full signal variance–covariance matrix to tailor the filter to the error characteristics of a particular monthly solution. The filter is both anisotropic and non-symmetric, meaning it can accommodate noise of an arbitrary shape, such as the characteristic stripes. The filter minimizes the mean-square error and, in this sense, can be considered optimal. Through both simulated and real data scenarios, this improved filter will be shown to preserve the highest amount of gravity signal when compared to other standard techniques, while simultaneously minimizing leakage effects and producing smooth solutions in areas of low signal. [source]
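
The filter described is, in spirit, a minimum-mean-square-error (Wiener-type) combination of a signal covariance and a noise covariance. The sketch below illustrates that principle on a small coefficient vector with assumed covariances; it is not the authors' filter or the GRACE processing chain.

```python
# A minimal sketch: given a signal covariance S and a noise covariance N for a vector
# of coefficients x, the MMSE-filtered coefficients are W x with W = S (S + N)^{-1}.
import numpy as np

rng = np.random.default_rng(2)
n = 50                                          # small illustrative coefficient vector

signal_cov = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)   # assumed smooth signal spectrum
a = rng.normal(size=(n, n))
noise_cov = 0.01 * (a @ a.T / n + np.eye(n))             # assumed correlated ("stripy") noise

truth = rng.multivariate_normal(np.zeros(n), signal_cov)
observed = truth + rng.multivariate_normal(np.zeros(n), noise_cov)

w = signal_cov @ np.linalg.inv(signal_cov + noise_cov)   # anisotropic, non-symmetric in general
filtered = w @ observed

print("rms error unfiltered:", np.sqrt(np.mean((observed - truth) ** 2)))
print("rms error filtered:  ", np.sqrt(np.mean((filtered - truth) ** 2)))
```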


    Role of land cover changes for atmospheric CO2 increase and climate change during the last 150 years

    GLOBAL CHANGE BIOLOGY, Issue 8 2004
    Victor Brovkin
    Abstract We assess the role of changing natural (volcanic, aerosol, insolation) and anthropogenic (CO2 emissions, land cover) forcings on the global climate system over the last 150 years using an earth system model of intermediate complexity, CLIMBER-2. We apply several datasets of historical land-use reconstructions: the cropland dataset by Ramankutty & Foley (1999) (R&F), the HYDE land cover dataset of Klein Goldewijk (2001), and the land-use emissions data from Houghton & Hackler (2002). Comparisons between the simulated and observed temporal evolution of atmospheric CO2 and δ13CO2 are used to evaluate these datasets. To check model uncertainty, CLIMBER-2 was coupled to the more complex Lund–Potsdam–Jena (LPJ) dynamic global vegetation model. In the simulation with the R&F dataset, biogeophysical mechanisms due to land cover changes tend to decrease global air temperature by 0.26°C, while biogeochemical mechanisms act to warm the climate by 0.18°C. The net effect on climate is negligible on a global scale, but pronounced over the land in the temperate and high northern latitudes, where a cooling due to an increase in land surface albedo offsets the warming due to land-use CO2 emissions. Land cover changes led to estimated increases in atmospheric CO2 of between 22 and 43 ppmv. Over the entire period 1800–2000, simulated δ13CO2 with HYDE compares most favourably with ice core data during 1850–1950 and with Cape Grim data, indicating a preference for the earlier land clearance in HYDE over R&F. In relative terms, land cover forcing corresponds to 25–49% of the observed growth in atmospheric CO2. This contribution declined from 36–60% during 1850–1960 to 4–35% during 1960–2000. CLIMBER-2-LPJ simulates the land cover contribution to atmospheric CO2 growth to decrease from 68% during 1900–1960 to 12% in the 1980s. Overall, our simulations show a decline in the relative role of land cover changes for atmospheric CO2 increase during the last 150 years. [source]


    Process-oriented catchment modelling and multiple-response validation

    HYDROLOGICAL PROCESSES, Issue 2 2002
    S. Uhlenbrook
    Abstract The conceptual rainfall runoff model TAC (tracer-aided catchment model) has been developed based on the experimental results of tracer hydrological investigations at the mountainous Brugga and Zastler basins (40 and 18·4 km2). The model contains a physically realistic description of the runoff generation, which includes seven unit types, each with characteristic dominating runoff generation processes. These processes are conceptualized by different linear and non-linear reservoir concepts. The model is applied to a period of 3·2 years on a daily time step with good success. In addition, an extensive model validation procedure was executed. To this end, additional information (i.e. runoff in subbasins and a neighbouring basin, tracer concentrations and calculated runoff components) was used besides the simulated discharge of the basin investigated. This study shows the potential of tracer data for hydrological modelling. On the one hand, they are good tools to investigate the runoff generation processes. This is the basis for developing more realistic conceptualizations of the runoff generation routine. On the other hand, tracer data can serve as multi-response data to assess and validate a model. Copyright © 2002 John Wiley & Sons, Ltd. [source]
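
The reservoir concepts mentioned can be illustrated with a single linear reservoir, the simplest of the building blocks the abstract refers to; the recharge series and parameter values below are assumptions for illustration, not TAC parameters.

```python
# A minimal sketch of a linear reservoir on a daily step: storage S drains as Q = S / k.
def linear_reservoir(recharge_mm, k_days=25.0, storage0_mm=10.0):
    """Return daily outflow (mm/day) for a daily recharge series (mm/day)."""
    storage = storage0_mm
    outflow = []
    for r in recharge_mm:
        storage += r                 # add today's recharge
        q = storage / k_days         # linear outflow law Q = S / k
        storage -= q
        outflow.append(q)
    return outflow

q = linear_reservoir([5.0, 0.0, 0.0, 12.0, 0.0, 0.0, 0.0])
print([round(v, 2) for v in q])      # recession between recharge pulses
```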


    Finite element modelling of geared multi-body system

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2002
    Yong Wang
    Abstract A dynamic model for a geared multi-body system containing gears, bars and shafts is proposed, and a new gear element is specifically developed based on finite element theory. The gear element can take into account time-variant meshing stiffness, gear errors, and the couplings between the torsional and the lateral vibrations of gears. The accuracy and reliability of the gear element are confirmed by comparing the simulated with the experimental results of rotational vibration accelerations. A gear–bar mechanism comprising one sun gear, one planetary gear and two bars is simulated dynamically. The influences of non-uniform gear speed, time-variant meshing stiffness and bar stiffness on the dynamic behaviours are investigated. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Direct numerical simulation of low Reynolds number flows in an open-channel with sidewalls

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2010
    Younghoon Joung
    Abstract A direct numerical simulation of low Reynolds number turbulent flows in an open-channel with sidewalls is presented. Mean flow and turbulence structures are described and compared with both simulated and measured data available from the literature. The simulation results show that secondary flows are generated near the walls and free surface. In particular, at the upper corner of the channel, a small vortex called inner secondary flows is simulated. The results show that the inner secondary flows, counter-rotating to outer secondary flows away from the sidewall, increase the shear velocity near the free surface. The secondary flows observed in turbulent open-channel flows are related to the production of Reynolds shear stress. A quadrant analysis shows that sweeps and ejections are dominant in the regions where secondary flows rush in toward the wall and eject from the wall, respectively. A conditional quadrant analysis also reveals that the production of Reynolds shear stress and the secondary flow patterns are determined by the directional tendency of the dominant coherent structures. Copyright © 2009 John Wiley & Sons, Ltd. [source]
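
The quadrant analysis referred to classifies instantaneous velocity fluctuations u′ and w′ by sign and attributes the Reynolds shear stress to ejections (Q2) and sweeps (Q4). A small sketch with synthetic, negatively correlated fluctuations is shown below; it is not the DNS post-processing code.

```python
# A minimal sketch of quadrant analysis of the Reynolds shear stress -u'w'.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
u = rng.normal(size=n)                          # streamwise fluctuation u'
w = -0.4 * u + rng.normal(scale=0.9, size=n)    # wall-normal fluctuation w', negatively correlated

uw = u * w
total = -uw.sum()                               # total contribution to -u'w'
ejections = -uw[(u < 0) & (w > 0)].sum()        # Q2 events
sweeps = -uw[(u > 0) & (w < 0)].sum()           # Q4 events

print(f"ejection contribution: {ejections / total:.2f}")
print(f"sweep contribution:    {sweeps / total:.2f}")
```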


    In vitro stability of triclosan in dentifrice under simulated use condition

    INTERNATIONAL JOURNAL OF COSMETIC SCIENCE, Issue 5 2007
    Z. Hao
    Synopsis Triclosan has been formulated into a dentifrice at a 0.3% level to enhance the antibacterial function of the dentifrice, to improve oral health and to decrease daily malodor inside the mouth cavity. The hypothesis that chloroform may be generated from triclosan when in contact with chlorinated drinking water has challenged our guarantee of safe use of triclosan in oral care products, especially in Colgate Total® toothpaste. Until now, there has been no available analytical method to detect chloroform levels under the use conditions expected during daily tooth brushing. To fill this gap and to continue guaranteeing that our customers can safely use Colgate Total® toothpaste products, a gas chromatography–single ion monitoring–mass spectrometry (GC-SIM-MS) method for detecting chloroform in artificial saliva media has been developed. The limit of detection (LOD) and limit of quantitation are about 41 and 130 ppb, respectively. This LOD level is lower than the current Environmental Protection Agency trihalomethanes contamination limit required for daily drinking water. Our in vitro study indicated that Colgate Total® does not form detectable chloroform levels (41 ppb) over the range of expected consumer-brushing times while using normal chlorinated drinking water. [source]


    Learning weighted linguistic rules to control an autonomous robot

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 3 2009
    M. Mucientes
    A methodology for learning behaviors in mobile robotics has been developed. It consists of a technique to automatically generate input–output data plus a genetic fuzzy system that obtains cooperative weighted rules. The advantages of our methodology over other approaches are that the designer has to choose the values of only a few parameters, the obtained controllers are general (the quality of the controller does not depend on the environment), and the learning process takes place in simulation, but the controllers work also on the real robot with good performance. The methodology has been used to learn the wall-following behavior, and the obtained controller has been tested using a Nomad 200 robot in both simulated and real environments. © 2009 Wiley Periodicals, Inc. [source]


    A bandpass filter with adjustable bandwidth and predictable transmission zeros

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 2 2010
    Li Zhu
    Abstract In this article, a microstrip bandpass filter with an adjustable bandwidth and predictable transmission zeros is proposed. The proposed filter is implemented by combining two hairpin edge-coupled resonators with interdigital capacitors. Compared to typical edge-coupled filters, the proposed filter provides a wider bandwidth resulting from a higher coupling strength between its resonators. To further increase the coupling and consequently the bandwidth, a pair of etched slots in the ground plane is used. By adjusting the geometrical parameters of the interdigital capacitors and etched slots, the bandwidth can be easily adjusted. The filter features two transmission zeros, which are determined by means of the semi-analytical model developed as part of this work. Furthermore, the proposed filters can be cascaded to obtain a sharper cutoff frequency response. Frequency responses of the filters from measurements are in good agreement with those simulated using IE3D in the 5–9 GHz range. © 2009 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010. [source]


    Enabling a compact model to simulate the RF behavior of MOSFETs in SPICE

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 3 2005
    Reydezel Torres-Torres
    Abstract A detailed methodology for implementing a MOSFET model valid for RF simulations is described in this article. Since SPICE-like simulation programs are used as a standard tool for integrated circuit (IC) design, the resulting model is oriented for its application under the SPICE environment. The core of the proposed model is the popular BSIM3v3, but in this model the RF effects are taken into account by means of extrinsic lumped elements. Good agreement between the simulated and measured small-signal S-parameter data is achieved for a 0.18-μm channel-length MOSFET, thus validating the proposed model. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15, 2005. [source]


    Interactive controls of herbivory and fluvial dynamics on landscape vegetation patterns on the Tanana River floodplain, interior Alaska

    JOURNAL OF BIOGEOGRAPHY, Issue 9 2007
    Lem G. Butler
    Abstract Aim: We examined the interactive effects of mammalian herbivory and fluvial dynamics on vegetation dynamics and composition along the Tanana River in interior Alaska. Location: Model parameters were obtained from field studies along the Tanana River, Alaska, between Fairbanks (64°50.50′ N, 147°43.30′ W) and Manley Hot Springs (65°0.0′ N, 150°36.0′ W). Methods: We used a spatially explicit model of landscape dynamics (ALFRESCO) to simulate vegetation changes on a 1-year time-step. The model was run for 250 years and was replicated 100 times. Results: Increases in herbivory decreased the proportion of early successional vegetation and increased the proportion of late successional vegetation on the simulated landscape. Erosion and accretion worked as antagonists to herbivory, increasing the amount of early successional vegetation and decreasing the amount of late successional vegetation. However, the interactive effects of herbivory and erosion/accretion were especially important in determining system response, particularly in early seral vegetation types. High erosion rates, when coupled with low herbivory, greatly increased the proportion of willow on the landscape. When coupled with high herbivory, however, they greatly increased the proportion of alder on the landscape. At low levels of herbivory, alder abundance peaked at intermediate levels of erosion/accretion. Main conclusions: Neither erosion/accretion nor herbivory produced consistent landscape patterns that could be predicted independently of the other. These findings underscore the importance of the interactive effects of biotic and abiotic disturbances in shaping large-scale landscape vegetation patterns in boreal floodplain ecosystems, systems traditionally thought to be driven primarily by abiotic disturbance alone. [source]


    Fast principal component analysis of large data sets based on information extraction

    JOURNAL OF CHEMOMETRICS, Issue 11 2002
    F. Vogt
    Abstract Principal component analysis (PCA) and principal component regression (PCR) are routinely used for calibration of measurement devices and for data evaluation. However, their use is hindered in some applications, e.g. hyperspectral imaging, by excessive data sets that imply unacceptable calculation time. This paper discusses a fast PCA achieved by a combination of data compression based on a wavelet transformation and a spectrum selection method prior to the PCA itself. The spectrum selection step can also be applied without previous data compression. The calculation speed increase is investigated based on original and compressed data sets, both simulated and measured. Two different data sets are used for assessment of the new approach. One set contains 65,536 synthetically generated spectra at four different noise levels with 256 measurement points each. Compared with the conventional PCA approach, these examples can be accelerated 20 times. Evaluation errors of the fast method were calculated and found to be comparable with those of the conventional approach. Four experimental spectra sets of similar size are also investigated. The novel method outperforms PCA in speed by factors of up to 12, depending on the data set. The principal components obtained by the novel algorithm show the same ability to model the measured spectra as the conventional time-consuming method. The acceleration factors also depend on the possible compression; in particular, if only a small compression is feasible, the acceleration lies purely with the novel spectrum selection step proposed in this paper. Copyright © 2002 John Wiley & Sons, Ltd. [source]
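
A compact sketch of the two acceleration ideas, compressing each spectrum before the PCA and running the PCA on a selected subset of spectra, is given below. Pair averaging stands in for the wavelet transform and the selection rule is a simple assumption; neither reproduces the authors' algorithm.

```python
# A minimal sketch of "compress, select, then PCA" on a synthetic spectra matrix.
import numpy as np

rng = np.random.default_rng(4)
spectra = rng.normal(size=(4096, 256))            # synthetic data set: 4096 spectra, 256 points

# 1. Compression: average adjacent pairs twice (256 -> 64 points), a crude wavelet stand-in.
compressed = spectra
for _ in range(2):
    compressed = 0.5 * (compressed[:, 0::2] + compressed[:, 1::2])

# 2. Spectrum selection: keep the 512 spectra farthest from the mean spectrum.
distance = np.linalg.norm(compressed - compressed.mean(axis=0), axis=1)
selected = compressed[np.argsort(distance)[-512:]]

# 3. PCA of the reduced matrix via SVD.
centred = selected - selected.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
print("first 3 explained-variance ratios:", (s[:3] ** 2 / (s ** 2).sum()).round(3))
```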


    Generalizability in Item Response Modeling

    JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 2 2007
    Derek C. Briggs
    An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random effects measurement model, and taking advantage of the flexibility of Markov Chain Monte Carlo (MCMC) estimation methods, it becomes possible to estimate GT variance components simultaneously with traditional IRT parameters. It is shown how GT and IRT can be linked together, in the context of a single-facet measurement design with binary items. Using both simulated and empirical data with the software WinBUGS, the GIRM approach is shown to produce results comparable to those from a standard GT analysis, while also producing results from a random effects IRT model. [source]


    A stochastic modelling approach to describing the dynamics of an experimental furunculosis epidemic in Chinook salmon, Oncorhynchus tshawytscha (Walbaum)

    JOURNAL OF FISH DISEASES, Issue 2 2007
    H Ogut
    Abstract A susceptible-infected-removed (SIR) stochastic model was compared to a susceptible-latent-infectious-removed (SLIR) stochastic model in terms of describing and capturing the variation observed in replicated experimental furunculosis epidemics, caused by Aeromonas salmonicida. The epidemics had been created by releasing a single infectious fish into a group of susceptible fish (n = 43) and progress of the epidemic was observed for 10 days. This process was replicated in 70 independent groups. The two stochastic models were run 5000 times and after every run and every 100 runs, daily mean values of each compartment were compared to the observed data. Both models, the SIR model (R2 = 0.91), and the SLIR model (R2 = 0.90) were successful in predicting the number of fish in each category at each time point in the experimental data. Moreover, between-replicate variability in the stochastic model output was similar to between-replicate variability in the experimental data. Generally, there was little change in the goodness of fit (R2) after 200 runs in the SIR model whereas 500 runs were necessary to have stable predictions with the SLIR model. In the SIR model, on an individual replicate basis, approximately 80% of 5000 simulated replicates had R2 of 0.7 or above, whereas this ratio was slightly higher (82%) with the SLIR model. In brief, both models were equally effective in predicting the observed data and its variance but the SLIR model was advantageous because it differentiated the latent, i.e. infected but not having the ability to discharge pathogen, from the infectious fish. [source]
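
The discrete-time stochastic SIR mechanism can be sketched directly from the experimental setup: one infectious fish among 43 susceptibles, followed for 10 daily steps and replicated many times. The transmission and removal parameters below are illustrative assumptions, not the fitted values from the study.

```python
# A minimal sketch of a chain-binomial stochastic SIR run replicated 5000 times.
import numpy as np

rng = np.random.default_rng(5)

def sir_replicate(beta=2.0, gamma=0.3, s0=43, i0=1, days=10):
    s, i, r = s0, i0, 0
    n = s0 + i0
    path = [(s, i, r)]
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * i / n)          # per-susceptible daily infection risk
        new_inf = rng.binomial(s, p_inf)             # stochastic new infections
        new_rem = rng.binomial(i, 1.0 - np.exp(-gamma))   # stochastic removals
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
        path.append((s, i, r))
    return path

runs = np.array([sir_replicate() for _ in range(5000)])   # shape (replicates, days+1, 3)
print("mean infectious fish per day:", runs[:, :, 1].mean(axis=0).round(2))
```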


    ASSESSING ABSORBABILITY OF BIOACTIVE COMPONENTS IN ALOE USING IN VITRO DIGESTION MODEL WITH HUMAN INTESTINAL CELL

    JOURNAL OF FOOD BIOCHEMISTRY, Issue 2 2010
    SOON-MI SHIM
    ABSTRACT This study used a simulated in vitro digestion model coupled with Caco-2 cells to assess the digestive stability and absorption of aloin, aloe-emodin and aloenin A. Aloenin A and aloe-emodin were stable and entirely recovered during simulated digestion, but 50% of aloin was lost. Approximately 53.2, 7.3 and 28.7% of aloe-emodin, aloenin A and aloin, respectively, were transported into both apical and basolateral compartments after 1 h incubation in Caco-2 cells. The involvement of several transporter proteins for aloin and aloenin A was examined. An inhibitor of SGLT1 on the apical surface (phloridzin) or of GLUT2 on the basolateral membrane (cytochalasin B) reduced the absorption of aloin by 40 or 60%, respectively, indicating that aloin is likely to be a partial substrate of SGLT1. In the presence of an efflux transporter inhibitor (verapamil), the transport of aloenin A through the intestinal apical membrane increased up to 2.1 times compared with the control (without verapamil). PRACTICAL APPLICATIONS Our results on both the digestive stability and the intestinal absorption characteristics of bioactive components in aloe could provide helpful information for promoting its bioavailability. The in vitro technique described in this study provides a rapid and cost-effective alternative for predicting bioavailability of biomarkers in aloe functional food. [source]


    Least-Square Deconvolution: A Framework for Interpreting Short Tandem Repeat Mixtures,

    JOURNAL OF FORENSIC SCIENCES, Issue 6 2006
    Tsewei Wang Ph.D.
    ABSTRACT: Interpreting mixture short tandem repeat DNA data is often a laborious process, involving trying different genotype combinations mixed at assumed DNA mass proportions, and assessing whether the resultant is supported well by the relative peak-height information of the mixture sample. If a clear pattern of major–minor alleles is apparent, it is feasible to identify the major alleles of each locus and form a composite genotype profile for the major contributor. When alleles are shared between the two contributors, and/or heterozygous peak imbalance is present, it becomes complex and difficult to deduce the profile of the minor contributor. The manual trial and error procedures performed by an analyst in the attempt to resolve mixture samples have been formalized in the least-square deconvolution (LSD) framework reported here for two-person mixtures, with the allele peak height (or area) information as its only input. LSD operates on the peak-data information of each locus separately, independent of all other loci, and finds the best-fit DNA mass proportions and calculates error residual for each possible genotype combination. The LSD mathematical result for all loci is then to be reviewed by a DNA analyst, who will apply a set of heuristic interpretation guidelines in an attempt to form a composite DNA profile for each of the two contributors. Both simulated and forensic peak-height data were used to support this approach. A set of heuristic guidelines is to be used in forming a composite profile for each of the mixture contributors in analyzing the mathematical results of LSD. The heuristic rules involve the checking of consistency of the best-fit mass proportion ratios for the top-ranked genotype combination case among all four- and three-allele loci, and involve assessing the degree of fit of the top-ranked case relative to the fit of the second-ranked case. A different set of guidelines is used in reviewing and analyzing the LSD mathematical results for two-allele loci. Resolution of two-allele loci is performed with less confidence than for four- and three-allele loci. This paper gives a detailed description of the theory of the LSD methodology, discusses its limitations, and the heuristic guidelines in analyzing the LSD mathematical results. A 13-loci sample case study is included. The use of the interpretation guidelines in forming composite profiles for each of the two contributors is illustrated. Application of LSD in this case produced correct resolutions at all loci. Information on obtaining access to the LSD software is also given in the paper. [source]
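
At a single four-allele locus, the LSD idea reduces to fitting mixing proportions to the peak heights by least squares for each candidate genotype combination and ranking the combinations by residual. The sketch below uses hypothetical allele labels and peak heights and is for illustration only, not forensic casework or the published software.

```python
# A minimal sketch of least-square deconvolution at one four-allele locus.
import itertools
import numpy as np

alleles = ["11", "12", "14", "15"]
peaks = np.array([1450.0, 1530.0, 520.0, 480.0])      # hypothetical peak heights (RFU)

results = []
for pair in itertools.combinations(range(4), 2):       # alleles assigned to contributor 1
    other = [k for k in range(4) if k not in pair]
    g = np.zeros((4, 2))                               # indicator matrix: allele x contributor
    g[list(pair), 0] = 1.0
    g[other, 1] = 1.0
    masses, _, _, _ = np.linalg.lstsq(g, peaks, rcond=None)   # best-fit DNA mass proportions
    rss = np.sum((peaks - g @ masses) ** 2)                   # error residual for this combination
    results.append((rss, pair, masses))

results.sort()                                          # rank combinations by residual
for rss, pair, masses in results[:2]:
    g1 = [alleles[k] for k in pair]
    g2 = [alleles[k] for k in range(4) if k not in pair]
    print(f"contributors {g1}/{g2}, mass ratio {masses[0]:.0f}:{masses[1]:.0f}, RSS {rss:.0f}")
```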


    Comparison of TCA and ICA techniques in fMRI data processing

    JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 4 2004
    Xia Zhao MS
    Abstract Purpose To make a quantitative comparison of temporal cluster analysis (TCA) and independent component analysis (ICA) techniques in detecting brain activation by using simulated data and in vivo event-related functional MRI (fMRI) experiments. Materials and Methods A single-slice MRI image was replicated 150 times to simulate an fMRI time series. An event-related brain activation pattern with five different levels of intensity and Gaussian noise was superimposed on these images. Maximum contrast-to-noise ratio (CNR) of the signal change ranged from 1.0 to 2.0 by 0.25 increments. In vivo visual stimulation fMRI experiments were performed on a 1.9 T magnet. Six human volunteers participated in this study. All imaging data were analyzed using both TCA and ICA methods. Results Both simulated and in vivo data have shown that no statistically significant difference exists in the activation areas detected by both ICA and TCA techniques when CNR of fMRI signal is larger than 1.75. Conclusion TCA and ICA techniques are comparable in generating functional brain maps in event-related fMRI experiments. Although ICA has richer features in exploring the spatial and temporal information of the functional images, the TCA method has advantages in its computational efficiency, repeatability, and readiness to average data from group subjects. J. Magn. Reson. Imaging 2004;19:397,402. © 2004 Wiley-Liss, Inc. [source]
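
The simulation design, a replicated baseline slice with an event-related signal added at a prescribed contrast-to-noise ratio plus Gaussian noise, can be sketched as follows; the image size, activated region and event timing are assumptions, not the study's parameters.

```python
# A minimal sketch of building a simulated event-related fMRI series at a given CNR.
import numpy as np

rng = np.random.default_rng(6)
n_vols, noise_sd, cnr = 150, 10.0, 1.75
baseline = rng.normal(1000.0, 5.0, size=(64, 64))          # one synthetic slice
series = np.repeat(baseline[None, :, :], n_vols, axis=0)    # replicated 150 times

signal = np.zeros(n_vols)
for onset in range(20, n_vols, 30):                          # assumed event-related design
    signal[onset:onset + 5] = cnr * noise_sd                 # peak signal change defines the CNR
series[:, 30:34, 30:34] += signal[:, None, None]             # assumed activated region
series += rng.normal(0.0, noise_sd, size=series.shape)       # additive Gaussian noise

print(series.shape, "max signal change:", signal.max(), "=", cnr, "x noise SD")
```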


    Melatonin Implants Disrupt Developmental Synchrony Regulated By Flexible Interval Timers

    JOURNAL OF NEUROENDOCRINOLOGY, Issue 11 2003
    M. R. Gorman
    Abstract Siberian hamsters born into short daylengths near the end of the breeding season are reproductively inhibited from birth and delay gonadal maturation until the following spring. This vernal transition to a reproductive phenotype coincides with an abrupt increase in body weight, and both processes are triggered by an interval timing mechanism that becomes insensitive, or refractory, to short-day inhibition. It was previously demonstrated that hamsters born into simulated natural photoperiods in early August became photorefractory at later ages than hamsters born into September photoperiods. As a consequence of flexibility in the duration programmed by the interval timer, development of seasonal birth cohorts was synchronous with respect to the calendar date simulated by laboratory photoperiod. In the present study, hamsters were born into simulated August or September photoperiods. Hamsters from each cohort were given removable constant release melatonin implants to reversibly obscure the neuroendocrine representation of daylength between 3 and 9 weeks or 9–15 weeks of age. When control hamsters were given beeswax capsules throughout, August-born males were approximately 6 weeks older than September males at the onset of photorefractoriness as assessed by accelerated increases in body weight and testicular size. Females exhibited the same pattern in body weight. These measures were synchronized with respect to calendar date. Synchronization of cohorts was disrupted by melatonin capsules from 3–9 weeks of age but not by later implants. Melatonin implants altered synchronization by influencing the developmental trajectory of September-born hamsters without influencing the August cohort. These results demonstrate that the function of the interval timer underlying photorefractoriness is influenced by photoperiod and by melatonin. The endogenous pattern of melatonin signals adjusts the duration measured by the interval timer to ensure that developmental milestones of seasonal cohorts are synchronized with environmental conditions. [source]


    Effect of direct retainer and major connector designs on RPD and abutment tooth movement dynamics

    JOURNAL OF ORAL REHABILITATION, Issue 11 2008
    H. ITOH
    Summary: Designs of removable partial dentures are suggested to affect the mobility of abutment teeth and removable partial denture (RPD) during oral functions. This study aimed to examine the effect of direct retainer and major connector designs on RPD dynamics under simulated loading. Six different Kennedy class II maxillary RPDs were fabricated on a maxillary model. These dentures involved 3 different direct retainers (wrought-wire clasp, RPA clasp, and conical crown telescopic retainer) and 2 different major connectors (Co-Cr major connector and heat-cured acrylic resin with a metal strengthener). Using an experimental model with simulated periodontal ligaments and mucosa that were fabricated using silicone impression material, three-dimensional displacements of the RPDs were measured under a simulated 30-N loading with a displacement transducer type M-3. Significant effects of "direct retainer design" on bucco-palatal displacements and "major connector" on mesio-distal displacements were revealed by 2 × 3 two-way analysis of variance of abutment teeth movements (P < 0·001 and P = 0·002, respectively). Additionally, analysis of variance of RPD displacements revealed significant effects of "direct retainer design" on corono-apical displacements and "major connector" on mesio-distal displacements (P = 0·001 and P = 0·028, respectively). Rigid direct retainers and rigid major connectors decrease the movements of both abutment tooth and RPD. [source]


    Crystal structure prediction for eniluracil

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 8 2001
    Mark Sacchetti
    Abstract State-of-the-art molecular modeling tools were used to predict the crystal structure of eniluracil, a compound for which it has not been possible to grow a single crystal. Two methods were used, one that incorporates molecular structure and powder X-ray diffraction data and another that employs molecular structure and lattice energy calculations into the search algorithm. Two structures were identified, one with P21/c and the other with P21 symmetry, both of which are consistent with the infrared and Raman spectra. A detailed analysis of the simulated and experimental powder X-ray diffraction patterns indicates that the P21/c structure is the best representation of the crystal structure. © 2001 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 90:1049–1055, 2001 [source]


    A new class of models for bivariate joint tails

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2009
    Alexandra Ramos
    Summary. A fundamental issue in applied multivariate extreme value analysis is modelling dependence within joint tail regions. The primary focus of this work is to extend the classical pseudopolar treatment of multivariate extremes to develop an asymptotically motivated representation of extremal dependence that also encompasses asymptotic independence. Starting with the usual mild bivariate regular variation assumptions that underpin the coefficient of tail dependence as a measure of extremal dependence, our main result is a characterization of the limiting structure of the joint survivor function in terms of an essentially arbitrary non-negative measure that must satisfy some mild constraints. We then construct parametric models from this new class and study in detail one example that accommodates asymptotic dependence, asymptotic independence and asymmetry within a straightforward parsimonious parameterization. We provide a fast simulation algorithm for this example and detail likelihood-based inference including tests for asymptotic dependence and symmetry which are useful for submodel selection. We illustrate this model by application to both simulated and real data. In contrast with the classical multivariate extreme value approach, which concentrates on the limiting distribution of normalized componentwise maxima, our framework focuses directly on the structure of the limiting joint survivor function and provides significant extensions of both the theoretical and the practical tools that are available for joint tail modelling. [source]


    Modelling multivariate volatilities via conditionally uncorrelated components

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2008
    Jianqing Fan
    Summary. We propose to model multivariate volatility processes on the basis of the newly defined conditionally uncorrelated components (CUCs). This model represents a parsimonious representation for matrix-valued processes. It is flexible in the sense that each CUC may be fitted separately with any appropriate univariate volatility model. Computationally it splits one high dimensional optimization problem into several lower dimensional subproblems. Consistency for the estimated CUCs has been established. A bootstrap method is proposed for testing the existence of CUCs. The methodology proposed is illustrated with both simulated and real data sets. [source]


    An Efficient Taper for Potentially Overdifferenced Long-memory Time Series

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2000
    Clifford M. Hurvich
    We propose a new complex-valued taper and derive the properties of a tapered Gaussian semiparametric estimator of the long-memory parameter d ∈ (−0.5, 1.5). The estimator and its accompanying theory can be applied to generalized unit root testing. In the proposed method, the data are differenced once before the taper is applied. This guarantees that the tapered estimator is invariant with respect to deterministic linear trends in the original series. Any detrimental leakage effects due to the potential noninvertibility of the differenced series are strongly mitigated by the taper. The proposed estimator is shown to be more efficient than existing invariant tapered estimators. Invariance to kth order polynomial trends can be attained by differencing the data k times and then applying a stronger taper, which is given by the kth power of the proposed taper. We show that this new family of tapers enjoys strong efficiency gains over comparable existing tapers. Analysis of both simulated and actual data highlights potential advantages of the tapered estimator of d compared with the nontapered estimator. [source]


    Wavelet analysis for detecting anisotropy in point patterns

    JOURNAL OF VEGETATION SCIENCE, Issue 2 2004
    Michael S. Rosenberg
    Although many methods have been proposed for analysing point locations for spatial pattern, previous methods have concentrated on clumping and spacing. The study of anisotropy (changes in spatial pattern with direction) in point patterns has been limited by lack of methods explicitly designed for these data and this purpose; researchers have been constrained to choosing arbitrary test directions or converting their data into quadrat counts and using methods designed for continuously distributed data. Wavelet analysis, a booming approach to studying spatial pattern, widely used in mathematics and physics for signal analysis, has started to make its way into the ecological literature. A simple adaptation of wavelet analysis is proposed for the detection of anisotropy in point patterns. The method is illustrated with both simulated and field data. This approach can easily be used for both global and local spatial analysis. [source]


    Knowledge-Based Tailoring of Gelatin-Based Materials by Functionalization with Tyrosine-Derived Groups

    MACROMOLECULAR RAPID COMMUNICATIONS, Issue 17 2010
    Axel Thomas Neffe
    Abstract Molecular models of gelatin-based materials formed the basis for the knowledge-based design of a physically cross-linked polymer system. The computational models with 25 wt.-% water content were validated by comparison of the calculated structural properties with experimental data and were then used as predictive tools to study chain organization, cross-link formation, and estimation of mechanical properties. The introduced tyrosine-derived side groups, desaminotyrosine (DAT) and desaminotyrosyl tyrosine (DATT), led to the reduction of the residual helical conformation and to the formation of physical net-points by π–π interactions and hydrogen bonds. At 25 wt.-% water content, the simulated and experimentally determined mechanical properties were in the same order of magnitude. The degree of swelling in water decreased with an increasing number of inserted aromatic functions, while Young's modulus, elongation at break, and maximum tensile strength increased. [source]