Reduction Techniques

Selected Abstracts


Incorporating Physiological and Biochemical Mechanisms into Pharmacokinetic/Pharmacodynamic Models: A Conceptual Framework

BASIC AND CLINICAL PHARMACOLOGY & TOXICOLOGY, Issue 1 2010
Svein G. Dahl
In general, modelling of data serves (1) to describe experimental data, (2a) to reduce the amount of data resulting from an experiment, e.g. a clinical trial, and (2b) to obtain the most relevant parameters, (3) to test hypotheses and (4) to make predictions within the boundaries of the experimental conditions, e.g. the range of doses tested (interpolation), and beyond those boundaries, e.g. to extrapolate from animal data to the situation in man. Describing the drug/xenobiotic-target interaction and the chain of biological events following the interaction is the first step in building a biologically based model. This approach represents the underlying biological mechanisms in qualitative and quantitative terms, and is therefore closely connected in many aspects to systems biology. As systems biology models may contain hundreds of variables connected by differential equations, it is in most cases not possible to assign values to all of these variables from experimental data. Reduction techniques may be used to create a manageable model which nevertheless captures the biologically meaningful events in qualitative and quantitative terms. Until now, some success has been obtained by applying empirical pharmacokinetic/pharmacodynamic models which describe direct and indirect relationships between the xenobiotic molecule and the effect, including tolerance. Some of these models have physiological components built into their structure and use parameter estimates from published data. In recent years, some progress toward semi-mechanistic models has been made, examples being chemotherapy-induced myelosuppression and glucose-endogenous insulin-antidiabetic drug interactions. We see a way forward by employing approaches that bridge the gap between systems biology and physiologically based kinetic and dynamic models. To be useful for decision making, the 'bridging' model should have a well-founded mechanistic basis, yet be reduced to the extent that its parameters can be deduced from experimental data, while still capturing the essential biological/clinical details so that meaningful predictions and extrapolations can be made. [source]
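To make the notion of an empirical pharmacokinetic/pharmacodynamic model with an indirect drug-effect relationship more concrete, here is a minimal sketch (not the author's framework): a one-compartment PK model driving an indirect-response PD model, with all names and parameter values chosen purely for illustration.

```python
# Illustrative sketch (hypothetical parameters, not from the article): one-compartment
# PK with first-order absorption, driving an indirect-response PD model in which the
# drug inhibits production of a response variable (e.g. a biomarker).
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V = 1.0, 0.2, 10.0                    # absorption rate, elimination rate (1/h), volume (L)
kin, kout, Imax, IC50 = 5.0, 0.5, 0.9, 1.0    # turnover and inhibition parameters

def rhs(t, y):
    gut, central, response = y
    conc = central / V
    inhibition = 1.0 - Imax * conc / (IC50 + conc)   # drug inhibits production of the response
    return [-ka * gut,
            ka * gut - ke * central,
            kin * inhibition - kout * response]

y0 = [100.0, 0.0, kin / kout]                 # dose in gut, empty central compartment, baseline response
sol = solve_ivp(rhs, (0.0, 48.0), y0, dense_output=True)
t = np.linspace(0.0, 48.0, 200)
print(sol.sol(t)[2].min())                    # nadir of the response over 48 h
```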


Efficient sampling and data reduction techniques for probabilistic seismic lifeline risk assessment

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2010
Nirmal Jayaram
Abstract Probabilistic seismic risk assessment for spatially distributed lifelines is less straightforward than for individual structures. While procedures such as the 'PEER framework' have been developed for risk assessment of individual structures, these are not easily applicable to distributed lifeline systems, due to difficulties in describing ground-motion intensity (e.g. spectral acceleration) over a region (in contrast to ground-motion intensity at a single site, which is easily quantified using Probabilistic Seismic Hazard Analysis), and because the link between the ground-motion intensities and lifeline performance is usually not available in closed form. As a result, Monte Carlo simulation (MCS) and its variants are well suited for characterizing ground motions and computing the resulting losses to lifelines. This paper proposes a simulation-based framework for developing a small but stochastically representative catalog of earthquake ground-motion intensity maps that can be used for lifeline risk assessment. In this framework, Importance Sampling is used to preferentially sample 'important' ground-motion intensity maps, and K-Means Clustering is used to identify and combine redundant maps in order to obtain a small catalog. The effects of sampling and clustering are accounted for through a weighting on each remaining map, so that the resulting catalog is still a probabilistically correct representation. The feasibility of the proposed simulation framework is illustrated by using it to assess the seismic risk of a simplified model of the San Francisco Bay Area transportation network. A catalog of just 150 intensity maps is generated to represent hazard at 1038 sites from 10 regional fault segments causing earthquakes with magnitudes between five and eight. The risk estimates obtained using these maps are consistent with those obtained using conventional MCS utilizing many orders of magnitude more ground-motion intensity maps. Therefore, the proposed technique can be used to drastically reduce the computational expense of a simulation-based risk assessment without compromising the accuracy of the risk estimates. This will facilitate computationally intensive risk analysis of systems such as transportation networks. Finally, the study shows that the uncertainties in the ground-motion intensities and the spatial correlations between ground-motion intensities at various sites must be modeled in order to obtain unbiased estimates of lifeline risk. Copyright © 2010 John Wiley & Sons, Ltd. [source]
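As a rough illustration of the sampling-and-clustering idea (a sketch under simplifying assumptions, not the authors' implementation), the following shrinks a set of candidate intensity maps to a small weighted catalog; the map generator and the importance-sampling weights are placeholders.

```python
# Illustrative sketch (not the authors' code): reduce many candidate ground-motion
# intensity maps to a small weighted catalog. In the real framework the maps come
# from a ground-motion model with spatial correlation, and the weights are the
# likelihood ratios between the target and the preferential sampling distributions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sites, n_maps, n_keep = 50, 5000, 150

maps = rng.lognormal(mean=-1.0, sigma=0.6, size=(n_maps, n_sites))   # candidate maps (placeholder)
is_weights = np.exp(-maps.mean(axis=1))                              # placeholder IS weights
is_weights /= is_weights.sum()

# K-means groups similar (redundant) maps; keep one representative per cluster and
# assign it the summed weight of its members, so the small catalog stays a
# probabilistically consistent representation.
km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(maps)
catalog, weights = [], []
for k in range(n_keep):
    members = np.flatnonzero(km.labels_ == k)
    rep = members[np.argmax(is_weights[members])]    # highest-weight map in the cluster
    catalog.append(maps[rep])
    weights.append(is_weights[members].sum())

catalog = np.asarray(catalog)
print(catalog.shape, round(sum(weights), 6))         # (150, 50), weights sum to 1
```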


Trivial reductions of dimensionality in the propagation of uncertainties: a physical example

ENVIRONMETRICS, Issue 1 2004
Ricardo Bolado
Abstract When performing uncertainty analysis on a mathematical model of a physical process, some coefficients of the differential equations arise as the result of elementary operations on other coefficients. It is shown in this article that variance reduction techniques should be applied to the 'final' or 'reduced' coefficients and not to the original ones, thus reducing the variance of the estimators of the parameters of the output variable distribution. We illustrate the methodology with an application to a physical problem, a radioactive contaminant transport code. A substantial variance reduction is achieved for the estimators of the distribution function, the mean and the variance of the output. Copyright © 2003 John Wiley & Sons, Ltd. [source]
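The point can be illustrated with a toy example (hypothetical model, not the authors' transport code): when the output depends on two input coefficients only through their sum, applying a variance reduction technique such as stratified sampling to that reduced coefficient is straightforward and markedly more effective than crude sampling of the original coefficients.

```python
# Illustrative sketch: the model output depends on a, b only through the reduced
# coefficient c = a + b.  Stratifying c directly (its distribution is known here,
# N(0, 2)) gives a much less variable estimator than crude Monte Carlo over a, b.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, reps = 1_000, 200

def g(c):                                   # toy response of the reduced coefficient
    return np.exp(-c**2)

crude_means, strat_means = [], []
for _ in range(reps):
    # Crude Monte Carlo on the original coefficients a, b ~ N(0, 1)
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    crude_means.append(g(a + b).mean())

    # Stratified sampling on c = a + b ~ N(0, 2): one uniform draw per
    # equal-probability stratum, mapped through the inverse CDF of c.
    u = (np.arange(n) + rng.uniform(size=n)) / n
    c = np.sqrt(2.0) * norm.ppf(u)
    strat_means.append(g(c).mean())

print(np.var(crude_means), np.var(strat_means))   # stratifying c is far less variable
```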


Atrial Remodeling After Mitral Valve Surgery in Patients with Permanent Atrial Fibrillation

JOURNAL OF CARDIAC SURGERY, Issue 5 2004
Fernando Hornero M.D., Ph.D.
Mitral surgery allows immediate surgical auricular remodeling and, in those cases in which sinus rhythm is restored, is followed by a late remodeling. The aim of this study is to investigate the process of postoperative auricular remodeling in patients with permanent atrial fibrillation undergoing mitral surgery. Methods: In a prospective randomized trial, 50 patients with permanent atrial fibrillation and a dilated left atrium, undergoing surgical mitral repair, were divided into two groups: Group I contained 25 patients with left auricular reduction and mitral surgery, and Group II contained 25 patients with isolated valve surgery. Both groups were considered homogeneous in the preoperative assessment. Results: After a mean follow-up of 31 months, 46% of patients in Group I versus 18% of patients in Group II recovered sinus rhythm (p = 0.06). Auricular remodeling with size regression occurred in those patients who recovered sinus rhythm, most notably in Group II (−10.8% change in left auricular volume in Group I compared to −21.5% in Group II; p < 0.05). A new atrial enlargement took place in those patients who remained in atrial fibrillation (+16.8% left auricular volume in Group I vs. +8.4% in Group II; p < 0.05). Conclusions: Mitral surgery produces a postoperative decrease in atrial volume, especially when reduction techniques are employed. Late left atrial remodeling depended on the type of atrial rhythm and the postoperative surgical volume. [source]


Optimal operation of GaN thin film epitaxy employing control vector parametrization

AICHE JOURNAL, Issue 4 2006
Amit Varshney
Abstract An approach that links nonlinear model reduction techniques with control vector parametrization-based schemes is presented, to efficiently solve dynamic constrained optimization problems arising in the context of spatially distributed processes governed by highly dissipative nonlinear partial differential equations (PDEs), utilizing standard nonlinear programming techniques. The method of weighted residuals with empirical eigenfunctions (obtained via Karhunen-Loève expansion) as basis functions is employed for spatial discretization, together with a control vector parametrization formulation for temporal discretization. The stimulus for this approach is provided by the presence of low-order dominant dynamics in the case of highly dissipative parabolic PDEs. Spatial discretization based on these few dominant modes (which are elegantly captured by the empirical eigenfunctions) takes into account the actual spatiotemporal behavior of the PDE, which cannot be captured using finite difference or finite element techniques with a small number of discretization points/elements. The proposed approach is used to compute the optimal operating profile of a metallorganic vapor-phase epitaxy process for the production of GaN thin films, with the objective of minimizing the spatial nonuniformity of the deposited film across the substrate surface by adequately manipulating the spatiotemporal concentration profiles of Ga and N precursors at the reactor inlet. It is demonstrated that the reduced-order optimization problem thus formulated using the proposed approach for nonlinear order reduction results in considerable savings of computational resources while remaining accurate. It is demonstrated that by optimally changing the precursor concentration across the reactor inlet it is possible to reduce the thickness nonuniformity of the deposited film from a nominal 33% to 3.1%. © 2005 American Institute of Chemical Engineers AIChE J, 2006 [source]
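A minimal sketch of the Karhunen-Loève (proper orthogonal decomposition) step that extracts empirical eigenfunctions from solution snapshots is given below; the snapshot data are synthetic placeholders, and this is not the authors' code.

```python
# Illustrative sketch: compute empirical eigenfunctions (POD/Karhunen-Loeve modes)
# from a snapshot matrix via the SVD, and keep only the few dominant modes.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_t = 200, 400                          # spatial grid points, time snapshots
x = np.linspace(0.0, 1.0, n_x)

# Hypothetical snapshot matrix: two smooth dominant modes plus noise, standing in
# for PDE solution profiles collected over time.
snapshots = (np.outer(np.sin(np.pi * x), rng.standard_normal(n_t))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), rng.standard_normal(n_t))
             + 0.01 * rng.standard_normal((n_x, n_t)))

mean_profile = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_profile, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy
phi = U[:, :n_modes]                               # empirical eigenfunctions

# Reduced-order state: project a profile onto the dominant modes. A Galerkin
# projection of the PDE onto phi would yield the small ODE system used in the
# optimization.
a0 = phi.T @ (snapshots[:, 0:1] - mean_profile)
print(n_modes, a0.shape)
```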


A theory of statistical models for Monte Carlo integration

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2003
A. Kong
Summary. The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite-dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers. [source]
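As a loosely related illustration of estimation from multiple Monte Carlo samplers (a simple sketch, not the paper's semiparametric derivation), the following combines draws from two samplers with mixture weights to estimate an integral; the integrand and proposal densities are arbitrary choices.

```python
# Illustrative sketch: estimate I = integral of f over the real line using draws
# from two different samplers with known densities, combined with mixture weights
# (each point divided by the total sampling intensity n1*p1(x) + n2*p2(x)).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)

def f(x):                                   # integrand whose integral is wanted
    return np.exp(-np.abs(x)) * np.cos(x)**2

n1, n2 = 2000, 2000                         # sample sizes of the two samplers
x1 = rng.normal(0.0, 1.0, n1)
x2 = rng.normal(0.0, 3.0, n2)
x = np.concatenate([x1, x2])

denom = n1 * norm.pdf(x, 0.0, 1.0) + n2 * norm.pdf(x, 0.0, 3.0)
estimate = np.sum(f(x) / denom)

print(estimate, quad(f, -np.inf, np.inf)[0])   # Monte Carlo estimate vs quadrature
```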


Clutter reduction in synthetic aperture radar images with statistical modeling: An application to MSTAR data

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 6 2008
Sevket Demirci
Abstract In this article, an application of clutter modeling and reduction techniques to synthetic aperture radar (SAR) images from the moving and stationary target acquisition and recognition (MSTAR) data set is presented. Statistical modeling of the clutter signal within these particular SAR images is demonstrated. Lognormal, Weibull, and K-distribution models are analyzed for the amplitude distribution of high-resolution land clutter data. Higher-order statistics (moments and cumulants) are utilized to estimate the appropriate statistical distribution models for the clutter. Also, the Kolmogorov-Smirnov (K-S) goodness-of-fit test is employed to validate the accuracy of the selected models. With the use of the determined clutter model, a constant false-alarm rate (CFAR) detection algorithm is applied to the SAR images of several military targets. The resultant SAR images obtained by using the proposed method show that target signatures are reliably differentiated from the clutter background. © 2008 Wiley Periodicals, Inc. Microwave Opt Technol Lett 50: 1514-1520, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.23413 [source]
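A minimal sketch of the distribution-fitting and goodness-of-fit workflow described here (with synthetic amplitude data rather than MSTAR imagery) might look as follows; the false-alarm threshold at the end hints at the CFAR step.

```python
# Illustrative sketch: fit candidate clutter amplitude models, check them with a
# Kolmogorov-Smirnov test, and set a threshold for a desired false-alarm rate
# from the best-fitting model. The 'clutter' samples are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
clutter = rng.weibull(1.8, 5000) * 0.4          # synthetic amplitude data

candidates = {"lognormal": stats.lognorm, "weibull": stats.weibull_min}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(clutter, floc=0)          # fix location at zero for amplitudes
    ks_stat, p_value = stats.kstest(clutter, dist.name, args=params)
    fits[name] = (params, ks_stat)
    print(f"{name}: KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")

best = min(fits, key=lambda k: fits[k][1])      # smallest KS statistic
params = fits[best][0]
pfa = 1e-3                                      # desired false-alarm probability
threshold = candidates[best].ppf(1.0 - pfa, *params)
print(best, threshold)
```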


Measurement of a reciprocal four-port transmission line structure using the 16-term error model

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 7 2007
Yun Zhang
Abstract A new method to measure reciprocal four-port structures, using a 16-term error model, is presented. The measurement is based on five two-port calibration standards connected to two of the ports, while the network analyzer is connected to the two remaining ports. Least-squares-fit data reduction techniques are used to lower error sensitivity. The effect of connectors is de-embedded using closed-form equations. © 2007 Wiley Periodicals, Inc. Microwave Opt Technol Lett 49: 1511-1515, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.22498 [source]
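The least-squares data-reduction step can be illustrated with a generic sketch (the linear system below is a random placeholder, not the actual 16-term error-model equations): redundant calibration measurements are fit in a least-squares sense so that noise on any single measurement has less influence on the recovered error terms.

```python
# Illustrative sketch: solve an overdetermined complex linear system A e = m for
# error terms e from redundant calibration measurements m; the redundancy lowers
# the sensitivity of the solution to measurement noise.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_terms = 40, 15                        # redundant measurements, unknown error terms

A = (rng.standard_normal((n_meas, n_terms))
     + 1j * rng.standard_normal((n_meas, n_terms)))
e_true = rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)
m = A @ e_true + 0.01 * (rng.standard_normal(n_meas) + 1j * rng.standard_normal(n_meas))

e_hat, _, _, _ = np.linalg.lstsq(A, m, rcond=None)   # least-squares fit
print(np.max(np.abs(e_hat - e_true)))                # recovery error
```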


A general strategic capacity planning model under demand uncertainty

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 2 2006
Woonghee Tim Huh
Abstract Capacity planning decisions affect a significant portion of future revenue. In equipment-intensive industries, these decisions usually need to be made in the presence of both highly volatile demand and long capacity installation lead times. For the multiple-product case, we present a continuous-time capacity planning model that addresses problems of realistic size and complexity found in current practice. Each product requires specific operations that can be performed by one or more tool groups. We consider a number of capacity allocation policies. We allow tool retirements in addition to purchases because the stochastic demand forecast for each product can be decreasing. We present a cluster-based heuristic algorithm that can incorporate both variance reduction techniques from the simulation literature and the principles of a generalized maximum flow algorithm from the network optimization literature. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2006 [source]
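One of the simulation variance reduction techniques alluded to here, common random numbers, can be sketched as follows (a hypothetical demand model, not the authors' algorithm): both candidate capacity plans are evaluated on the same demand scenarios so that the difference in their performance is estimated with less noise.

```python
# Illustrative sketch: compare two hypothetical capacity levels under stochastic
# demand, using common random numbers (same demand paths for both plans) versus
# independent sampling.
import numpy as np

rng = np.random.default_rng(0)
n_paths, horizon = 2000, 12

def mean_shortfall(capacity, demand):
    return np.maximum(demand - capacity, 0.0).sum(axis=1).mean()

# Common random numbers: both plans see the same demand scenarios
demand = rng.lognormal(mean=3.0, sigma=0.4, size=(n_paths, horizon))
diff_crn = mean_shortfall(22.0, demand) - mean_shortfall(25.0, demand)

# Independent sampling: a fresh demand set for the second plan
demand_b = rng.lognormal(mean=3.0, sigma=0.4, size=(n_paths, horizon))
diff_indep = mean_shortfall(22.0, demand) - mean_shortfall(25.0, demand_b)

# Both estimate the same performance gap; under CRN the two estimates are
# positively correlated, so their difference is far less noisy across replications.
print(diff_crn, diff_indep)
```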


Revegetation Methods for High-Elevation Roadsides at Bryce Canyon National Park, Utah

RESTORATION ECOLOGY, Issue 2 2004
S. L. Petersen
Abstract Establishment of native plant populations on disturbed roadsides was investigated at Bryce Canyon National Park (BCNP) in relation to several revegetation and seedbed preparation techniques. In 1994, the BCNP Rim Road (2,683-2,770 m elevation) was reconstructed, resulting in a 23.8-ha roadside disturbance. Revegetation comparisons included the influence of fertilizer on plant establishment and development, the success of indigenous versus commercial seed, seedling response to microsites, methods of erosion control, and shrub transplant growth and survival. Plant density, cover, and biomass were measured 1, 2, and 4 years after revegetation implementation (1995-1998). Seeded native grass cover and density were highest on plots fertilized with nitrogen and phosphorus, but by the fourth growing season, differences between fertilized and unfertilized plots were minimal. Fertilizers may facilitate more rapid establishment of seeded grasses following disturbance, increasing soil cover and soil stability on steep and unstable slopes. However, the benefit of increased soil nutrients favored few of the desired species, resulting in lower species richness over time compared to unfertilized sites. Elymus trachycaulus (slender wheatgrass) plants raised from indigenous seed had higher density and cover than those from a commercial seed source 2 and 4 years after sowing. Indigenous materials may exhibit slow establishment immediately following seeding, but they will likely persist during extreme climatic conditions such as cold temperatures and relatively short growing seasons. Seeded grasses established better near stones and logs than on adjacent open microsites, suggesting that a roughened seedbed created before seeding can significantly enhance plant establishment. After two growing seasons, total grass cover was similar across the various erosion-control treatments, indicating that a variety of techniques can be utilized to reduce erosion. Finally, shrub transplants showed minimal differential response to fertilizers, water-absorbing gels, and soil type; simply planting and watering transplants was sufficient for the greatest plant survival and growth. [source]


Applications and Extensions of Chao's Moment Estimator for the Size of a Closed Population

BIOMETRICS, Issue 4 2007
Louis-Paul Rivest
Summary This article revisits Chao's (1989, Biometrics 45, 427-438) lower bound estimator for the size of a closed population in a mark-recapture experiment where the capture probabilities vary between animals (model Mh). First, an extension of the lower bound to models featuring a time effect and heterogeneity in capture probabilities (Mth) is proposed. The biases of these lower bounds are shown to be a function of the heterogeneity parameter for several loglinear models for Mth. Small-sample bias reduction techniques for Chao's lower bound estimator are also derived. The application of the loglinear model underlying Chao's estimator when heterogeneity has been detected in the primary periods of a robust design is then investigated. A test for the null hypothesis that Chao's loglinear model provides unbiased abundance estimators is provided. The strategy of systematically using Chao's loglinear model in the primary periods of a robust design where heterogeneity has been detected is investigated in a Monte Carlo experiment, and its impact on the estimation of population sizes and survival rates is evaluated. [source]
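For reference, Chao's classical lower bound uses only the counts of animals captured exactly once (f1) and exactly twice (f2): N-hat = S_obs + f1^2 / (2 f2). A minimal sketch with synthetic capture histories (not the article's extensions) is shown below.

```python
# Illustrative sketch: Chao's lower bound for the size of a closed population under
# heterogeneous capture probabilities (model Mh), computed from synthetic data.
import numpy as np

rng = np.random.default_rng(0)
true_n, n_occasions = 400, 6
p = rng.beta(1.0, 4.0, true_n)                    # heterogeneous capture probabilities
captures = rng.binomial(n_occasions, p)           # number of times each animal was caught

observed = captures[captures > 0]
s_obs = observed.size                             # animals seen at least once
f1 = int(np.sum(observed == 1))                   # seen exactly once
f2 = int(np.sum(observed == 2))                   # seen exactly twice

# Classical form when f2 > 0; bias-corrected limiting form when f2 = 0.
chao = s_obs + f1**2 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2
print(s_obs, round(chao, 1), true_n)              # the estimator targets a lower bound for true_n
```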


Improving MCM-41 as a Nitrosamines Trap through a One-Pot Synthesis

CHEMISTRY - AN ASIAN JOURNAL, Issue 8 2007
Jia Hui Xu
Abstract Copper oxide was incorporated into MCM-41 by a one-pot synthesis under acidic conditions to prepare a new mesoporous nitrosamines trap for protection of the environment. The resulting composites were characterized by XRD, N2 adsorption-desorption, and H2 temperature-programmed reduction techniques, and their adsorption capabilities were assessed in the gaseous adsorption of N-nitrosopyrrolidine (NPYR). The adsorption isotherms were consistent with the Freundlich equation. The copper salt was deposited onto MCM-41 during the evaporation stage and was fixed on the host in the calcination process that followed. MCM-41 was able to capture NPYR in air below 373 K but not at 453 K. Loading of copper oxide on MCM-41 greatly improved its adsorption capability at elevated temperatures. The influence of the incorporation of copper into MCM-41 samples and the adsorption behavior of these samples are discussed in detail. [source]


Factors influencing management and comparison of outcomes in paediatric intussusceptions

ACTA PAEDIATRICA, Issue 8 2007
A K Saxena
Abstract Aim: This study aims to compare management strategy and outcomes of paediatric ileocolic intussusceptions (ICI) versus small-bowel intussusceptions (SBI). Methods: Hospital charts of patients with intussusceptions between January 1999 and June 2006 were reviewed retrospectively. Results: A total of 135 patients with the diagnosis of intussusceptions were found in the database. In 111 patients the diagnosis was confirmed using ultrasound. The median age of the patients was 2.25 years (range 9 weeks,10 years). ICI were documented in 83 patients (74.8%) and SBI in 28 (25.2%). Spontaneous reductions were observed in 11 of 83 (13.3%) ICI and 18 of 28 (64.3%) SBI. Pneumatic reductions were attempted and were successful in 61 of 67 (91%) ICI and 6 of 7 (85.7%) SBI. Surgery was performed in 11 of 83 (13.3%) ICI and 4 of 28 (14.3%) SBI; with 2 of 83 (2.4%) ICI and 3 of 28 (10.7%) SBI patients requiring bowel resections. The median age of patients requiring surgery was 9 months in ICI and 6 years in SBI. Conclusion: There are differences in ICI and SBI with regard to spontaneous reductions, and bowel resection, and age with regard to surgery and bowel resection. The treatment efficacy depends on time of presentation, intussusception type, pathologic lead points, ultrasound/colour Doppler interpretation and expertise in reduction techniques. [source]