Scheme Used

Selected Abstracts


Role of Transthoracic Echocardiography in Atrial Fibrillation

ECHOCARDIOGRAPHY, Issue 4 2000
RICHARD W. ASINGER M.D.
Atrial fibrillation is a major clinical problem that is predicted to be encountered more frequently as the population ages. The clinical management of atrial fibrillation has become increasingly complex as new therapies and strategies have become available for ventricular rate control, conversion to sinus rhythm, maintenance of sinus rhythm, and prevention of thromboembolism. Clinical and transthoracic echocardiographic features are important in determining etiology and directing therapy for atrial fibrillation. Left atrial size, left ventricular wall thickness, and left ventricular function have independent predictive value for determining the risk of developing atrial fibrillation. Left atrial size may have predictive value in determining the success of cardioversion and maintaining sinus rhythm in selected clinical settings but has less value in the most frequently encountered group, patients with nonvalvular atrial fibrillation, in whom the duration of atrial fibrillation is the most important feature. When selecting pharmacological agents to control ventricular rate, convert to sinus rhythm, and maintain normal sinus rhythm, transthoracic echocardiography (TTE) allows noninvasive evaluation of left ventricular function and hence guides management. The combination of clinical and transthoracic echocardiographic features also allows risk stratification for thromboembolism and hemorrhagic complications in atrial fibrillation. High-risk clinical features for thromboembolism, supported by epidemiological observations, results of randomized clinical trials, and meta-analyses, include rheumatic valvular heart disease, prior thromboembolism, congestive heart failure, hypertension, older (> 75 years old) women, and diabetes. Small case series also suggest that patients with hyperthyroidism and hypertrophic cardiomyopathy are at high risk. TTE plays a unique role in confirming or discovering high-risk features such as rheumatic valvular disease, hypertrophic cardiomyopathy, and decreased left ventricular function. Validation of the risk stratification scheme used in the Stroke Prevention in Atrial Fibrillation-III trial is welcomed by clinicians who are faced daily with balancing the benefits and risks of anticoagulation to prevent thromboembolism in patients with atrial fibrillation. [source]
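As a toy illustration of this kind of rule-based stratification, the sketch below flags the high-risk features named in the abstract. It is a hypothetical helper with invented feature names, not the SPAF-III scheme itself.

```python
# Hypothetical rule-based flag for the high-risk features listed above.
# Illustration only -- NOT the SPAF-III risk stratification scheme.

HIGH_RISK_FEATURES = {
    "rheumatic_valvular_disease",
    "prior_thromboembolism",
    "congestive_heart_failure",
    "hypertension",
    "diabetes",
    "hyperthyroidism",
    "hypertrophic_cardiomyopathy",
}

def is_high_risk(patient: dict) -> bool:
    """Return True if any abstract-listed high-risk feature is present."""
    if any(patient.get(f, False) for f in HIGH_RISK_FEATURES):
        return True
    # Women older than 75 years are also listed as high risk.
    return patient.get("age", 0) > 75 and patient.get("sex") == "F"

print(is_high_risk({"age": 80, "sex": "F"}))  # True
```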


Properties of case/pseudocontrol analysis for genetic association studies: Effects of recombination, ascertainment, and multiple affected offspring

GENETIC EPIDEMIOLOGY, Issue 3 2004
Heather J. Cordell
Abstract The case/pseudocontrol approach is a general framework for family-based association analysis, incorporating several previously proposed methods such as the transmission/disequilibrium test and log-linear modelling of parent-of-origin effects. In this report, I examine the properties of methods based on a case/pseudocontrol approach when applied to a linked marker rather than (or in addition to) the true disease locus or loci, and when applied to sibships that have been ascertained on, or that may simply contain, multiple affected sibs. Through simulations and analytical calculations, I show that the expected values of the observed relative risk parameters (estimating quantities such as effects due to a child's own genotype, maternal genotype, and parent-of-origin) depend crucially on the ascertainment scheme used, as well as on whether there is non-negligible recombination between the true disease locus and the locus under study. In the presence of either recombination or ascertainment on multiple affected offspring, methods based on conditioning on parental genotypes are shown to give unbiased genotype relative risk estimates at the true disease locus (or loci) but biased estimates of population genotype relative risks at a linked marker, suggesting that the resulting estimates may be misleading when used to predict the power of future studies. Methods that allow for exchangeability of parental genotypes are shown (in the presence of either recombination or ascertainment on multiple affected offspring) to produce false-positive evidence of maternal genotype effects when there are true parent-of-origin or mother-child interaction effects, even when analyzing the true locus. These results suggest that care should be taken in both the interpretation and application of parameter estimates obtained from family-based genetic association studies. © 2004 Wiley-Liss, Inc. [source]
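To make the construction concrete: conditioning on the parental genotypes at a single biallelic locus, the affected child's genotype is the "case", and the three other genotypes the parents could have transmitted serve as matched pseudocontrols. The helper below is a minimal sketch of this idea only; it ignores phase, parent of origin, and the multi-locus and multi-sib extensions discussed in the paper.

```python
from itertools import product

def pseudocontrols(mother, father, case):
    """For a biallelic locus, enumerate the 4 possible offspring genotypes
    from the parental alleles; the 3 not matched to the case genotype form
    the pseudocontrol set (conditioning on parental genotypes).
    Genotypes are allele tuples, sorted for comparability."""
    possible = [tuple(sorted((m, f))) for m, f in product(mother, father)]
    rest = list(possible)
    rest.remove(tuple(sorted(case)))  # drop one copy of the case genotype
    return rest

# Parents Aa x Aa, affected child AA -> pseudocontrols Aa, Aa, aa
print(pseudocontrols(("A", "a"), ("A", "a"), ("A", "A")))
```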


A structural optimization method based on the level set method using a new geometry-based re-initialization scheme

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2010
Shintaro Yamasaki
Abstract Structural optimization methods based on the level set method are a new type of structural optimization method in which the outlines of target structures are implicitly represented using the level set function and updated by solving the so-called Hamilton–Jacobi equation based on an Eulerian coordinate system. These methods allow topological alterations, such as changes in the number of holes, during the optimization process, while the boundaries of the target structure remain clearly defined. However, the re-initialization scheme used when updating the level set function is a critical problem when seeking to obtain appropriately updated outlines of target structures. In this paper, we propose a new structural optimization method based on the level set method using a new geometry-based re-initialization scheme in which both the numerical analysis used when solving the equilibrium equations and the updating process of the level set function are performed using the Finite Element Method. The stiffness maximization, eigenfrequency maximization, and eigenfrequency matching problems are considered as optimization problems. Several design examples are presented to confirm the usefulness of the proposed method. Copyright © 2010 John Wiley & Sons, Ltd. [source]
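The idea of a geometry-based re-initialization can be sketched as follows: locate the zero contour of the level set function explicitly, then rebuild the function as a signed distance to it. The brute-force, finite-difference-grid sketch below is only a conceptual stand-in for the paper's FEM-based scheme.

```python
import numpy as np

def reinitialize(phi, dx):
    """Geometry-based re-initialization sketch: rebuild phi as a signed
    distance to its zero contour, located by linear interpolation along
    grid edges. Assumes phi changes sign somewhere on the grid."""
    ny, nx = phi.shape
    pts = []
    # Zero crossings along x-edges and y-edges; points stored as (x, y).
    for j in range(ny):
        for i in range(nx - 1):
            a, b = phi[j, i], phi[j, i + 1]
            if a * b < 0:
                t = a / (a - b)
                pts.append(((i + t) * dx, j * dx))
    for j in range(ny - 1):
        for i in range(nx):
            a, b = phi[j, i], phi[j + 1, i]
            if a * b < 0:
                t = a / (a - b)
                pts.append((i * dx, (j + t) * dx))
    pts = np.array(pts)
    out = np.empty_like(phi)
    for j in range(ny):
        for i in range(nx):
            d = np.sqrt(((pts - (i * dx, j * dx)) ** 2).sum(axis=1)).min()
            out[j, i] = np.sign(phi[j, i]) * d
    return out
```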


Numerical boundary conditions for globally mass conservative methods to solve the shallow-water equations and applied to river flow

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2006
J. Burguete
Abstract A revision of some well-known discretization techniques for the numerical boundary conditions in 1D shallow-water flow models is presented. More recent options are also considered in the search for a fully conservative technique that is able to preserve the good properties of a conservative scheme used for the interior points. Two conservative numerical schemes are used as representatives of the families of explicit and implicit numerical methods. The implementation of the different boundary options in these schemes is compared by means of the simulation of several test cases with exact solutions. The schemes with the global conservation boundary discretization are applied to the simulation of a real river flood wave, leading to very satisfactory results. Copyright © 2005 John Wiley & Sons, Ltd. [source]
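The global-conservation property at stake can be illustrated with a minimal 1-D finite-volume update of the continuity equation, where the boundary fluxes are imposed directly so that total mass changes only by what crosses the boundaries. This sketch is not the paper's scheme; the interface-flux averaging and all values are placeholders.

```python
import numpy as np

def step(h, q, dx, dt, q_in, q_out):
    """Minimal conservative 1-D update of water depth h from discharge q
    (continuity equation h_t + q_x = 0). Boundary fluxes q_in and q_out
    are imposed as the numerical boundary conditions, so total mass
    changes only by the net inflow -- the 'global conservation' property."""
    flux = np.empty(h.size + 1)
    flux[1:-1] = 0.5 * (q[:-1] + q[1:])   # interior interface fluxes
    flux[0], flux[-1] = q_in, q_out       # numerical boundary conditions
    return h - (dt / dx) * (flux[1:] - flux[:-1])

h, q = np.ones(100), np.zeros(100)
dx, dt = 1.0, 0.1
h2 = step(h, q, dx, dt, q_in=0.5, q_out=0.0)
# Mass balance check: total change equals dt * (q_in - q_out).
print(np.isclose((h2 - h).sum() * dx, dt * (0.5 - 0.0)))  # True
```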


Parallel computation of a highly nonlinear Boussinesq equation model through domain decomposition

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2005
Khairil Irfan Sitanggang
Abstract Implementations of the Boussinesq wave model to calculate free surface wave evolution in large basins are, in general, computationally very expensive, requiring huge amounts of CPU time and memory. For large-scale problems, it is neither affordable nor practical to run them on a single PC. To facilitate such extensive computations, a parallel Boussinesq wave model is developed using the domain decomposition technique in conjunction with the message passing interface (MPI). The published and well-tested numerical scheme used by the serial model, a high-order finite difference method, is identical to that employed in the parallel model. Parallelization of the tridiagonal matrix systems included in the serial scheme is the most challenging aspect of the work, and is accomplished using a parallel matrix solver combined with an efficient data transfer scheme. Numerical tests on a distributed-memory supercomputer show that the performance of the parallel model in simulating wave evolution is very satisfactory. A linear speedup is gained as the number of processors increases. These tests show that the CPU time efficiency of the model is about 75–90%. Copyright © 2005 John Wiley & Sons, Ltd. [source]
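The kernel at issue is a tridiagonal solve along each implicit line of the scheme. A minimal serial Thomas algorithm is sketched below to fix ideas; in the parallel model this step is replaced by a distributed tridiagonal solver with MPI data exchange, which is not reproduced here.

```python
import numpy as np

def thomas(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused), and
    right-hand side d. O(n) forward elimination plus back substitution."""
    n = len(d)
    c_, d_ = np.empty(n), np.empty(n)
    c_[0], d_[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m if i < n - 1 else 0.0
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = np.empty(n)
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x

# Example: solve a small system and check against a dense solve.
n = 5
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))  # True
```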


Performance analysis of wireless multihop diversity systems

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2008
Diomidis S. Michalopoulos
Abstract We study the performance of multihop diversity systems with non-regenerative relays over independent and non-identical Rayleigh fading channels. The analysis is based on the evaluation of the instantaneous end-to-end signal-to-noise ratio (SNR), depending on the type of the relay and the diversity scheme used. A closed-form expression is derived for the average end-to-end SNR when fixed-gain relays and a maximal ratio combiner are used; an analytical expression for the average symbol-error rate (ASER) in this case is also presented. The results show that, as expected, multihop diversity systems outperform conventional telecommunication systems in terms of ASER when the same amount of energy is assumed to be consumed in both cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
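For the dual-hop special case, the classical end-to-end SNR of a fixed-gain non-regenerative relay takes the form γeq = γ1·γ2/(γ2 + C), where C is a constant fixed by the relay gain. The Monte Carlo sketch below (hypothetical parameter values) shows how a closed-form average like the one derived in the paper can be sanity-checked by simulation; it is not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
g1bar, g2bar, C = 10.0, 5.0, 1.5  # non-identical mean hop SNRs; gain constant

# Rayleigh fading -> exponentially distributed instantaneous hop SNRs.
g1 = rng.exponential(g1bar, n)
g2 = rng.exponential(g2bar, n)

# End-to-end SNR of a dual-hop fixed-gain (non-regenerative) relay link.
g_eq = g1 * g2 / (g2 + C)
print("simulated average end-to-end SNR:", g_eq.mean())
```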


Assessing the performances of some recently proposed density functionals for the description of bond dissociations involving organic radicals

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 12 2010
Vincent Tognetti
Abstract In this article, we have assessed the performance of some recently proposed density functionals for the prediction of reaction energies involving radicals, notably bond dissociations of small organic molecules or of TEMPO-based ones, and β-scissions, focusing on our TCA family and on range-separated hybrids. It is found that no functional belonging to these two families is able to compete with the M0x family. We have tried to improve the performance of the range-separated hybrids by optimizing the attenuation parameter, but the improvements for one dataset lead to an unavoidable deterioration for the others. Furthermore, the differences between two different approaches to the long-range/short-range separation are discussed in terms of average enhancement factors, emphasizing the crucial choice of the approximate scheme used for the short-range part. Finally, the influence of the geometries has been considered and found to be negligible for this kind of molecular set, validating the usual single-point energy strategies used in such benchmarking assessments. © 2010 Wiley Periodicals, Inc. Int J Quantum Chem, 2010 [source]
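For context, range-separated hybrids split the electron–electron interaction with the error function; the attenuation parameter μ mentioned above controls where the short-range (semilocal DFT) description hands over to the long-range (exact-exchange) one. This is the standard erf partition, not a formula specific to this paper:

```latex
\frac{1}{r_{12}}
= \underbrace{\frac{\operatorname{erfc}(\mu r_{12})}{r_{12}}}_{\text{short range}}
+ \underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range}}
```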


Adaptive coding and modulation scheme for satellite-UMTS TDD systems based on a photogrammetric channel estimation method

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1 2006
Hsin-Piao Lin
Abstract Conventional wireless communication systems are designed to overcome the worst-case channel, using a huge number of redundant bits to assure communication performance and quality of service. Such systems cannot achieve optimum spectrum and power efficiency. This paper presents an adaptive coding and modulation scheme used in the user terminals of the third-generation satellite communication system. A three-state photogrammetric channel estimation method is introduced for tracing the variations of large-scale environments. The mobile user terminal dynamically switches to the suitable coding and modulation scheme according to the result of the photogrammetric channel estimator in order to maximize the power efficiency and data throughput. Real measurement data were used to validate the proposed method. The results show that the proposed method not only reduces the system complexity, but also mitigates the power control requirements and increases the data throughput for land mobile satellite personal communication systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]
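The switching logic itself is a table lookup. The sketch below is hypothetical — the paper's actual modulation orders, code rates, and state labels are not given in the abstract — and only illustrates how a terminal might map a three-state channel estimate to a transmission mode.

```python
# Hypothetical mode table: the most spectrally efficient mode is used when
# the estimated channel state allows it, falling back to more robust modes
# as conditions degrade (state labels are invented for the example).
MODES = {
    "clear":    {"modulation": "16QAM", "code_rate": 3 / 4},
    "shadowed": {"modulation": "QPSK",  "code_rate": 1 / 2},
    "blocked":  {"modulation": "BPSK",  "code_rate": 1 / 3},
}

def select_mode(channel_state: str) -> dict:
    """Pick the coding/modulation pair for the estimated channel state."""
    return MODES[channel_state]

print(select_mode("shadowed"))
```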


A simulation model of foraging behaviour and the effect of predation risk

JOURNAL OF ANIMAL ECOLOGY, Issue 1 2000
Jane F. Ward
Summary 1. The effect of predation risk on the distribution of animals foraging in a patchy environment was explored using a simulation model based on the work of Bernstein, Kacelnik & Krebs (1988, 1991), extended to incorporate predation risk. 2. Modelled foragers consume prey within patches of different prey densities, at a rate that depends only on prey density and interference from other foragers in the patch. Prey density remains constant (no depletion). A forager leaves a patch if its intake rate drops below its estimate of the average intake rate available in the environment. This estimate is continually updated by taking a weighted average of current intake and the previous estimate, giving a simple learning process. Decisions on whether to leave are made at regular intervals and a forager leaving a patch may arrive at random at any other patch. 3. Simulation results were sensitive to the computational scheme used to represent estimates of average intake rate in the environment and to the way in which foragers' decisions were distributed over time, illustrating the need for careful formulation of such models. 4. Predation risk was modelled as a cost that reduced effective intake, and could be distributed uniformly or skewed to patches of high prey density. The effects of different levels and distributions of predation risk on the distribution of foragers were examined using model parameters broadly appropriate to hedgehogs (Erinaceus europaeus L.) foraging for earthworms (Lumbricus spp.) and risking opportunistic predation by badgers (Meles meles L.). The distribution of foragers with respect to prey density took an 'ideal free' form for zero or uniform risk, and was domed when the cost of predation risk was relatively high and concentrated in the richer patches, as might be the case when predators and foragers share a common food resource. [source]
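A minimal version of the learning-and-leaving rule described in point 2 can be sketched as follows. The interference law (intake proportional to density / n^m) and all parameter values are placeholder assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches, n_foragers, steps = 20, 50, 500
density = rng.uniform(0.1, 1.0, n_patches)      # fixed densities (no depletion)
patch = rng.integers(0, n_patches, n_foragers)  # initial random placement
estimate = np.full(n_foragers, 0.5)             # estimate of mean intake rate
w, m = 0.2, 0.3                                 # memory weight, interference

for _ in range(steps):
    crowd = np.bincount(patch, minlength=n_patches)
    # Intake falls with the number of competitors in the patch (interference).
    intake = density[patch] / crowd[patch] ** m
    # Weighted-average (exponential-memory) update: the simple learning rule.
    estimate = w * intake + (1 - w) * estimate
    # Leave if current intake is below the estimated environmental average;
    # movers arrive at random at any other patch.
    movers = intake < estimate
    patch[movers] = rng.integers(0, n_patches, movers.sum())
```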


Quasiperiodic impurity energy spectra of GaAs/GaxAl1-xAs superlattices

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue S2 2004
M. S. Vasconcelos
Abstract In this work we consider a generalized Fibonacci quasiperiodic superlattice (GFQPSL), within a tight-binding model, in which the nearest-neighbor-hopping matrix elements are distributed according to the generalized Fibonacci sequence. The electronic density of states (DOS) is then determined by using a Green function method based on Dyson's equation together with a transfer-matrix treatment. The resulting energy spectrum is obtained for initial physical parameters chosen according to the scheme used in the experimental realization of a GFQPSL. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
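For concreteness, one common convention for a generalized Fibonacci substitution is A -> A^p B, B -> A^q; the sketch below generates such a sequence and distributes two hypothetical hopping values along it. The paper's exact rule and parameter values may differ.

```python
def generalized_fibonacci(p, q, generations):
    """Substitution rule A -> A^p B, B -> A^q (one common 'generalized
    Fibonacci' convention; p = q = 1 recovers the plain Fibonacci chain)."""
    s = "A"
    for _ in range(generations):
        s = "".join("A" * p + "B" if c == "A" else "A" * q for c in s)
    return s

# Hopping matrix elements distributed according to the sequence
# (t_A and t_B are hypothetical nearest-neighbour hoppings).
t_A, t_B = 1.0, 0.6
chain = generalized_fibonacci(1, 1, 8)
hoppings = [t_A if c == "A" else t_B for c in chain]
print(chain[:13], len(hoppings))  # ABAABABAABAAB 55
```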


Eulerian backtracking of atmospheric tracers. II: Numerical aspects

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 615 2006
Abstract In Part I of this paper, a mathematical equivalence was established between Eulerian backtracking or retro-transport, on the one hand, and adjoint transport with respect to an air-mass-weighted scalar product, on the other. The time symmetry which lies at the basis of this mathematical equivalence can, however, be lost through discretization. That question is studied, and conditions are explicitly identified under which discretization schemes possess the property of time symmetry. Particular consideration is given to the case of the LMDZ model. The linear schemes used for turbulent diffusion and subgrid-scale convection are symmetric. For the Van Leer advection scheme used in LMDZ, which is nonlinear, the question of time symmetry does not even make sense. Those facts are illustrated by numerical simulations performed in the conditions of the European Transport EXperiment (ETEX). For a model that is not time-symmetric, the question arises as to whether it is preferable, in practical applications, to use the exact numerical adjoint or the retro-transport model. Numerical results obtained in the context of one-dimensional advection show that the presence of slope limiters in the Van Leer advection scheme can produce in some circumstances unrealistic (in particular, negative) adjoint sensitivities. The retro-transport equation, on the other hand, generally produces robust and realistic results, and always preserves the positivity of sensitivities. Retro-transport may therefore be preferable in sensitivity computations, even in the context of variational assimilation. Copyright © 2006 Royal Meteorological Society [source]
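The mass-weighted adjoint relation from Part I can be stated concretely for a linear one-step scheme: with a diagonal air-mass matrix M, the adjoint of a transport operator A with respect to the scalar product u^T M v is A* = M^{-1} A^T M, and time symmetry means the retro-transport operator equals this A*. Below is a small numerical check of the defining property, using a toy operator rather than the LMDZ schemes.

```python
import numpy as np

m = np.array([1.0, 2.0, 3.0])   # toy air masses
M = np.diag(m)

# A toy linear one-step transport operator (columns sum to 1).
A = np.array([[0.9, 0.05, 0.0],
              [0.1, 0.9,  0.1],
              [0.0, 0.05, 0.9]])

# Adjoint with respect to the mass-weighted scalar product <u, v> = u^T M v.
A_star = np.linalg.inv(M) @ A.T @ M

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0])
# Defining property of the adjoint: <A u, v> == <u, A* v>.
print(np.isclose((A @ u) @ M @ v, u @ M @ (A_star @ v)))  # True
```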


A convection scheme for data assimilation: Description and initial tests

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 606 2005
Philippe Lopez
Abstract A new simplified parametrization of subgrid-scale convective processes has been developed and tested in the framework of the ECMWF Integrated Forecasting System for the purpose of variational data assimilation, singular vector calculations and adjoint sensitivity experiments. Its formulation is based on the full nonlinear convection scheme used in ECMWF forecasts, but a set of simplifications has been applied to substantially improve its linear behaviour. These include the specification of a single closure assumption based on convective available potential energy, the uncoupling of the equations for the convective mass flux and updraught characteristics and a unified formulation of the entrainment and detrainment rates. Simplified representations of downdraughts and momentum transport are also included in the new scheme. Despite these simplifications, the forecasting ability of the new convective parametrization is shown to remain satisfactory even in seasonal integrations. A detailed study of its Jacobians and the validity of the linear hypothesis is presented. The new scheme is also tested in combination with the new simplified parametrization of large-scale clouds and precipitation recently developed at ECMWF. In contrast with the simplified convective parametrization currently used in ECMWF's operational 4D-Var, its tangent-linear and adjoint versions account for perturbations of all convective quantities including convective mass flux, updraught characteristics and precipitation fluxes. Therefore the new scheme is expected to be beneficial when combined with radiative calculations that are directly affected by condensation and precipitation. Examples are presented of applications of the new moist physics in 1D-Var retrievals using microwave brightness temperature measurements and in adjoint sensitivity experiments. Copyright © 2005 Royal Meteorological Society. [source]
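The single CAPE-based closure mentioned above follows the generic pattern below: the convective available potential energy of the updraught sets the cloud-base mass flux through a relaxation time-scale τ. The exact form and constants used in the ECMWF scheme are not given in the abstract; this is only the standard shape of such a closure.

```latex
\mathrm{CAPE} = \int_{z_{\mathrm{base}}}^{z_{\mathrm{top}}}
g \, \frac{T_{v,\mathrm{up}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \, \mathrm{d}z ,
\qquad
M_{\mathrm{base}} \propto \frac{\mathrm{CAPE}}{\tau}
```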


The parametrization of drag induced by stratified flow over anisotropic orography

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 568 2000
J. F. Scinocca
Abstract A new parametrization of drag arising from the flow over unresolved topography (UT) in a general-circulation model (GCM) is presented. It comprises three principal components: a parametrization of the source spectrum and drag associated with freely propagating hydrostatic gravity waves in the absence of rotation, a parametrization of the drag associated with low-level wave breaking, and a parametrization of low-level drag associated with upstream blocking and lee-vortex dynamics. Novel features of the scheme include: a new procedure for defining the UT in each GCM grid cell which takes account of the GCM resolution and includes only the scales represented by the parametrization scheme, a new method of representing the azimuthal distribution of vertical momentum flux by two waves whose direction and magnitude systematically vary with the flow direction and with the anisotropy of the UT in each GCM grid cell, and a new application of form drag in the lowest levels which can change the direction of the low-level flow so that it is more parallel to unresolved two-dimensional topographic ridges. The new scheme is tested in the Canadian Centre for Climate Modelling and Analysis third-generation atmospheric GCM at horizontal resolutions of T47 and T63. Five-year seasonal means of present-day climate show that the new scheme improves mean sea level pressures (or mass distribution) and the tropospheric circulation when compared with the gravity-wave drag scheme used currently in the GCM. The benefits are most pronounced during northern hemisphere winter. It is also found that representing the azimuthal distribution of the momentum flux of the freely propagating gravity-wave field with two waves rather than just one allows 30–50% more gravity-wave momentum flux up into the middle atmosphere, depending on the season. The additional momentum flux into the middle atmosphere is expected to have a beneficial impact on GCMs that employ a more realistic representation of the stratosphere. [source]
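For orientation, the amplitude of such schemes is usually anchored to the linear, hydrostatic gravity-wave drag scaling below, where ρ₀ is the low-level density, N the buoyancy frequency, U the cross-ridge wind, and h the unresolved mountain height. This is a generic scaling, not the scheme's formula: the actual flux here also depends on the anisotropy and orientation of the ridges, which the two-wave construction represents.

```latex
\tau \;\sim\; \rho_0 \, N \, U \, h^2
```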


The Impact of Injury Coding Schemes on Predicting Hospital Mortality After Pediatric Injury

ACADEMIC EMERGENCY MEDICINE, Issue 7 2009
Randall S. Burd MD
Abstract Objectives: Accurate adjustment for injury severity is needed to evaluate the effectiveness of trauma management. While the choice of injury coding scheme used for modeling affects performance, the impact of combining coding schemes on performance has not been evaluated. The purpose of this study was to use Bayesian logistic regression to develop models predicting hospital mortality in injured children and to compare the performance of models developed using different injury coding schemes. Methods: Records of children (age < 15 years) admitted after injury were obtained from the National Trauma Data Bank (NTDB) and the National Pediatric Trauma Registry (NPTR) and used to train Bayesian logistic regression models predicting mortality using three injury coding schemes (International Classification of Disease-9th revision [ICD-9] injury codes, the Abbreviated Injury Scale [AIS] severity scores, and the Barell matrix) and their combinations. Model performance was evaluated using independent data from the NTDB and the Kids' Inpatient Database 2003 (KID). Results: Discrimination was optimal when modeling both ICD-9 and AIS severity codes (area under the receiver operating curve [AUC] = 0.921 [NTDB] and 0.967 [KID], Hosmer-Lemeshow [HL] h-statistic = 115 [NTDB] and 147 [KID]), while calibration was optimal when modeling coding based on the Barell matrix (AUC = 0.882 [NTDB] and 0.936 [KID], HL h-statistic = 19 [NTDB] and 69 [KID]). When compared to models based on ICD-9 codes alone, models that also included AIS severity scores and coding from the Barell matrix showed improved discrimination and calibration. Conclusions: Mortality models that incorporate additional injury coding schemes perform better than those based on ICD-9 codes alone in the setting of pediatric trauma. Combining injury coding schemes may be an effective approach for improving the predictive performance of empirically derived estimates of injury mortality. [source]
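The modelling idea — combining indicator features from one coding scheme with severity scores from another and judging the fit by discrimination (AUC) — can be sketched with ordinary logistic regression on simulated data. This is an illustration only: the paper fits Bayesian models to registry data, and every number below is fabricated for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
icd = rng.integers(0, 2, (n, 10))   # 10 hypothetical ICD-9 group indicators
ais = rng.integers(1, 7, (n, 1))    # AIS severity scores, 1-6

# Simulated mortality driven by one ICD group and the AIS score.
logit = -6 + 1.5 * icd[:, 0] + 0.8 * ais[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.hstack([icd, ais])           # combined coding schemes as features
model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```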


Assessment of conservative load transfer for fluid–solid interface with non-matching meshes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2005
R. K. Jaiman
Abstract We present a detailed comparative study of three conservative schemes used to transfer interface loads in fluid–solid interaction simulations involving non-matching meshes. The three load transfer schemes investigated are the node-projection scheme, the quadrature-projection scheme and the common-refinement based scheme. The accuracy associated with these schemes is assessed with the aid of 2-D fluid–solid interaction problems of increasing complexity. This includes a static load transfer and three transient problems, namely, elastic piston, superseismic shock and flexible inhibitor involving large deformations. We show how the load transfer schemes may affect the accuracy of the solutions along the fluid–solid interface and in the fluid and solid domains. We introduce a grid mismatching function which correlates well with the errors of the traditional load transfer schemes. We also compare the computational costs of these load transfer schemes. Copyright © 2005 John Wiley & Sons, Ltd. [source]
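One standard way to make a projection-based transfer conservative is to move nodal loads with the transpose of the displacement-interpolation operator, which preserves total force (and virtual work) across the interface. The 1-D sketch below illustrates this general construction; it is not any one of the three schemes compared in the paper.

```python
import numpy as np

def interpolation_matrix(x_from, x_to):
    """Linear interpolation matrix H with (H @ u_from)[i] = u_from(x_to[i])."""
    H = np.zeros((len(x_to), len(x_from)))
    for i, x in enumerate(x_to):
        j = np.clip(np.searchsorted(x_from, x) - 1, 0, len(x_from) - 2)
        t = (x - x_from[j]) / (x_from[j + 1] - x_from[j])
        H[i, j], H[i, j + 1] = 1 - t, t
    return H

# Non-matching 1-D interface meshes.
x_solid = np.linspace(0.0, 1.0, 6)
x_fluid = np.linspace(0.0, 1.0, 9)

H = interpolation_matrix(x_solid, x_fluid)   # displacements: solid -> fluid
f_fluid = np.random.default_rng(2).random(len(x_fluid))

# Conservative load transfer uses the transpose operator, so the total
# force carried across the interface is preserved exactly.
f_solid = H.T @ f_fluid
print(np.isclose(f_solid.sum(), f_fluid.sum()))  # True
```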


Comparison of the numerical stability of methods for anharmonic calculations of vibrational molecular energies

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 10 2007
Petr Dan
Abstract On model examples, we compare the performance of the vibrational self-consistent field, variational, and four perturbational schemes used for computations of vibrational energies of semi-rigid molecules, with emphasis on the numerical stability. Although the accuracy of the energies is primarily dependent on the quality of the potential energy surface, approximate approaches to the anharmonic vibrational problem often do not converge to the same results due to the approximations involved. For furan, the sensitivity to variations of the anharmonic potential was systematically investigated by adding random noise to the cubic and quartic constants. The self-consistent field methods proved to be the most resistant to the potential variations. The second order perturbational techniques are sensitive to random degeneracies and provided the least stable results. However, their stability could be significantly improved by a simple generalization of the perturbational formula. The variational configuration interaction is practically limited by the size of the matrix that can be diagonalized for larger molecules; however, relatively fewer states need to be involved than for smaller molecules, which works in favor of the computation. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2007 [source]
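The instability discussed above stems from the second-order perturbation sum, whose denominators vanish for (near-)degenerate states. One common degeneracy-tolerant fix replaces each pairwise term by the exact 2×2 result, which stays finite as the gap closes; the paper's own generalization is not reproduced in the abstract, so the forms below are only the generic ones.

```latex
E_i^{(2)} = \sum_{j \neq i} \frac{|V_{ij}|^2}{\Delta_{ij}},
\qquad \Delta_{ij} = E_i^{(0)} - E_j^{(0)},
\qquad
\varepsilon_{ij}^{\text{resum}} = \operatorname{sign}(\Delta_{ij})
\left( \sqrt{\tfrac{1}{4}\Delta_{ij}^2 + |V_{ij}|^2} - \tfrac{1}{2}\,|\Delta_{ij}| \right)
```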


Incentives and Bonuses – The Case of the 2006 World Cup

KYKLOS INTERNATIONAL REVIEW OF SOCIAL SCIENCES, Issue 3 2007
Tom Coupé
SUMMARY This paper investigates the determinants and effects of bonus schemes used during the World Cup 2006. [source]


Evaluation: Best evidence on the educational effects of undergraduate portfolios

THE CLINICAL TEACHER, Issue 3 2010
Sharon Buckley
Summary Background: The great variety of portfolio types and schemes used in the education of health professionals is reflected in the extensive and diverse educational literature relating to portfolio use. We have recently completed a Best Evidence Medical Education (BEME) systematic review of the literature relating to the use of portfolios in the undergraduate setting that offers clinical teachers insights into both their effects on learning and issues to consider in portfolio implementation. Methods: Using a methodology based on BEME recommendations, we searched the literature relating to a range of health professions, identifying evidence for the effects of portfolios on undergraduate student learning, and assessing the methodological quality of each study. Results: The higher quality studies in our review report that, when implemented appropriately, portfolios can improve students' ability to integrate theory with practice, can encourage their self-awareness and reflection, and can offer support for students facing difficult emotional situations. Portfolios can also enhance student–tutor relationships and prepare students for the rigours of postgraduate training. However, the time required to complete a portfolio may detract from students' clinical learning. An analysis of methodological quality against year of publication suggests that, across a range of health professions, the quality of the literature relating to the educational effects of portfolios is improving. However, further work is still required to build the evidence base for the educational effects of portfolios, particularly comparative studies that assess effects on learning directly. Discussion: Our findings have implications for the design and implementation of portfolios in the undergraduate setting. [source]