Algorithm Used (algorithm + used)



Selected Abstracts


Tabu Search Strategies for the Public Transportation Network Optimizations with Variable Transit Demand

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2008
Wei Fan
A multi-objective nonlinear mixed integer model is formulated. Solution methodologies are proposed, which consist of three main components: an initial candidate route set generation procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure (NAP) that decides transit demand matrix, assigns transit trips, determines service frequencies, and computes performance measures; and a Tabu search method (TSM) that combines these two parts, guides the candidate solution generation process, and selects an optimal set of routes from the huge solution space. Comprehensive tests are conducted and sensitivity analyses are performed. Characteristics analyses are undertaken and solution qualities from different algorithms are compared. Numerical results clearly indicate that the preferred TSM outperforms the genetic algorithm used as a benchmark for the optimal bus transit route network design problem without zone demand aggregation. [source]
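
A minimal, generic tabu-search skeleton for selecting a route subset is sketched below. It is not the authors' TSM: the swap-based neighbourhood, tabu tenure, and cost function are illustrative assumptions, and the `evaluate` call merely stands in for the network analysis procedure described above.

```python
# Generic tabu search over subsets of candidate routes (a sketch, not the
# paper's TSM). `evaluate` is a user-supplied cost function, e.g. a weighted
# sum of user and operator costs returned by a network analysis step.
import random

def tabu_search(n_candidates, evaluate, n_select, iterations=200, tenure=10):
    """Minimise evaluate(route_set) over sets of n_select route indices."""
    current = set(random.sample(range(n_candidates), n_select))
    best, best_cost = set(current), evaluate(current)
    tabu = {}                                     # forbidden swap -> expiry iteration

    for it in range(iterations):
        moves = []
        for out_r in current:
            for in_r in set(range(n_candidates)) - current:
                if tabu.get((out_r, in_r), -1) >= it:
                    continue                      # skip tabu swaps (no aspiration rule)
                trial = (current - {out_r}) | {in_r}
                moves.append((evaluate(trial), out_r, in_r, trial))
        if not moves:
            break
        cost, out_r, in_r, current = min(moves, key=lambda m: m[0])
        tabu[(in_r, out_r)] = it + tenure         # forbid undoing this swap for a while
        if cost < best_cost:
            best, best_cost = set(current), cost
    return best, best_cost

# Toy usage: cost = sum of random per-route costs (a placeholder objective)
costs = [random.random() for _ in range(30)]
routes, cost = tabu_search(30, lambda s: sum(costs[i] for i in s), n_select=5)
```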


On the inversion of multicomponent NMR relaxation and diffusion decays in heterogeneous systems

CONCEPTS IN MAGNETIC RESONANCE, Issue 2 2005
Raffaele Lamanna
Abstract The analysis of the decay of NMR signals in heterogeneous samples requires the solution of an ill-posed inverse problem to evaluate the distributions of relaxation and diffusion parameters. Laplace transform is the most widely accepted algorithm used to describe the NMR decay in heterogeneous systems. In this article we suggest that a superposition of Fredholm integrals, with different kernels, is a more suitable model for samples in which liquid and solid-like phases are both present. In addition, some algorithms for the inversion of Laplace and Fredholm inverse problems are illustrated. The quadrature methods and regularization function in connection with the use of nonlinear discretization grids are also discussed. The described inversion algorithms are tested on simulated and experimental data, and the role of noise is discussed. © 2005 Wiley Periodicals, Inc. Concepts Magn Reson Part A 26A: 78–90, 2005 [source]
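
As a concrete illustration of this kind of ill-posed inversion, the sketch below recovers a relaxation-time distribution from a simulated multi-exponential decay using Tikhonov-regularised non-negative least squares. The exponential kernel, the grid, and the regularisation weight are illustrative choices, not the algorithms examined in the article.

```python
# Regularised inversion of a multi-exponential decay (Fredholm integral of
# the first kind with kernel exp(-t/T2)); a sketch, not the article's method.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.001, 1.0, 200)              # acquisition times (s)
T2 = np.logspace(-3, 0, 80)                   # logarithmic relaxation-time grid (s)
K = np.exp(-t[:, None] / T2[None, :])         # discretised kernel K(t, T2)

# Simulated two-peak distribution and noisy decay
f_true = np.exp(-0.5 * (np.log(T2 / 0.02) / 0.3) ** 2) \
       + 0.5 * np.exp(-0.5 * (np.log(T2 / 0.3) / 0.3) ** 2)
signal = K @ f_true + 0.01 * np.random.default_rng(0).normal(size=len(t))

lam = 0.1                                     # Tikhonov regularisation weight
A = np.vstack([K, lam * np.eye(len(T2))])     # augmented system penalises ||f||
b = np.concatenate([signal, np.zeros(len(T2))])
f_est, _ = nnls(A, b)                         # non-negative distribution estimate
```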


Semi-empirical model for site effects on acceleration time histories at soft-soil sites.

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11 2004
Part 1: formulation, development
Abstract A criterion is developed for the simulation of realistic artificial ground motion histories at soft-soil sites, corresponding to a detailed ground motion record at a reference firm-ground site. A complex transfer function is defined as the Fourier transform of the ground acceleration time history at the soft-soil site divided by the Fourier transform of the acceleration record at the firm-ground site. Working with both the real and the imaginary components of the transfer function, and not only with its modulus, serves to keep the statistical information about the wave phases (and, therefore, about the time variation of amplitudes and frequencies) in the algorithm used to generate the artificial records. Samples of these transfer functions, associated with a given pair of soft-soil and firm-ground sites, are empirically determined from the corresponding pairs of simultaneous records. Each function included in a sample is represented as the superposition of the transfer functions of the responses of a number of oscillators. This formulation is intended to account for the contributions of trains of waves following different patterns in the vicinity of both sites. The properties of the oscillators play the role of parameters of the transfer functions. They vary from one seismic event to another. Part of the variation is systematic, and can be explained in terms of the influence of ground motion intensity on the effective values of stiffness and damping of the artificial oscillators. Another part has random nature; it reflects the random characteristics of the wave propagation patterns associated with the different events. The semi-empirical model proposed recognizes both types of variation. The influence of intensity is estimated by means of a conventional one-dimensional shear wave propagation model. This model is used to derive an intensity-dependent modification of the values of the empirically determined model parameters in those cases when the firm-ground earthquake intensity used to determine these parameters differs from that corresponding to the seismic event for which the simulated records are to be obtained. Copyright © 2004 John Wiley & Sons, Ltd. [source]
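
A minimal numerical sketch of the empirical transfer function defined above is given below: the complex ratio of the Fourier transforms of simultaneous soft-soil and firm-ground accelerograms, keeping both real and imaginary parts. The water-level stabilisation of small spectral amplitudes is an added assumption, not part of the paper's formulation.

```python
# Empirical complex transfer function soft(f)/firm(f) from two simultaneous
# acceleration records sampled at interval dt. Keeping the complex ratio
# (not just its modulus) preserves the phase information discussed above.
import numpy as np

def transfer_function(firm_acc, soft_acc, dt, water_level=1e-3):
    n = len(firm_acc)
    F_firm = np.fft.rfft(firm_acc)
    F_soft = np.fft.rfft(soft_acc)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Stabilise the division where the firm-ground spectrum is nearly zero
    floor = water_level * np.abs(F_firm).max()
    F_firm = np.where(np.abs(F_firm) < floor, floor, F_firm)
    return freqs, F_soft / F_firm
```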


Spatial independent component analysis of functional MRI time-series: To what extent do results depend on the algorithm used?

HUMAN BRAIN MAPPING, Issue 3 2002
Fabrizio Esposito
Abstract Independent component analysis (ICA) has been successfully employed to decompose functional MRI (fMRI) time-series into sets of activation maps and associated time-courses. Several ICA algorithms have been proposed in the neural network literature. Applied to fMRI, these algorithms might lead to different spatial or temporal readouts of brain activation. We compared the two ICA algorithms that have been used so far for spatial ICA (sICA) of fMRI time-series: the Infomax (Bell and Sejnowski [1995]: Neural Comput 7:1004–1034) and the Fixed-Point (Hyvärinen [1999]: Adv Neural Inf Proc Syst 10:273–279) algorithms. We evaluated the Infomax- and Fixed Point-based sICA decompositions of simulated motor, and real motor and visual activation fMRI time-series using an ensemble of measures. Log-likelihood (McKeown et al. [1998]: Hum Brain Mapp 6:160–188) was used as a measure of how significantly the estimated independent sources fit the statistical structure of the data; receiver operating characteristics (ROC) and linear correlation analyses were used to evaluate the algorithms' accuracy of estimating the spatial layout and the temporal dynamics of simulated and real activations; cluster sizing calculations and an estimation of a residual gaussian noise term within the components were used to examine the anatomic structure of ICA components and for the assessment of noise reduction capabilities. Whereas both algorithms produced highly accurate results, the Fixed-Point outperformed the Infomax in terms of spatial and temporal accuracy as long as inferential statistics were employed as benchmarks. Conversely, the Infomax sICA was superior in terms of global estimation of the ICA model and noise reduction capabilities. Because of its adaptive nature, the Infomax approach appears to be better suited to investigate activation phenomena that are not predictable or adequately modelled by inferential techniques. Hum. Brain Mapping 16:146–157, 2002. © 2002 Wiley-Liss, Inc. [source]
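
The sketch below runs a spatial ICA decomposition on a simulated time-by-voxel matrix with scikit-learn's FastICA, a fixed-point algorithm of the kind compared in the study; the data sizes and source model are made up, and Infomax is not shown.

```python
# Spatial ICA on a simulated fMRI-like matrix (time x voxels) with a
# fixed-point algorithm (scikit-learn's FastICA). Samples are voxels, so the
# recovered components are spatial maps and the mixing matrix holds the
# associated time courses. Data dimensions and sources are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_time, n_voxels, n_sources = 120, 2000, 4
S = rng.laplace(size=(n_sources, n_voxels))        # spatially independent maps
A = rng.normal(size=(n_time, n_sources))           # associated time courses
X = A @ S + 0.1 * rng.normal(size=(n_time, n_voxels))

ica = FastICA(n_components=n_sources, random_state=0)
maps = ica.fit_transform(X.T).T                    # (components x voxels) spatial maps
timecourses = ica.mixing_                          # (time x components) time courses
```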


Visual framework for development and use of constitutive models

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2002
Youssef M. A. Hashash
Abstract Advanced constitutive relations are used in geotechnical engineering to capture measured soil and rock behaviour in the laboratory, and in numerical models to represent the material response. These constitutive relations have traditionally been difficult to use, understand, and develop except by a limited number of specialists. This paper describes a framework for transforming the representation of constitutive relations, as well as stress and strain quantities from a series of mathematical equations and matrix quantities to multidimensional geometric/visual objects in a dynamic interactive colour-rich display environment. The paper proposes a shift in current approaches to the development of constitutive equations and their use in numerical simulations by taking advantage of rapid advancements in information technology and computer graphics. A novel interactive visualization development and learning environment for material constitutive relations referred to as VizCoRe is presented. Visualization examples of two constitutive relations, the linear elastic with von Mises failure criteria and the Modified Cam Clay (MCC) are shown. These include two- and three-dimensional renderings of stress states and paths and yield and failure surfaces. In addition, the environment allows for the visualization of the implicit integration algorithm used for the numerical integration of both constitutive models. Copyright © 2002 John Wiley & Sons, Ltd. [source]
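
As an example of the kind of constitutive quantity such an environment renders geometrically, the sketch below evaluates the von Mises yield function from a stress tensor; the yield stress value and the function name are illustrative and not taken from VizCoRe.

```python
# Evaluate the von Mises yield function f = q - sigma_y for a Cauchy stress
# tensor; f = 0 defines the cylindrical yield surface a visual tool would
# render in principal-stress space. The yield stress value is arbitrary.
import numpy as np

def von_mises_yield(stress, sigma_y=250.0):
    p = np.trace(stress) / 3.0                 # mean (hydrostatic) stress
    s = stress - p * np.eye(3)                 # deviatoric stress
    q = np.sqrt(1.5 * np.sum(s * s))           # von Mises equivalent stress
    return q - sigma_y                         # negative: elastic, inside the surface

sigma = np.diag([300.0, 100.0, 50.0])          # example principal stress state
print(von_mises_yield(sigma))                  # ~ -20.9, i.e. inside the yield surface
```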


Fixed-grid fluid–structure interaction in two dimensions based on a partitioned Lattice Boltzmann and p-FEM approach

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2009
S. Kollmannsberger
Abstract Over the last decade the Lattice Boltzmann method, which was derived from the kinetic gas theory, has matured as an efficient approach for solving Navier–Stokes equations. The p-FEM approach has proved to be highly efficient for a variety of problems in the field of structural mechanics. Our goal is to investigate the validity and efficiency of coupling the two approaches to simulate transient bidirectional fluid–structure interaction problems with geometrically non-linear structural deflections. A benchmark configuration of self-induced large oscillations for a flag attached to a cylinder can be accurately and efficiently reproduced within this setting. We describe in detail the force evaluation techniques, displacement transfers and the algorithm used to couple these completely different solvers as well as the results, and compare them with a benchmark reference solution computed by a monolithic finite element approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]


A modified node-to-segment algorithm passing the contact patch test

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2009
Giorgio Zavarise
Abstract Several investigations have shown that the classical one-pass node-to-segment (NTS) algorithms for the enforcement of contact constraints fail the contact patch test. This implies that the algorithms may introduce solution errors at the contacting surfaces, and these errors do not necessarily decrease with mesh refinement. The previous research has mainly focused on the Lagrange multiplier method to exactly enforce the contact geometry conditions. The situation is even worse with the penalty method, due to its inherent approximation that yields a solution affected by a non-zero penetration. The aim of this study is to analyze and improve the contact patch test behavior of the one-pass NTS algorithm used in conjunction with the penalty method for 2D frictionless contact. The paper deals with the case of linear elements. For this purpose, several sequential modifications of the basic formulation have been considered, which yield incremental improvements in results of the contact patch test. The final proposed formulation is a modified one-pass NTS algorithm which is able to pass the contact patch test also if used in conjunction with the penalty method. In other words, this algorithm is able to correctly reproduce the transfer of a constant contact pressure with a constant proportional penetration. Copyright © 2009 John Wiley & Sons, Ltd. [source]
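
For reference, the sketch below shows the basic one-pass node-to-segment penalty step in 2D: project a slave node onto a master segment, measure the penetration, and apply a penalty force. This is the classical building block whose patch-test deficiencies the paper addresses, not the modified formulation it proposes; the normal orientation and penalty value are assumptions.

```python
# Classical 2D node-to-segment penalty contact step (not the paper's modified
# algorithm): closest-point projection of a slave node onto a master segment,
# penetration (negative gap) measurement, and penalty force.
import numpy as np

def nts_penalty_force(slave, seg_a, seg_b, penalty=1e6):
    slave, seg_a, seg_b = map(np.asarray, (slave, seg_a, seg_b))
    t = seg_b - seg_a
    L = np.linalg.norm(t)
    t = t / L
    n = np.array([-t[1], t[0]])                    # segment normal (orientation assumed)
    xi = np.clip(np.dot(slave - seg_a, t) / L, 0.0, 1.0)
    gap = np.dot(slave - (seg_a + xi * L * t), n)  # negative gap = penetration
    if gap >= 0.0:
        return np.zeros(2), xi, gap                # node not in contact
    return -penalty * gap * n, xi, gap             # force proportional to penetration

force, xi, gap = nts_penalty_force([0.4, -0.01], [0.0, 0.0], [1.0, 0.0])
```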


An analytical model for the performance evaluation of stack-based Web cache replacement algorithms

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2010
S. Messaoud
Abstract Web caching has been the solution of choice to web latency problems. The efficiency of a Web cache is strongly affected by the replacement algorithm used to decide which objects to evict once the cache is saturated. Numerous web cache replacement algorithms have appeared in the literature. Despite their diversity, a large number of them belong to a class known as stack-based algorithms. These algorithms are evaluated mainly via trace-driven simulation. The very few analytical models reported in the literature were targeted at one particular replacement algorithm, namely least recently used (LRU) or least frequently used (LFU). Further they provide a formula for the evaluation of the Hit Ratio only. The main contribution of this paper is an analytical model for the performance evaluation of any stack-based web cache replacement algorithm. The model provides formulae for the prediction of the object Hit Ratio, the byte Hit Ratio, and the delay saving ratio. The model is validated against extensive discrete event trace-driven simulations of the three popular stack-based algorithms, LRU, LFU, and SIZE, using NLANR and DEC traces. Results show that the analytical model achieves very good accuracy. The mean error deviation between analytical and simulation results is at most 6% for LRU, 6% for the LFU, and 10% for the SIZE stack-based algorithms. Copyright © 2009 John Wiley & Sons, Ltd. [source]
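
A small trace-driven simulation of the LRU policy, one of the stack-based algorithms evaluated, is sketched below; it reports the object hit ratio and the byte hit ratio discussed in the abstract. The trace format is an invented toy, not the NLANR or DEC traces.

```python
# Trace-driven LRU web-cache simulation reporting hit ratio and byte hit
# ratio. The trace is a toy list of (object_id, size_bytes) requests.
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    cache = OrderedDict()                          # object_id -> size, LRU order
    used = hits = byte_hits = total = total_bytes = 0
    for obj, size in trace:
        total += 1
        total_bytes += size
        if obj in cache:
            hits += 1
            byte_hits += size
            cache.move_to_end(obj)                 # refresh recency on a hit
        else:
            while cache and used + size > capacity_bytes:
                _, evicted_size = cache.popitem(last=False)   # evict least recent
                used -= evicted_size
            if size <= capacity_bytes:
                cache[obj] = size
                used += size
    return hits / total, byte_hits / total_bytes

trace = [("a", 100), ("b", 400), ("a", 100), ("c", 300), ("b", 400)]
hit_ratio, byte_hit_ratio = simulate_lru(trace, capacity_bytes=600)
```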


A survey of current architectures for connecting wireless mobile ad hoc networks to the Internet

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2007
Habib M. Ammari
Abstract Connecting wired and wireless networks, and particularly mobile wireless ad hoc networks (MANETs) and the global Internet, is attractive in real-world scenarios due to its usefulness and practicality. Because of the various architectural mismatches between the Internet and MANETs with regard to their communication topology, routing protocols, and operation, it is necessary to introduce a hybrid interface capable of connecting to the Internet using the Mobile IP protocol and to MANETs using an ad hoc routing protocol. Specifically, the approaches available in the literature have introduced updated versions of Mobile IP agents or access points at the edge of the Internet to help MANET nodes get multi-hop wireless Internet access. The main differences in the existing approaches concern the type of ad hoc routing protocol as well as the switching algorithm used by MANET nodes to change their current Mobile IP agents based on specific switching criteria. This paper surveys a variety of approaches to providing multi-hop wireless Internet access to MANET nodes. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Pleth variability index predicts hypotension during anesthesia induction

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 5 2010
M. TSUCHIYA
Background: The pleth variability index (PVI) is a new algorithm used for automatic estimation of respiratory variations in pulse oximeter waveform amplitude, which might predict fluid responsiveness. Because anesthesia-induced hypotension may be partly related to patient volume status, we speculated that pre-anesthesia PVI would be able to identify high-risk patients for significant blood pressure decrease during anesthesia induction. Methods: We measured the PVI, heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) in 76 adult healthy patients under light sedation with fentanyl to obtain pre-anesthesia control values. Anesthesia was induced with bolus administrations of 1.8 mg/kg propofol and 0.6 mg/kg rocuronium. During the 3-min period from the start of propofol administration, HR, SBP, DBP, and MAP were measured at 30-s intervals. Results: HR, SBP, DBP, and MAP were significantly decreased after propofol administration by 8.5%, 33%, 23%, and 26%, respectively, as compared with the pre-anesthesia control values. Linear regression analysis that compared pre-anesthesia PVI with the decrease in MAP yielded an r value of −0.73. Decreases in SBP and DBP were moderately correlated with pre-anesthesia PVI, while HR was not. By classifying PVI >15 as positive, a MAP decrease >25 mmHg could be predicted, with sensitivity, specificity, positive predictive, and negative predictive values of 0.79, 0.71, 0.73, and 0.77, respectively. Conclusion: Pre-anesthesia PVI can predict a decrease in MAP during anesthesia induction with propofol. Its measurement may be useful to identify high-risk patients for developing severe hypotension during anesthesia induction. [source]
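
The sketch below shows how the reported sensitivity, specificity, and predictive values follow from classifying PVI > 15 as a positive test for a MAP decrease > 25 mmHg; the patient values are invented, and only the formulas correspond to the abstract.

```python
# Diagnostic statistics for the rule "PVI > 15 predicts a MAP decrease
# > 25 mmHg". The five (PVI, MAP-decrease) pairs are made-up examples.
def diagnostic_stats(pvi_values, map_decreases, pvi_cutoff=15, map_cutoff=25):
    tp = fp = tn = fn = 0
    for pvi, dmap in zip(pvi_values, map_decreases):
        test_positive = pvi > pvi_cutoff
        event = dmap > map_cutoff
        if test_positive and event:
            tp += 1
        elif test_positive and not event:
            fp += 1
        elif event:
            fn += 1
        else:
            tn += 1
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

stats = diagnostic_stats([10, 22, 18, 8, 30], [12, 35, 20, 18, 40])
```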


Ab initio structure solution by iterative phase-retrieval methods: performance tests on charge flipping and low-density elimination

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2010
Frank Fleischer
Comprehensive tests on the density-modification methods charge flipping [Oszlányi & Sütő (2004). Acta Cryst. A60, 134–141] and low-density elimination [Shiono & Woolfson (1992). Acta Cryst. A48, 451–456] for solving crystal structures are performed on simulated diffraction data of periodic structures and quasicrystals. A novel model-independent figure of merit, which characterizes the reliability of the retrieved phase of each reflection, is introduced and tested. The results of the performance tests show that the quality of the phase retrieval highly depends on the presence or absence of an inversion center and on the algorithm used for solving the structure. Charge flipping has a higher success rate for solving structures, while low-density elimination leads to a higher accuracy in phase retrieval. The best results can be obtained by combining the two methods, i.e. by solving a structure with charge flipping followed by a few cycles of low-density elimination. It is shown that these additional cycles dramatically improve the phases not only of the weak reflections but also of the strong ones. The results can be improved further by averaging the results of several runs and by applying a correction term that compensates for a reduction of the structure-factor amplitudes by averaging of inconsistently observed reflections. It is further shown that in most cases the retrieved phases converge to the best solution obtainable with a given method. [source]
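
A minimal 1D charge-flipping loop is sketched below: densities below a small threshold δ are sign-flipped in real space, and the observed structure-factor amplitudes are re-imposed in reciprocal space while the freshly computed phases are kept. The threshold, the synthetic data, and the handling of F(000) are simplifications, not the implementations tested in the article.

```python
# Toy 1D charge flipping: flip density below delta, then restore observed
# amplitudes while keeping the computed phases. The recovered map may end up
# shifted or inverted relative to the original; F(000) handling is simplified.
import numpy as np

rng = np.random.default_rng(1)
n = 256
true_density = np.zeros(n)
true_density[[20, 70, 71, 180]] = [3.0, 1.5, 1.5, 2.0]   # toy "atoms"
F_obs = np.abs(np.fft.fft(true_density))                  # observed amplitudes only

delta = 0.1
phases = np.exp(2j * np.pi * rng.random(n))               # random starting phases
rho = np.real(np.fft.ifft(F_obs * phases))
for _ in range(500):
    rho_flip = np.where(rho < delta, -rho, rho)           # the charge-flipping step
    F = np.fft.fft(rho_flip)
    F = F_obs * np.exp(1j * np.angle(F))                  # amplitude constraint
    rho = np.real(np.fft.ifft(F))
```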


Numerical instabilities in the computation of pseudopotential matrix elements

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 2 2006
Christoph van Wüllen
Abstract Steep high angular momentum Gaussian basis functions in the vicinity of a nucleus whose inner electrons are replaced by an effective core potential may lead to numerical instabilities when calculating matrix elements of the core potential. Numerical roundoff errors may be amplified to an extent that spoils any result obtained in such a calculation. Effective core potential matrix elements for a model problem are computed with high numerical accuracy using the standard algorithm used in quantum chemical codes and compared to results of the MOLPRO program. Thus, it is demonstrated how the relative and absolute errors depend on basis function angular momenta, basis function exponents and the distance between the off-center basis function and the center carrying the effective core potential. Then, the problem is analyzed and closed expressions are derived for the expected numerical error in the limit of large basis function exponents. It is briefly discussed how other algorithms would behave in the critical case, and they are found to have problems as well. The numerical stability could be increased a little bit if the type 1 matrix elements were computed without making use of a partial wave expansion. © 2005 Wiley Periodicals, Inc., J Comput Chem 27: 135–141, 2006 [source]


On the influence of imaging conditions and algorithms on the quantification of surface topography

JOURNAL OF MICROSCOPY, Issue 3 2002
U. Wendt
Summary The influence of the microscope magnification (which results in different voxel sizes and shapes) and of the algorithm on parameters used for the quantification of surface topography is studied using topographical images obtained by confocal laser scanning microscopy. Fracture surfaces and wire-eroded surfaces of steel were used as samples. The values obtained for the global topometry parameters normalized surface area, mean profile segment length and fractal dimension depend to different degrees on the microscopic magnification and on the algorithm used to compute these values. The topometry values can only be used to establish correlations between the topography and materials properties and for the modelling of surface generating processes if the imaging and computing details are given. [source]
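
As an example of one of the global topometry parameters discussed, the sketch below computes a normalised surface area from a height map using a gradient-based area element; the pixel-size argument mirrors the dependence on magnification, and the test surface is synthetic.

```python
# Normalised surface area of a height map: true (tilted) area divided by the
# projected area, using the local element sqrt(1 + |grad z|^2). The random
# test surface and pixel size are illustrative.
import numpy as np

def normalised_surface_area(height, pixel_size=1.0):
    z = np.asarray(height, dtype=float)
    dz_dy, dz_dx = np.gradient(z, pixel_size)
    area = np.sum(np.sqrt(1.0 + dz_dx**2 + dz_dy**2)) * pixel_size**2
    projected = z.size * pixel_size**2
    return area / projected

rough = np.random.default_rng(0).normal(scale=0.5, size=(256, 256))
print(normalised_surface_area(rough, pixel_size=1.0))     # > 1 for any rough surface
```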


A RAPID METHOD OF QUANTIFYING THE RESOLUTION LIMITS OF HEAT-FLOW ESTIMATES IN BASIN MODELS

JOURNAL OF PETROLEUM GEOLOGY, Issue 2 2008
A. Beha
Deterministic forward models are commonly used to quantify the processes accompanying basin evolution. Here, we describe a workflow for the rapid calibration of palaeo heat-flow behaviour. The method determines the heat-flow history which best matches the observed data, such as vitrinite reflectance, which is used to indicate the thermal maturity of a sedimentary rock. A limiting factor in determining the heat-flow history is the ability of the algorithm used in the software for the maturity calculation to resolve information inherent in the measured data. Thermal maturation is controlled by the temperature gradient in the basin over time and is therefore greatly affected by maximum burial depth. Calibration, i.e. finding the thermal history model which best fits the observed data (e.g. vitrinite reflectance), can be a time-consuming exercise. To shorten this process, a simple pseudo-inverse model is used to convert the complex thermal behaviour obtained from a basin simulator into more simple behaviour, using a relatively simple equation. By comparing the calculated "simple" maturation trend with the observed data points using the suggested workflow, it becomes relatively straightforward to evaluate the range within which a best-fit model will be found. Reverse mapping from the simple model to the complex behaviour results in precise values for the heat-flow which can then be applied to the basin model. The goodness-of-fit between the modelled and observed data can be represented by the Mean Squared Residual (MSR) during the calibration process. This parameter shows the mean squared difference between all measured data and the respective predicted maturities. A minimum MSR value indicates the "best fit". Case studies are presented of two wells in the Horn Graben, Danish North Sea. In both wells calibrating the basin model using a constant heat-flow over time is not justified, and a more complex thermal history must be considered. The pseudo-inverse method was therefore applied iteratively to investigate more complex heat-flow histories. Neither in the observed maturity data nor in the recorded stratigraphy was there evidence for erosion which would have influenced the present-day thermal maturity pattern, and heat-flow and time were therefore the only variables investigated. The aim was to determine the simplest "best-fit" heat-flow history which could be resolved at the maximum resolution given by the measured maturity data. The conclusion was that basin models in which the predicted maturity of sedimentary rocks is calibrated solely against observed vitrinite reflectance data cannot provide information on the timing of anomalies in the heat-flow history. The pseudo inverse method, however, allowed the simplest heat-flow history that best fits the observed data to be found. [source]
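
The goodness-of-fit measure used in the calibration, the Mean Squared Residual between measured vitrinite reflectance and modelled maturity, reduces to the short sketch below; the sample values are invented and only illustrate the formula.

```python
# Mean Squared Residual (MSR) between measured vitrinite reflectance and the
# maturities predicted by a basin model; the four data points are invented.
import numpy as np

def mean_squared_residual(measured_vr, modelled_vr):
    measured_vr = np.asarray(measured_vr, dtype=float)
    modelled_vr = np.asarray(modelled_vr, dtype=float)
    return np.mean((measured_vr - modelled_vr) ** 2)

msr = mean_squared_residual([0.55, 0.72, 0.90, 1.10],    # measured %Ro
                            [0.58, 0.70, 0.95, 1.05])    # modelled maturity
```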


Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts?

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 13 2008
Philip M. Davis
Eigenfactor.org, a journal evaluation tool that uses an iterative algorithm to weight citations (similar to the PageRank algorithm used for Google), has been proposed as a more valid method for calculating the impact of journals. The purpose of this brief communication is to investigate whether the principle of repeated improvement provides different rankings of journals than does a simple unweighted citation count (the method used by the Institute for Scientific Information [ISI]). [source]
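
The sketch below contrasts raw citation counts with a PageRank-style iteration ("repeated improvement") on a tiny invented citation matrix; the damping and normalisation details are simplifications and do not reproduce the actual Eigenfactor algorithm.

```python
# Repeated improvement vs raw counts on a toy 3-journal citation matrix.
# C[i, j] = citations from journal j to journal i; details (damping,
# normalisation) are simplified relative to the real Eigenfactor method.
import numpy as np

C = np.array([[0, 4, 1],
              [2, 0, 3],
              [1, 1, 0]], dtype=float)

raw_counts = C.sum(axis=1)               # unweighted citations received (ISI-style)

P = C / C.sum(axis=0)                    # column-stochastic citation matrix
alpha, w = 0.85, np.full(3, 1 / 3)
for _ in range(100):                     # power iteration with damping
    w = alpha * P @ w + (1 - alpha) / 3
    w /= w.sum()

print(raw_counts, w)                     # compare raw counts with weighted scores
```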


A simple validated GIS expert system to map relative soil vulnerability and patterns of erosion during the muddy floods of 2000–2001 on the South Downs, Sussex, UK

LAND DEGRADATION AND DEVELOPMENT, Issue 4 2010
H. Faulkner
Abstract The soils of the South Downs in East Sussex, England (UK), are dominated by loessic silt (>70 per cent) and are prone to crusting. Continuing erosion of these soils means that they are thin, typically less than 25 cm thick, and are becoming stonier, more droughty and less easy to work. Rates of erosion are relatively low but during extreme events, soils are vulnerable and on- and off-site erosion is a current and long-term risk. Property damage due to muddy flooding is of particular concern. Due to a long history of research interest, a rich database exists on the erosional history of an area of approximately 75 km² of these thin, calcareous South Downs soils. In particular, during the winter of 2000–2001, Hortonian overland flow was common on certain crop types. Consequent sheet, rill and gully erosion was intense. The gullies and rills formed by runoff during these winter events were mapped in detail. In this paper, a method to estimate soil vulnerability to erosion is described and illustrated. Then, to validate the predictive efficacy of the algorithm used, the actual mapped distribution of rills and gullies following the winter events of 2001 on a particularly badly-affected site are compared with predictions from our soil erosion vulnerability model. Methods for adjusting the land-cover weightings to optimise the map fit are outlined. In a further survey of the utility of the map, it was discovered that farmers' recollections of events provided additional verification. Thus, one implication of our research is that erosion models can be validated by inviting farmers to comment on their efficacy to predict known histories. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Piecewise travelling-wave basis functions for wires

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 5 2006
I. García-Tuñón
Abstract This paper presents a method of moments (MoM) formulation for large thin-wire structures. In our approach, a modified version of the well-known Rao–Wilton–Glisson (RWG) basis functions for wires including a linear phase term is considered. This additional term allows an efficient representation of the travelling-wave modes on each wire, while it preserves the main advantages of RWG bases for arbitrarily complex wire topologies. The paper contains a detailed description of the algorithm used for the computation of the impedance matrix integrals. Finally, some results for scattering problems are presented to show the agreement with the conventional RWG-MoM solution. © 2006 Wiley Periodicals, Inc. Microwave Opt Technol Lett 48: 960–966, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.21533 [source]


Application of Richardson extrapolation to the numerical solution of partial differential equations

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2009
Clarence Burg
Abstract Richardson extrapolation is a methodology for improving the order of accuracy of numerical solutions that involve the use of a discretization size h. By combining the results from numerical solutions using a sequence of related discretization sizes, the leading order error terms can be methodically removed, resulting in higher order accurate results. Richardson extrapolation is commonly used within the numerical approximation of partial differential equations to improve certain predictive quantities such as the drag or lift of an airfoil, once these quantities are calculated on a sequence of meshes, but it is not widely used to determine the numerical solution of partial differential equations. Within this article, Richardson extrapolation is applied directly to the solution algorithm used within existing numerical solvers of partial differential equations to increase the order of accuracy of the numerical result without referring to the details of the methodology or its implementation within the numerical code. Only the order of accuracy of the existing solver and certain interpolations required to pass information between the mesh levels are needed to improve the order of accuracy and the overall solution accuracy. Using the proposed methodology, Richardson extrapolation is used to increase the order of accuracy of numerical solutions of the linear heat and wave equations and of the nonlinear St. Venant equations in one dimension. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2009 [source]
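
A one-line instance of the extrapolation itself: combining two results of a method of order p computed with step sizes h and h/2 cancels the leading error term. The second-derivative stand-in below is illustrative and is not one of the solvers discussed in the article.

```python
# Richardson extrapolation of a second-order central difference (p = 2):
# the combination (2^p * fine - coarse) / (2^p - 1) removes the leading
# error term and yields a markedly more accurate estimate.
import numpy as np

def second_derivative(f, x, h):
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

x, p = 1.0, 2
coarse = second_derivative(np.sin, x, h=0.1)
fine = second_derivative(np.sin, x, h=0.05)
extrapolated = (2**p * fine - coarse) / (2**p - 1)

exact = -np.sin(x)
print(abs(coarse - exact), abs(extrapolated - exact))   # extrapolated error is far smaller
```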


Spectroscopic Ellipsometry as a Tool for Damage Profiling in Very Shallow Implanted Silicon

PLASMA PROCESSES AND POLYMERS, Issue 2 2006
Iordan Karmakov
Abstract Summary: Ion implantation is still one of the key steps in Si integrated circuit technology. Spectroscopic Ellipsometry (SE) detects the defects created in the implanted Si. The successful application of SE for damage profiling depends on the quality of the algorithm used for evaluating the damage profile from SE data. In this work, we present SE damage depth profiles retrieved by our previously published algorithm in as-implanted Si with very low-energy Ge+ ions, from 2 keV to 20 keV, and 1 keV B+ ions (1 × 10¹⁵ cm⁻²). The SE-retrieved damage depth profiles were compared with experimental atomic concentration depth profiles, or with profiles simulated by the state-of-the-art ATHENA computer code of the Silvaco TCAD suite. The functional relation was obtained by properly fitting the a/c depths measured by SE, and the depths named "ends of damage", to the experimental or simulated ion concentration depth profiles. For reasons not understood, the damage profile of 5 keV Ge+ in c-Si is smoother in shape beyond the a/c depth, with a longer tail. The damage profiles measured by SE for 5 keV Ge+ in c-Si with two different doses: curve 1 for 6 × 10¹³ cm⁻² and curve 2 for 1 × 10¹⁵ cm⁻². Curve 3 presents the damage profile simulated by ATHENA for Ge+ ions at 5 keV, 1 × 10¹⁵ cm⁻². [source]


Reconstruction of a yeast cell from X-ray diffraction data

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 4 2006
Pierre Thibault
Details are provided of the algorithm used for the reconstruction of yeast cell images in the recent demonstration of diffraction microscopy by Shapiro, Thibault, Beetz, Elser, Howells, Jacobsen, Kirz, Lima, Miao, Nieman & Sayre [Proc. Natl Acad. Sci. USA (2005), 102, 15343–15346]. Two refinements of the iterative constraint-based scheme are developed to address the current experimental realities of this imaging technique, which include missing central data and noise. A constrained power operator is defined whose eigenmodes allow the identification of a small number of degrees of freedom in the reconstruction that are negligibly constrained as a result of the missing data. To achieve reproducibility in the algorithm's output, a special intervention is required for these modes. Weak incompatibility of the constraints caused by noise in both direct and Fourier space leads to residual phase fluctuations. This problem is addressed by supplementing the algorithm with an averaging method. The effect of averaging may be interpreted in terms of an effective modulation transfer function, as used in optics, to quantify the resolution. The reconstruction details are prefaced with simulations of wave propagation through a model yeast cell. These show that the yeast cell is a strong-phase-contrast object for the conditions in the experiment. [source]


Evolution in the Assessment and Management of Trigeminal Schwannoma

THE LARYNGOSCOPE, Issue 2 2008
Bharat Guthikonda MD
Abstract Educational Objective: At the conclusion of this presentation, the participants should be able to understand the contemporary assessment and management algorithm used in the evaluation and care of patients with trigeminal schwannomas. Objectives: 1) Describe the contemporary neuroradiographic studies for the assessment of trigeminal schwannoma; 2) review the complex skull base osteology involved with these lesions; and 3) describe a contemporary management algorithm. Study Design: Retrospective review of 23 cases. Methods: Chart review. Results: From 1984 to 2006, of 23 patients with trigeminal schwannoma (10 males and 13 females, ages 14–77 years), 15 patients underwent combined transpetrosal extirpation, 5 patients underwent stereotactic radiation, and 3 were followed without intervention. Of the 15 who underwent surgery, total tumor removal was achieved in 9 patients. Cytoreductive surgery was performed in six patients; of these, four received postoperative radiation. One patient who underwent primary radiation therapy required subsequent surgery. There were no deaths in this series. Cranial neuropathies were present in 14 patients pretreatment and observed in 17 patients posttreatment. Major complications included meningitis (1), cerebrospinal fluid leakage (2), major venous occlusion (1), and temporal lobe infarction (1). Conclusions: Trigeminal schwannomas are uncommon lesions of the skull base that may occur in the middle fossa, posterior fossa, or both. Moreover, caudal extension results in their presentation in the infratemporal fossa. Contemporary diagnostic imaging, coupled with selective use of both surgery and radiation will limit morbidity and allow for the safe and prudent management of this uncommon lesion. [source]


Using fractional exhaled nitric oxide to guide asthma therapy: design and methodological issues for ASthma TReatment ALgorithm studies

CLINICAL & EXPERIMENTAL ALLERGY, Issue 4 2009
P. G. Gibson Prof.
Summary Background Current asthma guidelines recommend treatment based on the assessment of asthma control using symptoms and lung function. Noninvasive markers are an attractive way to modify therapy since they offer improved selection of active treatment(s) based on individual response, and improved titration of treatment using markers that are better related to treatment outcomes. Aims: To review the methodological and design features of noninvasive marker studies in asthma. Methods Systematic assessment of published randomized trials of asthma therapy guided by fraction of exhaled nitric oxide (FENO). Results FENO has appeal as a marker to adjust asthma therapy since it is readily measured, gives reproducible results, and is responsive to changes in inhaled corticosteroid doses. However, the five randomised trials of FENO guided therapy have had mixed results. This may be because there are specific design and methodological issues that need to be addressed in the conduct of ASthma TReatment ALgorithm (ASTRAL) studies. There needs to be a clear dose response relationship for the active drugs used and the outcomes measured. The algorithm decision points should be based on outcomes in the population of interest rather than the range of values in healthy people, and the algorithm used needs to provide a sufficiently different result to clinical decision making in order for there to be any discernible benefit. A new metric is required to assess the algorithm performance, and the discordance:concordance (DC) ratio can assist with this. Conclusion Incorporating these design features into future FENO studies should improve the study performance and aid in obtaining a better estimate of the value of FENO guided asthma therapy. [source]