Very Accurate (very + accurate)


Selected Abstracts


Bayesian Networks and Adaptive Management of Wildlife Habitat

CONSERVATION BIOLOGY, Issue 4 2010
ALISON L. HOWES
Keywords (translated from Spanish): decision-making tools; ecological uncertainty; feral grazing; burning regimes; model validation. Abstract: Adaptive management is an iterative process of gathering new knowledge regarding a system's behavior and monitoring the ecological consequences of management actions to improve management decisions. Although the concept originated in the 1970s, it is rarely actively incorporated into ecological restoration. Bayesian networks (BNs) are emerging as efficient ecological decision-support tools well suited to adaptive management, but examples of their application in this capacity are few. We developed a BN within an adaptive-management framework that focuses on managing the effects of feral grazing and prescribed burning regimes on avian diversity within woodlands of subtropical eastern Australia. We constructed the BN with baseline data to predict bird abundance as a function of habitat structure, grazing pressure, and prescribed burning. Results of sensitivity analyses suggested that grazing pressure increased the abundance of aggressive honeyeaters, which in turn had a strong negative effect on small passerines. Management interventions to reduce the pressure of feral grazing and prescribed burning were then conducted, after which we collected a second set of field data to test the response of small passerines to these measures. We used these data, which incorporated ecological changes that may have resulted from the management interventions, to validate and update the BN. The network predictions of small passerine abundance under the new habitat and management conditions were very accurate. The updated BN concluded the first iteration of adaptive management and will be used in planning the next round of management interventions. The unique belief-updating feature of BNs provides land managers with the flexibility to predict outcomes and evaluate the effectiveness of management interventions.
[source]


Fabrication of enclosed SU-8 tips for electrospray ionization-mass spectrometry

ELECTROPHORESIS, Issue 24 2005
Santeri Tuomikoski Dr.
Abstract We describe a novel electrospray tip design for MS which is fabricated completely out of SU-8 photoepoxy. A three-layer SU-8 fabrication process provides fully enclosed channels and tips. The tip shape and the alignment of all SU-8 layers are defined lithographically and are therefore very accurate. The fabrication process enables easy integration of additional fluidic functions on the same chip. Separation channels can be made with exactly the same process. Fluidic inlets are made in SU-8 during the fabrication process, and no drilling or other postprocessing is needed. Channels have been fabricated and tested in the size range of 10 µm × 10 µm to 50 µm × 200 µm. Mass spectrometric performance of the tips has been demonstrated with both pressure-driven flow and EOF. SU-8 microtips have been shown to produce stable electrospray with EOF on a timescale of tens of minutes. With pressure-driven flow, stable spray is maintained for hours. The Taylor cone was shown to be small in volume and well defined even with the largest channel cross section. The spray was also shown to be well directed with our tip design. [source]


Steady infiltration from buried point source into heterogeneous cross-anisotropic unsaturated soil

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2004
G. J. Chen
Abstract The paper presents the analytical solution for steady-state infiltration from a buried point source into two types of heterogeneous cross-anisotropic unsaturated half-spaces. In the first case, the heterogeneity of the soil is modelled by an exponential relationship between the hydraulic conductivity and the soil depth. In the second case, the heterogeneous soil is represented by a multilayered half-space in which each layer is homogeneous. The hydraulic conductivity varies exponentially with moisture potential, and this leads to the linearization of the Richards equation governing unsaturated flow. The analytical solution is obtained by using the Hankel integral transform. For the multilayered case, the combination of special forward and backward transfer matrix techniques makes the numerical evaluation of the solution very accurate and efficient. The correctness of both formulations is validated by comparison with alternative solutions for two different cases. The results from typical cases are presented to illustrate the influence on the flow field of the cross-anisotropic hydraulic conductivity, the soil heterogeneity and the depth of the source. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Asymptotic upper bounds for the errors of Richardson extrapolation with practical application in approximate computations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2009
Aram Soroushian
Abstract The results produced by Richardson extrapolation, though generally very accurate, are inexact. Numerical evaluation of this inexactness and implementation of the evaluation in practice are the objectives of this paper. First, considering linear changes of errors in the convergence plots, asymptotic upper bounds are proposed for the errors. Then, the achievement is extended to the results produced by Richardson extrapolation, and finally, an error-controlling procedure is proposed and successfully implemented in approximate computations originating in science and engineering. Copyright © 2009 John Wiley & Sons, Ltd. [source]
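The mechanics of Richardson extrapolation and its computable error indicator can be sketched in a few lines. The following Python sketch (function names and the choice of a central-difference base rule are illustrative, not from the paper) combines a second-order approximation at step sizes h and h/2 to cancel the leading error term; the size of the correction serves as a rough indicator of the error in the unextrapolated result.

```python
import math

def central_diff(f, x, h):
    # Second-order central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h, p=2):
    # One Richardson step: combine results at h and h/2 to cancel
    # the leading O(h^p) error term of the base rule.
    a_h = central_diff(f, x, h)
    a_h2 = central_diff(f, x, h / 2)
    correction = (a_h2 - a_h) / (2**p - 1)
    # |correction| doubles as a computable error indicator for a_h2,
    # in the spirit of the asymptotic bounds discussed in the abstract.
    return a_h2 + correction, abs(correction)

est, err_indicator = richardson(math.sin, 1.0, 0.1)
# est approximates cos(1) far more closely than either base result
```

The extrapolated value is exact to several more digits than the base rule, while the correction term bounds the base rule's error without knowing the true answer.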


On singularities in the solution of three-dimensional Stokes flow and incompressible elasticity problems with corners

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2004
A. Dimitrov
Abstract In this paper, a numerical procedure is presented for the computation of corner singularities in the solution of three-dimensional Stokes flow and incompressible elasticity problems near corners of various shapes. For obtaining the order and mode of singularity, a neighbourhood of the singular point is considered with only local boundary conditions. The weak formulation of this problem is approximated using a mixed u, p Galerkin-Petrov finite element method. Additionally, a separation of variables is used to reduce the dimension of the original problem. As a result, the quadratic eigenvalue problem (P + λQ + λ²R)d = 0 is obtained, where the saddle-point-type matrices P, Q, R are defined explicitly. For a numerical solution of the algebraic eigenvalue problem, an iterative technique based on the Arnoldi method in combination with an Uzawa-like scheme is used. This technique needs only one direct matrix factorization as well as a few matrix-vector products for finding all eigenvalues in the interval ℜ(λ) ∈ (−0.5, 1.0), as well as the corresponding eigenvectors. Some benchmark tests show that this technique is robust and very accurate. Problems of practical importance are also analysed, for instance the surface-breaking crack in an incompressible elastic material and the three-dimensional viscous flow of a Newtonian fluid past a trihedral corner. Copyright © 2004 John Wiley & Sons, Ltd. [source]
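A quadratic eigenvalue problem of the form (P + λQ + λ²R)d = 0 is commonly solved by linearization to a generalized eigenproblem of twice the size. As a hedged illustration (small dense matrices and SciPy's general solver, not the paper's Arnoldi/Uzawa technique), with z = [d; λd] the problem becomes A z = λ B z:

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(P, Q, R):
    """Solve (P + lam*Q + lam^2*R) d = 0 via companion linearization.

    With z = [d; lam*d], the quadratic problem becomes the generalized
    eigenproblem A z = lam * B z with the block matrices A, B below.
    """
    n = P.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([[Z, I], [-P, -Q]])
    B = np.block([[I, Z], [Z, R]])
    lams, zs = eig(A, B)
    return lams, zs[:n, :]  # eigenvalues and the d-part of each eigenvector

# Small illustrative matrices (not the saddle-point matrices of the paper)
P = np.array([[2.0, 0.0], [0.0, 3.0]])
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
R = np.eye(2)
lams, ds = quadratic_eig(P, Q, R)
# Residual check: each eigenpair should satisfy the quadratic problem
for lam, d in zip(lams, ds.T):
    assert np.linalg.norm((P + lam * Q + lam**2 * R) @ d) < 1e-8
```

The linearization doubles the dimension, which is why large sparse problems of this kind favour iterative Arnoldi-type schemes over a dense solve.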


A pseudospectral Fourier method for a 1D incompressible two-fluid model

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2008
H. Holmås
Abstract This paper presents an accurate and efficient pseudospectral (PS) Fourier method for a standard 1D incompressible two-fluid model. To the knowledge of the authors, it is the first PS method developed for the purpose of modelling waves in multiphase pipe flow. Contrary to conventional numerical methods, the PS method combines high accuracy and low computational costs with flexibility in terms of handling higher order derivatives and different types of partial differential equations. In an effort to improve the description of the stratified wavy flow regime, it can thus serve as a valuable tool for testing out new two-fluid model formulations. The main part of the algorithm is based on mathematical reformulations of the governing equations combined with extensive use of fast Fourier transforms. All the linear operations, including differentiations, are performed in Fourier space, whereas the nonlinear computations are performed in physical space. Furthermore, by exploiting the concept of an integrating factor, all linear parts of the problem are integrated analytically. The remaining nonlinear parts are advanced in time using a Runge-Kutta solver with adaptive time step control. As demonstrated in the results section, these steps together yield a very accurate, fast and stable numerical method. A grid refinement analysis is used to compare the spatial convergence with the convergence rates of finite difference (FD) methods of up to order six. It is clear that the exponential convergence of the PS method is by far superior to the algebraic convergence of the FD schemes. Combined with the fact that the scheme is unconditionally linearly stable, the resulting increase in accuracy opens the way to several orders of magnitude of savings in computational time. Finally, simulations of small-amplitude, long-wavelength sinusoidal waves are presented to illustrate the remarkable ability of the PS method to reproduce the linear stability properties of the two-fluid model.
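The integrating-factor idea at the heart of such algorithms can be illustrated on a much simpler linear problem. The sketch below (a linear advection-diffusion equation, not the two-fluid model itself) differentiates in Fourier space and integrates the linear operator exactly via the exponential, so no time-step restriction arises from the linear terms; in a method like the paper's, only the remaining nonlinear parts would be advanced with a Runge-Kutta solver.

```python
import numpy as np

# Integrating-factor Fourier sketch: u_t + c*u_x = nu*u_xx on [0, 2*pi),
# a linear stand-in for the two-fluid equations. In Fourier space each
# mode obeys du_hat/dt = L(k)*u_hat with L(k) = -i*c*k - nu*k^2, so the
# linear part integrates exactly: u_hat(t) = exp(L*t) * u_hat(0).
N = 64
c, nu = 1.0, 0.01
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers
L = -1j * c * k - nu * k**2

u0 = np.sin(x)
t = 2.0
u_hat = np.exp(L * t) * np.fft.fft(u0)
u = np.real(np.fft.ifft(u_hat))

# Exact solution: the sine wave advects with speed c and decays as exp(-nu*t)
u_exact = np.exp(-nu * t) * np.sin(x - c * t)
```

Because the problem here is entirely linear, the integrating-factor step reproduces the exact solution to machine precision at any time step, which is precisely why treating the linear terms analytically removes their stability restriction.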
Copyright © 2008 John Wiley & Sons, Ltd. [source]


Propagation delay of an RC-circuit with a ramp input: An analytical, very accurate and simple model

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 9 2009
Rosario Mita
Abstract In this letter, one of the models reported in Mita et al. (IEEE Trans. Circuits Syst. II: Express Briefs 2007; 54(1):66-70) for estimating the propagation delay of an RC-chain with a linear input is revised and improved. The extended model, while maintaining the same simplicity, has an error six times lower than that of the original model, remaining below 1%. Copyright © 2008 John Wiley & Sons, Ltd. [source]


A wavelet-based piecewise approach for steady-state analysis of power electronics circuits

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 5 2006
K. C. Tam
Abstract Simulation of steady-state waveforms is important to the design of power electronics circuits, as it reveals the maximum voltage and current stresses imposed upon specific devices and components. This paper proposes an improved approach to finding steady-state waveforms of power electronics circuits based on wavelet approximation. The proposed method exploits the time-domain piecewise property of power electronics circuits in order to improve the accuracy and computational efficiency. Instead of applying one wavelet approximation to the whole period, several wavelet approximations are applied in a piecewise manner to fit the entire waveform. This wavelet-based piecewise approximation approach can provide a very accurate and efficient solution, with a much smaller number of wavelet terms, for approximating steady-state waveforms of power electronics circuits. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Modeling and synthesis of the interdigital/stub composite right/left-handed artificial transmission line

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2009
R. Siragusa
Abstract An efficient design procedure, including both analysis and synthesis, is proposed for composite right/left-handed (CRLH) interdigital/stub structures. Improved models are developed for both the interdigital capacitor and the shorted stub inductor, including its ground via hole. Optimal formulas are then recommended to model these components with their parasitic effects. The models and formulas are verified by both full-wave and experimental results. A CAD program with a friendly GUI, available online, is provided and its operation is described in detail. This program allows a very fast design of the CRLH structure, and its synthesis parameters are proven very accurate without any full-wave optimization. © 2009 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009. [source]


An algorithm for the uniform sampling of iso-energy surfaces and for the calculation of microcanonical averages

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 4 2006
Arnaldo Rapallo. Article first published online: 17 JAN 200
Abstract In this article an algorithm is proposed to efficiently perform uniform sampling of an iso-energy surface corresponding to a fixed potential energy U of a molecular system, and to calculate averages of certain quantities over microstates having this energy (microcanonical averages). The developed sampling technique is based upon the combination of a recently proposed method for performing constant potential energy molecular dynamics simulations [Rapallo, A. J Chem Phys 2004, 121, 4033] with well-established thermostatting techniques used in the framework of standard molecular dynamics simulations, such as the Andersen thermostat and the Nosé-Hoover chain thermostat. The proposed strategy leads to very accurate and drift-free potential energy conservation during the whole sampling process and, importantly, especially when dealing with high-dimensional or complicated potential functions, it does not require the calculation of the Hessian of the potential energy function. The technique proved to be very reliable for sampling both low- and high-dimensional surfaces. © 2006 Wiley Periodicals, Inc. J Comput Chem 27: 414-425, 2006 [source]


Crisis Management in France: Trends, Shifts and Perspectives

JOURNAL OF CONTINGENCIES AND CRISIS MANAGEMENT, Issue 4 2002
Patrick Lagadec
The object of this article is to give an idea of crisis management in France. I will look at two principal axes: firstly, a simplified outline of the system as it has evolved over the years and with regard to the major changes it is undergoing today; secondly, an overview of the efforts recently made by the most progressive actors in the field. Traditionally, all analyses of this type have concentrated on the French exception, that is, a centralised country answering to a strong state, largely influenced by past references, doctrines, hierarchical rules and technical dispositions. Although this image is still very accurate in many respects, France has been progressively losing its 'classicism'. This has come about as a result, first and foremost, of the growing number of crises which contradict the logic of long-standing references. Uncertainties, multiplicity of actors, masses of information, major surprises, cross-over events and abrupt changes are but some of the elements which are increasingly difficult to absorb within pre-established historical models. With the profusion of new actors and networks of people unaware of former royal or Napoleonic regulations, the cards are largely being dealt between the public and the private, the central and the local, the national and the international, and so on. Transformation is continuously occurring through the accumulation of new laws (e.g. decentralisation) or specific adjustments (e.g. critical infrastructures). International markets and new information technologies also play a key role in this transformation. But perhaps the most powerful motors for change are crises. More often than not, crises lead to a loss of faith in as yet unquestioned references, with regard to legitimacy, credibility and responsibility. France offers a highly contrasted scene as a country still resisting inevitable change. Although there is growing disorder, new opportunities are arising.
Wishing to take a dynamic approach to these questions rather than a descriptive one, I have sought to distinguish the main themes and their interactions. I will particularly look at: problems raised by new crises in complex societies; the means necessary for ensuring progress (Boin; Lagadec 2000); resistance to these measures; and, finally, some of the most promising initiatives. The vocation of the European Crisis Management Academy is to share past experience as well as questions and answers in an area of great instability and critical stakes. [source]


Estimation of Age-at-Death for Adult Males Using the Acetabulum, Applied to Four Western European Populations,

JOURNAL OF FORENSIC SCIENCES, Issue 4 2007
Carme Rissech Ph.D.
Abstract: Methods to estimate adult age from observations of skeletal elements are not very accurate, which motivates the development of better methods. In this article, we test a recently published method based on the acetabulum and Bayesian inference, developed using the Coimbra collection (Portugal). To evaluate its utility in other populations, this methodology was applied to 394 specimens from four different documented Western European collections. Four strategies of analysis to estimate age were outlined: (a) each series separately; (b) the Lisbon collection, taking the Coimbra collection as reference; (c) the Barcelona collection, taking both Portuguese collections as reference; and (d) the London collection, taking the three Iberian collections combined as reference. Results indicate that estimates are accurate (83-100%). As might be expected, the least accurate estimates were obtained when the most distant collection was used as a reference. Observations of the fused acetabulum can be used to make accurate estimates of age for adults of any age, with less accurate estimates when a more distant reference collection is used. [source]
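The Bayesian machinery behind this kind of age estimation can be sketched generically: a prior over age classes is updated by the likelihood of the observed skeletal trait, with the likelihoods estimated from a documented reference collection. All numbers and class labels below are made up for illustration; they are not the probabilities of the tested method.

```python
# Hypothetical reference data: P(observed acetabular stage | age class)
# estimated from a documented collection, plus a uniform prior.
age_classes = ["20-39", "40-59", "60+"]
prior = {a: 1 / 3 for a in age_classes}
likelihood = {  # made-up conditional probabilities of the observed stage
    "20-39": 0.1,
    "40-59": 0.3,
    "60+": 0.6,
}

# Bayes' theorem: posterior(a) = prior(a) * likelihood(a) / evidence
evidence = sum(prior[a] * likelihood[a] for a in age_classes)
posterior = {a: prior[a] * likelihood[a] / evidence for a in age_classes}
```

With a uniform prior the posterior simply renormalises the likelihoods, so the age class in which the observed stage is most common receives the highest posterior probability.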


Machine learning for Arabic text categorization

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 8 2006
Rehab M. Duwairi
In this article we propose a distance-based classifier for categorizing Arabic text. Each category is represented as a vector of words in an m-dimensional space, and documents are classified on the basis of their closeness to the feature vectors of categories. The classifier, in its learning phase, scans the set of training documents to extract features of categories that capture inherent category-specific properties; in its testing phase, the classifier uses the previously determined category-specific features to categorize unclassified documents. Stemming was used to reduce the dimensionality of the feature vectors of documents. The accuracy of the classifier was tested by carrying out several categorization tasks on an in-house collected Arabic corpus. The results show that the proposed classifier is very accurate and robust. [source]
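A minimal sketch of such a distance-based classifier follows, using cosine similarity to summed term vectors per category; the paper's stemming and feature-selection steps are omitted, and all names and training data are illustrative.

```python
import math
from collections import Counter

def vectorize(tokens, vocab):
    # Term-frequency vector of a document over a fixed vocabulary
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def train(labeled_docs):
    # Represent each category as the summed term vector of its documents
    vocab = sorted({w for doc, _ in labeled_docs for w in doc})
    centroids = {}
    for doc, label in labeled_docs:
        vec = vectorize(doc, vocab)
        acc = centroids.setdefault(label, [0] * len(vocab))
        for i, val in enumerate(vec):
            acc[i] += val
    return vocab, centroids

def classify(doc, vocab, centroids):
    # Assign the category whose vector is closest in the cosine sense
    vec = vectorize(doc, vocab)
    return max(centroids, key=lambda c: cosine(vec, centroids[c]))

docs = [
    (["match", "goal", "team"], "sports"),
    (["election", "vote", "party"], "politics"),
]
vocab, centroids = train(docs)
label = classify(["goal", "team", "win"], vocab, centroids)
```

Out-of-vocabulary words (here "win") simply contribute nothing to the similarity, which is the usual behaviour of fixed-vocabulary vector-space classifiers.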


A SIMPLE METHOD FOR ESTIMATING BASEFLOW AT UNGAGED LOCATIONS,

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2001
Kenneth W. Potter
ABSTRACT: Baseflow, or water that enters a stream from slowly varying sources such as ground water, can be critical to humans and ecosystems. We evaluate a simple method for estimating baseflow parameters at ungaged sites. The method uses one or more baseflow discharge measurements at the ungaged site and long-term streamflow data from a nearby gaged site. A given baseflow parameter, such as the median, is estimated as the product of the corresponding gage site parameter and the geometric mean of the ratios of the measured baseflow discharges and the concurrent discharges at the gage site. If baseflows at the gaged and ungaged sites have a bivariate lognormal distribution with high correlation and nearly equal log variances, the estimated baseflow parameters are very accurate. We tested the proposed method using long-term streamflow data from two watershed pairs in the Driftless Area of southwestern Wisconsin. For one watershed pair the theoretical assumptions are well met; for the other the log variances are substantially different. In the first case, the method performs well for estimating both annual and long-term baseflow parameters. In the second, the method performs remarkably well for estimating annual mean and annual median baseflow discharge, but less well for estimating the annual lower decile and the long-term mean, median, and lower decile. In general, the use of four measurements in a year is not substantially better than the use of two. [source]
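The estimator described above reduces to one line of arithmetic. In this sketch (the function name and all numbers are hypothetical), the median baseflow at an ungaged site is estimated from the gaged-site median and two spot measurements with their concurrent gaged discharges:

```python
import math
import statistics

def estimate_baseflow_param(gage_param, ungaged_measurements, concurrent_gage_flows):
    """Estimate a baseflow parameter (e.g. the median) at an ungaged site.

    Per the abstract: the estimate is the gaged-site parameter multiplied
    by the geometric mean of the ratios of the baseflow measurements at
    the ungaged site to the concurrent discharges at the gaged site.
    """
    ratios = [u / g for u, g in zip(ungaged_measurements, concurrent_gage_flows)]
    geo_mean = math.exp(statistics.fmean(math.log(r) for r in ratios))
    return gage_param * geo_mean

# Hypothetical numbers: median baseflow at the gage is 4.0 m^3/s; two spot
# measurements at the ungaged site, each 1/4 of the concurrent gaged flow.
est = estimate_baseflow_param(4.0, [1.1, 0.9], [4.4, 3.6])
```

Here both ratios equal 0.25, so the estimated ungaged-site median is 1.0 m³/s; the geometric mean is used because the method assumes lognormally distributed baseflows.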


What can we learn on the thermal history of the Universe from future cosmic microwave background spectrum measurements at long wavelengths?

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2003
C. Burigana
ABSTRACT We analyse the implications of future observations of the cosmic microwave background (CMB) absolute temperature at centimetre and decimetre wavelengths, where ground, balloon and space experiments are currently under way to complement the accurate COBE/FIRAS data available at λ ≲ 1 cm. Our analysis shows that forthcoming ground and balloon measurements will allow a better understanding of free-free distortions but will not be able to significantly improve the constraints already provided by the FIRAS data on the possible energy exchanges in the primeval plasma. The same holds even for observations with sensitivities up to ~10 times better than those of forthcoming experiments. Thus, we have studied the impact of very high-quality data, such as those, in principle, achievable with a space experiment such as the Diffuse Microwave Emission Survey (DIMES) planned to measure the CMB absolute temperature at 0.5 ≲ λ ≲ 15 cm with a sensitivity of ~0.1 mK, close to that of FIRAS. We have demonstrated that such high-quality data would improve by a factor of ~50 the FIRAS results on the fractional energy exchanges, Δε/εi, associated with dissipation processes possibly occurring in a wide range of cosmic epochs, at intermediate and high redshifts (yh ≳ 1), and that the energy dissipation epoch could also be significantly constrained. By jointly considering two dissipation processes occurring at different epochs, we demonstrated that the sensitivity and frequency coverage of a DIMES-like experiment would allow one to accurately recover the epoch and the amount of energy possibly injected into the radiation field at early and intermediate epochs even in the presence of a possible late distortion, while the constraints on the energy possibly dissipated at late epochs can be improved by a factor of ~2. In addition, such measurements can provide an independent and very accurate cross-check of the FIRAS calibration. 
Finally, a DIMES-like experiment will be able to provide indicative independent estimates of the baryon density: the product Ωb H0² can be recovered within a factor of ~2-5 even in the case of (very small) early distortions with Δε/εi ~ (5-2) × 10⁻⁶. On the other hand, for Ωb (H0/50)² ≳ 0.2, an independent baryon density determination with an accuracy at the per cent level, comparable to that achievable with CMB anisotropy experiments, would require an accuracy of ~1 mK or better in the measurement of possible early distortions, but up to a wavelength from ~ a few dm to ~7 dm, according to the baryon density value. [source]


Comparison of PDQuest and Progenesis software packages in the analysis of two-dimensional electrophoresis gels

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 10 2003
Arsi T. Rosengren
Abstract Efficient analysis of protein expression by using two-dimensional electrophoresis (2-DE) data relies on the use of automated image processing techniques. The overall success of this research depends critically on the accuracy and the reliability of the analysis software. In addition, the software has a profound effect on the interpretation of the results obtained, and the amount of user intervention demanded during the analysis. The choice of analysis software that best meets specific needs is therefore of interest to the research laboratory. In this paper we compare two advanced analysis software packages, PDQuest and Progenesis. Their evaluation is based on quantitative tests at three different levels of standard 2-DE analysis: spot detection, gel matching and spot quantitation. As test materials we use three gel sets previously used in a similar comparison of Z3 and Melanie, and three sets of gels from our own research. It was observed that the quality of the test gels critically influences the spot detection and gel matching results. Both packages were sensitive to the parameter or filter settings with respect to the tendency of finding true positive and false positive spots. Quantitation results were very accurate for both analysis software packages. [source]


Moment approximation for least-squares estimators in dynamic regression models with a unit root,

THE ECONOMETRICS JOURNAL, Issue 2 2005
Jan F. Kiviet
Summary: To find approximations for the bias, variance and mean-squared error of least-squares estimators for all coefficients in a linear dynamic regression model with a unit root, we derive asymptotic expansions and examine their accuracy by simulation. It is found that in this particular context useful expansions exist only when the autoregressive model contains at least one non-redundant exogenous explanatory variable. Surprisingly, the large-sample and small-disturbance asymptotic techniques give closely related results, which is not the case in stable dynamic regression models. We specialize our general expressions for moment approximations to the case of the random walk with drift model and find that they are unsatisfactory when the drift is small. Therefore, we develop what we call small-drift asymptotics, which proves to be very accurate, especially when the sample size is very small. [source]


Self-assessment and continuing professional development: The Canadian perspective

THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 1 2008
FRCPC, Ivan Silver MD
Abstract Introduction: Several recent studies highlight that physicians are not very accurate at assessing their competence in clinical domains when compared to objective measures of knowledge and performance. Instead of continuing to try to train physicians to be more accurate self-assessors, the research suggests that physicians will benefit from learning programs that encourage them to reflect on their clinical practice, continuously seek answers to clinical problems they face, compare their knowledge and skills to clinical practice guidelines and benchmarks, and seek feedback from peers and their health care team. Methods: This article describes the self-assessment learning activities of the College of Family Physicians of Canada Maintenance of Proficiency (Mainpro®) program and the Royal College of Physicians and Surgeons of Canada Maintenance of Certification (MOC) program. Results: The MOC and Mainpro® programs incorporate several self-evaluation learning processes and tools that encourage physicians to assess their professional knowledge and clinical performance against objective measures, as well as guided self-audit learning activities that encourage physicians to gather information about their practices and reflect on it individually, with peers and with their health care team. Physicians are also rewarded with extra credits when they participate in either of these kinds of learning activities. Discussion: In the future, practice-based learning that incorporates self-assessment learning activities will play an increasingly important role as regulators mandate that all physicians participate in continuing professional development activities. Research in this area should be directed to understanding more about reflection in practice and how we can enable physicians to be more mindful. [source]


Valuing credit derivatives using Gaussian quadrature: A stochastic volatility framework

THE JOURNAL OF FUTURES MARKETS, Issue 1 2004
Nabil Tahani
This article proposes semi-closed-form solutions to value derivatives on mean reverting assets. A very general mean reverting process for the state variable and two stochastic volatility processes, the square-root process and the Ornstein-Uhlenbeck process, are considered. For both models, semi-closed-form solutions for the characteristic functions are derived and then inverted using the Gauss-Laguerre quadrature rule to recover the cumulative probabilities. As benchmarks, European call options are valued within the following frameworks: Black and Scholes (1973) (representing constant volatility and no mean reversion), Longstaff and Schwartz (1995) (representing constant volatility and mean reversion), and Heston (1993) and Zhu (2000) (representing stochastic volatility and no mean reversion). These comparisons show that the numerical prices converge rapidly to the exact price. When applied to the general models proposed (representing stochastic volatility and mean reversion), the Gauss-Laguerre rule proves very efficient and very accurate. As applications, pricing formulas for credit spread options, caps, floors, and swaps are derived. It is also shown that even weak mean reversion can have a major impact on option prices. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:3-35, 2004 [source]
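The Gauss-Laguerre rule used for the inversion integrals approximates ∫₀^∞ e^(−x) f(x) dx by a weighted sum of f at the roots of the n-th Laguerre polynomial. A minimal check of the rule itself (on a known integral rather than the paper's characteristic functions):

```python
import numpy as np

# 30-point Gauss-Laguerre rule: int_0^inf e^{-x} f(x) dx ~ sum_i w_i f(x_i).
# Verified here on int_0^inf e^{-x} cos(x) dx = 1/2; the paper applies the
# same rule to the semi-infinite integrals that invert characteristic
# functions into cumulative probabilities.
nodes, weights = np.polynomial.laguerre.laggauss(30)
approx = float(np.sum(weights * np.cos(nodes)))
```

Because the rule is exact for polynomials up to degree 2n−1 against the e^(−x) weight, smooth integrands such as this one converge very quickly, which is the property the pricing application exploits.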


Estimating the duration of common elective operations: implications for operating list management

ANAESTHESIA, Issue 8 2006
J. J. Pandit
Summary Over-running operating lists are known to be a common cause of cancellation of operations on the day of surgery. We investigated whether lists were overbooked because surgeons were optimistic in their estimates of the time that operations would take to complete. We used a questionnaire to assess the estimates of total operation time of 22 surgeons, 35 anaesthetists and 16 senior nursing staff for 31 common, general surgical and urological procedures. The response rate was 66%. We found no difference between the estimates of these three groups of staff, or between these estimates and times obtained from theatre computer records (p = 0.722). We then applied the average of the surgeons' estimates prospectively to 50 consecutive published surgical lists. Surgical estimates were very accurate in predicting the actual duration of the list (r2 = 0.61; p < 0.001), but were poor at booking the list to within its scheduled duration: 50% of lists were predictably overbooked, 50% over-ran their scheduled time, and 34% of lists suffered a cancellation. We suggest that using the estimates of operating times to plan lists would reduce the incidence of predictable over-runs and cancellations. [source]