Systematic Deviations

Selected Abstracts


Evaluation of megavoltage CT imaging protocols in patients with lung cancer

JOURNAL OF MEDICAL IMAGING AND RADIATION ONCOLOGY, Issue 1 2010
S Smith
Summary Currently, megavoltage CT studies in most centres with tomotherapy units are performed prior to every treatment for patient set-up verification and position correction. However, daily imaging adds to the total treatment time, which may cause patient discomfort, and increases the imaging dose. In this study, four alternative megavoltage CT imaging protocols (images obtained during the first five fractions, once per week, on alternating fractions, and daily on alternate weeks) were evaluated retrospectively using the daily position correction data for 42 patients with lung cancer. The additional uncertainty introduced by using a specific protocol with respect to daily imaging, or residual uncertainty, was analysed on both a patient and a population basis. The impact of less frequent imaging schedules on treatment margin calculation was also analysed. Systematic deviations were reduced with increased imaging frequency, while random deviations were largely unaffected. Mean population systematic errors were small for all protocols evaluated. In the protocol showing the greatest error, the treatment margins necessary to accommodate residual errors were 1.2, 1.3 and 1.7 mm larger in the left–right, superior–inferior and anterior–posterior directions, respectively, compared with the margins calculated using the daily imaging data. The increased uncertainty due to the use of less frequent imaging protocols may be acceptable when compared with other sources of uncertainty in lung cancer cases, such as target volume delineation and respiratory motion. Further work needs to be carried out to establish the impact of increased residual errors on the dose distribution. [source]
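The link the abstract draws between residual systematic/random errors and treatment margins can be illustrated with the widely used van Herk margin recipe, M = 2.5Σ + 0.7σ. This is a generic sketch with invented deviation data, not the study's protocol or numbers:

```python
import numpy as np

def margin_van_herk(systematic_sd, random_sd):
    """Van Herk margin recipe: M = 2.5 * Sigma + 0.7 * sigma (same units as inputs)."""
    return 2.5 * systematic_sd + 0.7 * random_sd

# Illustrative per-patient setup deviations (mm) in one direction for a small
# cohort: the mean shift per patient (systematic part) and daily scatter (random).
patient_means = np.array([1.0, -0.5, 2.0, 0.3, -1.2])   # per-patient systematic component
daily_scatter = np.array([1.5, 2.0, 1.0, 1.8, 1.3])     # per-patient random SD

Sigma = patient_means.std(ddof=1)              # population systematic error
sigma = np.sqrt((daily_scatter ** 2).mean())   # pooled random error
print(f"margin = {margin_van_herk(Sigma, sigma):.1f} mm")
```

A less frequent imaging protocol leaves a larger residual Σ, which enters the recipe with the 2.5 weight and therefore widens the margin faster than the random term.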


Evaluation of pedotransfer functions predicting hydraulic properties of soils and deeper sediments

JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 2 2004
Bernhard Wagner
Abstract Eight pedotransfer functions (PTFs) originally calibrated to soil data are used to evaluate the hydraulic properties of soils and deeper sediments. Only PTFs that had shown good results in previous investigations are considered. Two data sets were used for this purpose: a data set of measured pressure heads vs. water contents for 347 soil horizons (802 measured pairs) from Bavaria (Southern Germany), and a data set of 39 undisturbed samples of Tertiary sediments from the deeper ground (down to 100 m depth) in the molasse basin north of the Alps, comprising 840 measured water contents vs. pressure heads and unsaturated hydraulic conductivities. A statistical analysis of the PTFs shows that their performance is quite similar with respect to predicting soil water contents. Less satisfactory results were obtained when the PTFs were applied to predicting the water content of sediments from the deeper ground. The predicted unsaturated hydraulic conductivities show about the same uncertainty as for soils in a previous study. Systematic deviations of the predicted values indicate that adapting the PTFs to the specific conditions of the deeper ground should make improved predictions possible. [source]
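How a PTF's predictions are typically scored against measured retention data can be sketched as a root-mean-square error comparison. The van Genuchten retention model is standard in this field, but the parameter values and "measured" data below are invented for illustration, not taken from the paper's eight PTFs:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content at pressure head h (van Genuchten retention model)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

def rmse(predicted, measured):
    return np.sqrt(np.mean((predicted - measured) ** 2))

# Invented measured retention data for one sample (pressure head in cm).
h = np.array([10.0, 100.0, 1000.0, 15000.0])
theta_measured = np.array([0.40, 0.30, 0.18, 0.08])

# Retention parameters as a hypothetical PTF might predict them from texture data.
theta_pred = van_genuchten(h, theta_r=0.05, theta_s=0.43, alpha=0.02, n=1.4)
print(f"RMSE = {rmse(theta_pred, theta_measured):.3f}")
# A systematic deviation shows up as a biased mean residual:
print(f"mean residual = {np.mean(theta_pred - theta_measured):+.3f}")
```

Separating the mean residual (bias) from the RMSE is what reveals the systematic component that the abstract argues a recalibration to deeper sediments could remove.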


Introduction to diffusion tensor imaging mathematics: Part III.

CONCEPTS IN MAGNETIC RESONANCE, Issue 2 2006
Tensor calculation, noise, optimization, simulations
Abstract The mathematical aspects of diffusion tensor magnetic resonance imaging (DTMRI, or DTI), the measurement of the diffusion tensor by magnetic resonance imaging (MRI), are discussed in this three-part series. Part III begins with a comparison of different ways to calculate the tensor from diffusion-weighted imaging data. Next, the effects of noise on signal intensities and diffusion tensor measurements are discussed. In MRI signal intensities as well as DTI parameters, noise can introduce a bias (systematic deviation) as well as scatter (random deviation) in the data. Propagation-of-error formulas are explained with examples. Step-by-step procedures for simulating diffusion tensor measurements are presented. Finally, methods for selecting the optimal b factor and number of b = 0 images for measuring several properties of the diffusion tensor, including the trace (or mean diffusivity) and anisotropy, are presented. © 2006 Wiley Periodicals, Inc. Concepts Magn Reson Part A 28A: 155–179, 2006 [source]
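The abstract's point that noise introduces both bias and scatter in magnitude MR data can be reproduced with a small Monte Carlo: complex Gaussian noise on a magnitude signal is Rician, which inflates low-SNR signals above their true value and thereby biases a derived diffusion coefficient. All numbers here are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_magnitude(true_signal, noise_sd, n):
    """Simulate n magnitude measurements: |true signal + complex Gaussian noise|."""
    real = true_signal + rng.normal(0.0, noise_sd, n)
    imag = rng.normal(0.0, noise_sd, n)
    return np.hypot(real, imag)

true_s0, true_sb = 100.0, 20.0   # signals at b = 0 and at b (arbitrary units)
b = 1.0e3                        # s/mm^2
noise_sd = 10.0

s0 = rician_magnitude(true_s0, noise_sd, 100_000)
sb = rician_magnitude(true_sb, noise_sd, 100_000)

# Apparent diffusion coefficient from the mean signals:
adc_true = np.log(true_s0 / true_sb) / b
adc_noisy = np.log(s0.mean() / sb.mean()) / b

print(f"low-SNR signal: noisy mean {sb.mean():.1f} vs true {true_sb}  (bias)")
print(f"ADC: true {adc_true:.2e}, biased {adc_noisy:.2e}")
```

The scatter of the simulated values is the random deviation; the offset of their mean from the true value is the systematic one, and it propagates into the tensor estimate.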


Unified algorithm for undirected discovery of exception rules

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2005
Einoshin Suzuki
This article presents an algorithm that seeks every possible exception rule that violates a commonsense rule and satisfies several assumptions of simplicity. Exception rules, which represent systematic deviation from commonsense rules, are often found interesting. Discovery of pairs that consist of a commonsense rule and an exception rule, resulting from undirected search for unexpected exception rules, was successful in various domains. In the past, however, an exception rule represented a change of conclusion caused by adding an extra condition to the premise of a commonsense rule. That approach formalized only one type of exception and failed to represent other types. To provide a systematic treatment of exceptions, we categorize exception rules into 11 categories, and we propose a unified algorithm for discovering all of them. Preliminary results on 15 real-world datasets provide empirical evidence of the effectiveness of our algorithm in discovering interesting knowledge. The empirical results also match our theoretical analysis of exceptions, showing that the 11 types can be partitioned into three classes according to the frequency with which they occur in data. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 673–691, 2005. [source]
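The core rule-pair idea (a commonsense rule together with an exception rule that flips its conclusion under an extra condition) can be sketched on a toy dataset. This is a simplified illustration of the one classical exception type the abstract mentions, not Suzuki's unified algorithm or its 11 categories:

```python
# Toy transactions: each record is a set of attribute items.
data = [
    {"bird", "flies"},
    {"bird", "flies"},
    {"bird", "flies"},
    {"bird", "flies"},
    {"bird", "penguin", "not_flies"},
    {"bird", "penguin", "not_flies"},
]

def confidence(premise, conclusion):
    """Fraction of records covered by `premise` that also contain `conclusion`."""
    covered = [t for t in data if premise <= t]
    if not covered:
        return 0.0
    return sum(conclusion in t for t in covered) / len(covered)

# Commonsense rule: bird -> flies (holds for most records).
cs_conf = confidence({"bird"}, "flies")

# Undirected search for an extra condition that flips the conclusion.
items = set().union(*data) - {"flies", "not_flies", "bird"}
pairs = []
for extra in sorted(items):
    ex_conf = confidence({"bird", extra}, "not_flies")
    if cs_conf >= 0.6 and ex_conf >= 0.9:
        pairs.append(("bird -> flies", f"bird & {extra} -> not_flies"))

print(pairs)
```

The thresholds (0.6 and 0.9) are arbitrary illustration values; the real algorithm additionally constrains rule simplicity and searches all exception structures, not just an added premise condition.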


Corruption, Productivity and Socialism

KYKLOS INTERNATIONAL REVIEW OF SOCIAL SCIENCES, Issue 2 2003
Geoffrey Wyatt
Summary The level of productivity is correlated across countries with measures of (lack of) corruption, but this appears to be due to a common association of these variables with measures of civil infrastructure, here measured by a combination of governance indexes labelled 'rule of law' and 'government effectiveness'. New instruments based on the size and spatial distributions of cities within the countries of the world were constructed in order to explore the causal relationships between civil infrastructure and productivity. Civil infrastructure accounts for a substantial fraction of the global variation in output per worker across countries. Within this empirical pattern there is a systematic deviation associated with the current and former socialist states, which have both lower productivity and weaker civil infrastructure than would be predicted for otherwise similar non-socialist states. However, for a given level of the index of civil infrastructure these states are also shown to have a higher level of productivity than otherwise similar non-socialist states. The unconditionally low productivity of socialist states is attributed entirely to the indirectly deleterious effects that socialism had on civil infrastructure, which more than offset its directly positive effect on output. [source]
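The instrumental-variables logic behind the city-distribution instruments can be sketched as a two-stage least squares on synthetic data. The variable names and coefficients are invented, and a simple Gaussian instrument stands in for the paper's actual instruments:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Synthetic world: instrument z shifts civil infrastructure x, which raises
# log productivity y; u is a confounder hitting both x and y (endogeneity).
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 1.5 * x - 0.9 * u + rng.normal(size=n)   # true causal effect: 1.5

def ols_slope(x, y):
    """Slope of y on x (univariate OLS through the data means)."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# OLS is biased by u; 2SLS replaces x with its instrument-predicted part.
x_hat = ols_slope(z, x) * z
beta_ols = ols_slope(x, y)
beta_2sls = ols_slope(x_hat, y)
print(f"OLS {beta_ols:.2f} vs 2SLS {beta_2sls:.2f} (true 1.5)")
```

A valid instrument must move productivity only through civil infrastructure; that exclusion restriction is the substantive claim the paper makes for its city-based instruments.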


Dynamical modelling of luminous and dark matter in 17 Coma early-type galaxies

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2007
J. Thomas
ABSTRACT Dynamical models for 17 early-type galaxies in the Coma cluster are presented. The galaxy sample consists of flattened, rotating as well as non-rotating early-types, including cD and S0 galaxies, with luminosities between MB = −18.79 and −22.56. Kinematical long-slit observations cover at least the major and minor axes and extend to 1–4 reff. Axisymmetric Schwarzschild models are used to derive stellar mass-to-light ratios and dark halo parameters. In every galaxy, the best fit with dark matter matches the data better than the best fit without. The statistical significance is over 95 per cent for eight galaxies, around 90 per cent for five galaxies, and for four galaxies it is not significant. For the highly significant cases, systematic deviations between models without dark matter and the observed kinematics are clearly seen; for the remaining galaxies, the differences are more statistical in nature. Best-fitting models contain 10–50 per cent dark matter inside the half-light radius. The central dark matter density is at least one order of magnitude lower than the luminous mass density, independent of the assumed dark matter density profile. The central phase-space density of dark matter is often orders of magnitude lower than that in the luminous component, especially when the halo core radius is large. The orbital system of the stars along the major axis is slightly dominated by radial motions. Some galaxies show tangential anisotropy along the minor axis, which is correlated with the minor-axis Gauss–Hermite coefficient H4. Changing the balance between data fit and regularization constraints does not change the reconstructed mass structure significantly: model anisotropies tend to strengthen if the weight on regularization is reduced, but the general property of a galaxy to be radially or tangentially anisotropic does not change.
This paper aims to lay the basis for a subsequent detailed analysis of luminous and dark matter scaling relations, orbital dynamics and stellar populations. [source]
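The per-galaxy significance of "with dark matter vs without" can be illustrated with a nested-model likelihood-ratio test on χ² values. The abstract does not state the exact test used, and the χ² numbers below are made up, so this is only a sketch of the statistical logic:

```python
import math

def lrt_significance(chi2_no_dm, chi2_dm, extra_params=1):
    """Likelihood-ratio test: confidence that the dark-matter model is preferred,
    from the chi^2 improvement of nested fits (closed form for 1 extra parameter)."""
    delta = chi2_no_dm - chi2_dm
    if extra_params != 1:
        raise NotImplementedError("closed form below is for 1 extra parameter")
    p_value = math.erfc(math.sqrt(delta / 2.0))  # chi-squared survival fn, 1 dof
    return 1.0 - p_value

# Illustrative chi^2 values for one galaxy (invented numbers):
print(f"confidence = {lrt_significance(48.0, 42.0):.3f}")
```

A Δχ² of about 4 corresponds to the 95 per cent threshold quoted in the abstract (for one extra free parameter); smaller improvements fall into the "not significant" group.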


Testing the accuracy of synthetic stellar libraries

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
Lucimara P. Martins
ABSTRACT One of the main ingredients of stellar population synthesis models is a library of stellar spectra. Both empirical and theoretical libraries are used for this purpose, and the question of which one is preferable is still debated in the literature. Empirical and theoretical libraries have been improved significantly over the years, and many libraries have become available lately. However, it is not clear in the literature what the advantages of each of these new libraries are, and how far models still lag behind observations. Here we compare in detail some of the major theoretical libraries available in the literature with observations, aiming to detect weaknesses and strengths from the stellar population modelling point of view. Our test is twofold: we compared model predictions and observations for broad-band colours and for high-resolution spectral features. Concerning the broad-band colours, we measured the stellar colours given by three recent sets of model atmospheres and flux distributions, and compared them with a recent UBVRIJHK calibration which is mostly based on empirical data. We found that the models can reproduce with reasonable accuracy the stellar colours for a fair interval in effective temperatures and gravities. The exceptions are (1) the U−B colour, where the models are typically redder than the observations, and (2) the very cool stars in general (V−K ≳ 3). Castelli & Kurucz is the set of models that best reproduces the bluest colours (U−B, B−V), while Gustafsson et al. and Brott & Hauschildt more accurately predict the visual colours. The three sets of models perform in a similar way for the infrared colours. Concerning the high-resolution spectral features, we measured 35 spectral indices defined in the literature on three high-resolution synthetic libraries, and compared them with the observed measurements given by three empirical libraries. The measured indices cover the wavelength range from ∼3500 to ∼8700 Å.
We found that the direct comparison between models and observations is not a simple task, given the uncertainties in the parameter determinations of the empirical libraries. Taking that aside, we found that in general the three libraries present similar behaviours and systematic deviations. For stars with Teff ≲ 7000 K, the library by Coelho et al. is the one with the best average performance. We detect that lists of atomic and molecular line opacities still need improvement, especially in the blue region of the spectrum and for the cool stars (Teff ≲ 4500 K). [source]
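A spectral index of the kind measured here is typically a pseudo-equivalent width: the flux deficit in a feature band measured against a pseudo-continuum interpolated between two flanking bands. A sketch on a synthetic absorption line; the band definitions below are illustrative, not an actual Lick-style index definition:

```python
import numpy as np

def pseudo_ew(wave, flux, blue, feature, red):
    """Pseudo-equivalent width (Angstrom) of the `feature` band, with the continuum
    interpolated linearly between the mean fluxes of the `blue` and `red` bands.
    Assumes uniformly sampled wavelengths."""
    def band_mean(band):
        m = (wave >= band[0]) & (wave <= band[1])
        return wave[m].mean(), flux[m].mean()
    (wb, fb), (wr, fr) = band_mean(blue), band_mean(red)
    m = (wave >= feature[0]) & (wave <= feature[1])
    cont = fb + (fr - fb) * (wave[m] - wb) / (wr - wb)
    dw = wave[1] - wave[0]
    return np.sum(1.0 - flux[m] / cont) * dw

# Synthetic spectrum: flat continuum with a Gaussian absorption line at 5175 A.
wave = np.arange(5100.0, 5250.0, 0.5)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 5175.0) / 3.0) ** 2)

ew = pseudo_ew(wave, flux, blue=(5110, 5140), feature=(5160, 5190), red=(5210, 5240))
print(f"index = {ew:.2f} A")
```

Measuring the same index definition on synthetic and empirical spectra of stars with matched parameters is what exposes the systematic deviations the abstract reports.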


High-precision calibration of spectrographs

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY: LETTERS (ELECTRONIC), Issue 1 2010
T. Wilken
ABSTRACT We present the first stringent tests of a novel calibration system based on a laser frequency comb (LFC) for radial velocity measurements. The tests were obtained with the high-resolution optical spectrograph HARPS (High Accuracy Radial velocity Planet Searcher). By using only one echelle order, we obtain a calibration repeatability of 15 cm s⁻¹ for exposures that are several hours apart. This is comparable with a simultaneous calibration using a Th–Ar lamp that makes use of all 72 echelle orders. In both cases, the residuals are compatible with the computed photon noise. Averaging all LFC exposures recorded over a few hours, we could obtain a calibration curve with residuals of 2.4 m s⁻¹. Thanks to the adjustable and optimally chosen line density of the LFC, we resolve a periodicity of 512 pixels in the calibration curve that is due to the manufacturing process of the CCD mask. The previous Th–Ar calibration was unable to resolve these systematic deviations, resulting in deviations of up to 70 m s⁻¹ from the true calibration curve. In the future, we hope to be able to make use of all echelle orders in order to obtain a calibration repeatability below 1 cm s⁻¹ and an absolute calibration within a few m s⁻¹. [source]
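Recovering a fixed-period systematic such as the 512-pixel CCD-mask pattern can be sketched as a linear least-squares fit of a sinusoid of known period to calibration residuals. The amplitude, noise level and residual units below are invented, not HARPS values:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 512.0                      # pixels, known from the CCD manufacturing mask

pixels = np.arange(4096.0)
true_amp, noise = 3.0, 1.0          # arbitrary residual units
residuals = (true_amp * np.sin(2 * np.pi * pixels / period)
             + rng.normal(0, noise, pixels.size))

# With the period fixed, the fit is linear in a sin/cos basis.
A = np.column_stack([np.sin(2 * np.pi * pixels / period),
                     np.cos(2 * np.pi * pixels / period)])
coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
amp = np.hypot(*coef)               # amplitude from the two quadrature coefficients
print(f"recovered amplitude = {amp:.2f} (true {true_amp})")
```

Resolving such a pattern requires calibration lines spaced much more finely than the 512-pixel period, which is why the dense, adjustable LFC line grid succeeds where the sparse Th–Ar lines could not.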


Aircraft type-specific errors in AMDAR weather reports from commercial aircraft

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 630 2008
C. Drüe
Abstract AMDAR (Aircraft Meteorological DAta Relay) automated weather reports from commercial aircraft provide an increasing amount of input data for numerical weather prediction models. Previous studies have investigated the quality of AMDAR data. A few of these studies, however, have revealed indications of systematic errors that depend on the aircraft type. Since different airlines use different algorithms to generate AMDAR reports, it has remained unclear whether a dependency on the aircraft type is caused by physical properties of the aircraft or by different data processing algorithms. In the present study, a special AMDAR dataset was used to investigate the physical, type-dependent errors of AMDAR reports. This dataset consists of AMDAR measurements by Lufthansa aircraft performing over 300 landings in total at Frankfurt Rhein/Main (EDDF/FRA) on 22 days in 2004. All of these data were processed by the same software, implying that influences from different processing algorithms should not be expected. From the comparison of single descents to hourly averaged vertical profiles, it is shown that temperature measurements by different aircraft types can have systematic differences of up to 1 K. In contrast, random temperature errors of most types are estimated to be less than 0.3 K. It is demonstrated that systematic deviations in AMDAR wind measurements can be regarded as an error vector that is fixed to the aircraft reference system. The largest systematic deviations in wind measurements from different aircraft types (more than 0.5 m s⁻¹) were found in the longitudinal direction (i.e. parallel to the flight direction). Copyright © 2008 Royal Meteorological Society [source]
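The notion of a wind-error vector fixed to the aircraft reference frame can be illustrated by rotating ground-frame wind errors into aircraft coordinates using the heading: a constant longitudinal offset then reappears at every heading. The 0.6 m/s value and the frame conventions here are illustrative assumptions, not the paper's data:

```python
import numpy as np

def to_aircraft_frame(err_east, err_north, heading_deg):
    """Rotate a ground-frame wind error into aircraft coordinates.
    Returns (longitudinal, lateral): along / across the flight direction.
    Heading is degrees clockwise from north."""
    h = np.radians(heading_deg)
    longitudinal = err_east * np.sin(h) + err_north * np.cos(h)
    lateral = err_east * np.cos(h) - err_north * np.sin(h)
    return longitudinal, lateral

# A fixed 0.6 m/s longitudinal (along-track) error expressed in ground
# coordinates for several headings: rotating back always recovers the same
# aircraft-frame vector, the signature of an aircraft-fixed systematic error.
for heading in (0.0, 90.0, 225.0):
    h = np.radians(heading)
    err_e, err_n = 0.6 * np.sin(h), 0.6 * np.cos(h)   # ground-frame components
    lon, lat = to_aircraft_frame(err_e, err_n, heading)
    print(f"heading {heading:5.1f}: longitudinal {lon:+.2f}, lateral {lat:+.2f}")
```

In ground coordinates such an error would average toward zero over many flight directions, which is why the decomposition into the aircraft frame is what exposes it.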