Reasonable Accuracy (reasonable + accuracy)



Selected Abstracts


Mean stress effects in stress-life fatigue and the Walker equation

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 3 2009
N. E. DOWLING
ABSTRACT Mean stress effects in finite-life fatigue are studied for a number of sets of experimental data for steels, aluminium alloys and one titanium alloy. Specifically, the agreement with these data is examined for the Goodman, Morrow, Smith–Watson–Topper (SWT) and Walker equations. The Goodman relationship is found to be highly inaccurate. Reasonable accuracy is provided by the Morrow and by the Smith–Watson–Topper equations. But the Morrow method should not be used for aluminium alloys unless the true fracture strength is employed, instead of the more usual use of the stress-life intercept constant. The Walker equation with its adjustable fitting parameter γ gives superior results. For steels, γ is found to correlate with the ultimate tensile strength, and a linear relationship permits γ to be estimated for cases where non-zero mean stress data are not available. Relatively high-strength aluminium alloys have γ ≈ 0.5, which corresponds with the SWT method, but higher values of γ apply for relatively low-strength aluminium alloys. For both steels and aluminium alloys, there is a trend of decreasing γ with increasing strength, indicating an increasing sensitivity to mean stress. [source]
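The four mean-stress corrections compared in this abstract each map a cycle with amplitude σa and mean σm to an equivalent fully reversed stress amplitude. A minimal sketch of their textbook forms (the numeric values in the usage note are illustrative, not taken from the paper):

```python
import math

def sigma_max(sa, sm):
    """Maximum stress in a cycle with amplitude sa and mean sm."""
    return sm + sa

def goodman(sa, sm, su):
    """Goodman correction; su is the ultimate tensile strength."""
    return sa / (1.0 - sm / su)

def morrow(sa, sm, sf):
    """Morrow correction; sf is the true fracture strength (the abstract
    warns against substituting the stress-life intercept for aluminium)."""
    return sa / (1.0 - sm / sf)

def swt(sa, sm):
    """Smith-Watson-Topper: geometric mean of max stress and amplitude."""
    return math.sqrt(sigma_max(sa, sm) * sa)

def walker(sa, sm, gamma):
    """Walker equation with adjustable exponent gamma.
    gamma = 0.5 recovers the SWT correction exactly."""
    return sigma_max(sa, sm) ** (1.0 - gamma) * sa ** gamma
```

With sa = 200 MPa, sm = 100 MPa (so σmax = 300 MPa) and su = 500 MPa, the Goodman correction gives 250 MPa, and `walker(..., 0.5)` reproduces the SWT value, which is why γ ≈ 0.5 for high-strength aluminium alloys corresponds to the SWT method.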


Accounting for unresolved clouds in a 1-D solar radiative-transfer model

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 608 2005
J. Li
Abstract New methods for the treatment of solar radiative transfer through overlapping and inhomogeneous clouds are presented. First, a new approach to cloud overlap is shown: for adjacent cloud blocks, the traditional maximum overlap can be relaxed to a mixture of maximum and random overlap treatments for layers that are adjacent but not fully correlated. Second, a new radiative-transfer algorithm has been developed to deal with these various cloud overlap circumstances that is simple enough for implementation in a general-circulation model (GCM). When compared to appropriate benchmark calculations, we find that this new method produces accurate heating rates and fluxes, with relative errors generally less than 8%. Third, a new and very simple approach to treating radiative transfer through a cloud with horizontal subgrid-scale inhomogeneities is developed. This approach uses an optical-depth scaling technique to represent the subgrid-scale inhomogeneity. Finally, by combining all of the above elements, we provide a new algorithm for the combined treatment of cloud overlap and inhomogeneity, and we show that it yields very reasonable accuracy for heating rates and fluxes. Through benchmark comparisons, we show that this new algorithm provides significant improvement over existing schemes in GCMs. Copyright © 2005 Royal Meteorological Society [source]


Rigid-plastic models for the seismic design and assessment of steel framed structures

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 14 2009
C. Málaga-Chuquitaype
Abstract This paper demonstrates the applicability of response history analysis based on rigid-plastic models for the seismic assessment and design of steel buildings. The rigid-plastic force–deformation relationship as applied in steel moment-resisting frames (MRF) is re-examined, and new rigid-plastic models are developed for concentrically-braced frames and for dual structural systems consisting of MRF coupled with braced systems. This paper demonstrates that such rigid-plastic models are able to predict global seismic demands with reasonable accuracy. It is also shown that the direct relationship between peak displacement and the plastic capacity of rigid-plastic oscillators can be used to define the level of seismic demand for a given performance target. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Simplified non-linear seismic analysis of infilled reinforced concrete frames

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2005
Matjaž Dolšek
Abstract The N2 method for simplified non-linear seismic analysis has been extended in order to make it applicable to infilled reinforced concrete frames. Compared to the simple basic variant of the N2 method, two important differences apply. A multi-linear idealization of the pushover curve, which takes into account the strength degradation that occurs after the infill fails, has to be made, and specific reduction factors, developed in a companion paper, have to be used for the determination of inelastic spectra. It is shown that the N2 method can also be used for the determination of approximate summarized IDA curves. The proposed method was applied to two test buildings. The results were compared with the results obtained by non-linear dynamic analyses for three sets of ground motions, and a reasonable accuracy was demonstrated. A similar extension of the N2 method can be made to any structural system, provided that an appropriate specific R–µ–T relation is available. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Peat carbon stocks in the southern Mackenzie River Basin: uncertainties revealed in a high-resolution case study

GLOBAL CHANGE BIOLOGY, Issue 6 2008
DAVID W. BEILMAN
Abstract The organic carbon (C) stocks contained in peat were estimated for a wetland-rich boreal region of the Mackenzie River Basin, Canada, using high-resolution wetland map data, available peat C characteristic and peat depth datasets, and geostatistics. Peatlands cover 32% of the 25 119 km² study area, and consist mainly of surface- and/or groundwater-fed treed peatlands. The thickness of peat deposits measured at 203 sites was 2.5 m on average but as deep as 6 m, and highly variable between sites. Peat depths showed little relationship with terrain data within 1 and 5 km, but were spatially autocorrelated, and were generalized using ordinary kriging. Polygon-scale calculations and Monte Carlo simulations yielded a total peat C stock of 982–1025 × 10¹² g C that varied in C mass per unit area between 53 and 165 kg m⁻². This geostatistical approach showed as much as 10% more peat C than calculations using mean depths. We compared this estimate with an overlapping 7868 km² portion of an independent peat C stock estimate for western Canada, which revealed similar values for total peatland area, total C stock, and total peat C mass per unit area. However, agreement was poor within ~875 km² grids owing to inconsistencies in peatland cover and little relationship in peat depth between estimates. The greatest disagreement in mean peat C mass per unit area occurred in grids with the largest peatland cover, owing to the spatial coincidence of large cover and deep peat in our high-resolution assessment. We conclude that total peat C stock estimates in the southern Mackenzie Basin and perhaps in boreal western Canada are likely of reasonable accuracy. However, owing to uncertainties, particularly in peat depth, the quality of information regarding the location of these large stocks at scales as wide as several hundreds of square kilometers is presently much more limited. [source]


Predicting risk selection following major changes in Medicare

HEALTH ECONOMICS, Issue 4 2008
Steven D. Pizer
Abstract The Medicare Modernization Act of 2003 created several new types of private insurance plans within Medicare, starting in 2006. Some of these plan types previously did not exist in the commercial market and there was great uncertainty about their prospects. In this paper, we show that statistical models and historical data from the Medicare Current Beneficiary Survey can be used to predict the experience of new plan types with reasonable accuracy. This lays the foundation for the analysis of program modifications currently under consideration. We predict market share, risk selection, and stability for the most prominent new plan type, the stand-alone Medicare prescription drug plan (PDP). First, we estimate a model of consumer choice across Medicare insurance plans available in the data. Next, we modify the data to include PDPs and use the model to predict the probability of enrollment for each beneficiary in each plan type. Finally, we calculate mean-adjusted actual spending by plan type. We predict that adverse selection into PDPs will be substantial, but that enrollment and premiums will be stable. Our predictions correspond well to actual experience in 2006. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Evaluation of endoscopic and imaging modalities in the diagnosis of structural disorders of the ileal pouch

INFLAMMATORY BOWEL DISEASES, Issue 9 2010
Linda Tang MD
Abstract Background: Computerized tomography enterography (CTE), Gastrografin enema (GGE), magnetic resonance imaging (MRI), and pouch endoscopy (PES) have commonly been used to assess ileal pouch disorders. However, their diagnostic utility has not been systematically evaluated. The aims of this study were to compare these imaging techniques to each other and to optimize diagnosis of pouch disorders by using a combination of these diagnostic modalities. Methods: Clinical data of patients from the Pouchitis Clinic from 2003 to 2008 who had a PES and at least 1 additional imaging modality (CTE, GGE, or MRI) used for evaluation of ileal pouch disorders were retrospectively evaluated. We analyzed the accuracy, sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) with which these tests were able to diagnose pouch inlet and distal small bowel and pouch outlet strictures, pouch fistulas, sinuses, and leaks. Subsequently, accuracy was recalculated by combining 2 imaging modalities to see if this could enhance accuracy. Results: A total of 66 patients underwent evaluation with PES and 1 other imaging modality as follows: PES + CTE (n = 23), PES + GGE (n = 34), and PES + MRI (n = 26). The mean age was 41.5 ± 14.5 years, with 28 patients (42.4%) being female. Sixty patients (90.9%) had J pouches and 59 (89.4%) had a preoperative diagnosis of ulcerative colitis. Overall, CTE, GGE, MRI, and PES all had reasonable accuracy for the diagnosis of small bowel and inlet strictures (73.9%–95.4%), outlet strictures (87.9%–92.3%), fistula (76.9%–84.8%), sinus (68.0%–93.9%), and pouch leak (83.0%–93.9%). CTE had the lowest accuracy for small bowel and inlet strictures (73.9%) and MRI had the lowest accuracy for pouch sinus (68.0%). Combining 2 imaging tests can increase the accuracy of diagnosis to 100% for strictures, fistulas, sinuses, and pouch leaks.
Conclusions: CTE, GGE, MRI, and PES offer complementary information on disorders of the pouch and the combination of these tests increases diagnostic accuracy for complex cases. (Inflamm Bowel Dis 2010) [source]
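The gain from combining two modalities can be illustrated with the standard "believe-the-positive" rule for a pair of diagnostic tests. This is a sketch that assumes the tests are independent (real imaging modalities are likely correlated) and uses made-up sensitivities and specificities, not the study's values:

```python
def combine_parallel(sens1, spec1, sens2, spec2):
    """Parallel ("believe-the-positive") combination of two tests:
    call the combined result positive if EITHER test is positive.
    Under independence this raises sensitivity but lowers specificity."""
    sens = 1.0 - (1.0 - sens1) * (1.0 - sens2)  # miss only if both miss
    spec = spec1 * spec2                        # negative only if both negative
    return sens, spec
```

For example, `combine_parallel(0.8, 0.9, 0.9, 0.95)` gives a combined sensitivity of 0.98 at the cost of specificity dropping to 0.855, mirroring how pairing modalities can push detection of strictures, fistulas, sinuses, and leaks toward 100%.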


Implementation of the finite element method in the three-dimensional discontinuous deformation analysis (3D-DDA)

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2008
Roozbeh Grayeli
Abstract A modified three-dimensional discontinuous deformation analysis (3D-DDA) method is derived using four-noded tetrahedral elements to improve the accuracy of current 3D-DDA algorithm in practical applications. The analysis program for the modified 3D-DDA method is developed in a C++ environment and its accuracy is illustrated through comparisons with several analytical solutions that are available for selected problems. The predicted solutions for these problems using the modified 3D-DDA approach all show satisfactory agreement with the corresponding analytical results. Results presented in this paper demonstrate that the modified 3D-DDA method with discontinuous modeling capabilities offers a useful computational tool to determine stresses and deformations in practical problems involving fissured elastic media with reasonable accuracy. Copyright © 2008 John Wiley & Sons, Ltd. [source]


On computing the forces from the noisy displacement data of an elastic body

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2008
A. Narayana Reddy
Abstract This study is concerned with the accurate computation of the unknown forces applied on the boundary of an elastic body using its measured displacement data with noise. Vision-based minimally intrusive force-sensing using elastically deformable grasping tools is the motivation for undertaking this problem. Since this problem involves incomplete and inconsistent displacement/force data of an elastic body, it leads to an ill-posed problem known as Cauchy's problem in elasticity. Vision-based displacement measurement necessitates large displacements of the elastic body for reasonable accuracy. Therefore, we use geometrically non-linear modelling of the elastic body, which earlier attempts to solve Cauchy's elasticity problem did not consider. We present two methods to solve the problem. The first method uses the pseudo-inverse of an over-constrained system of equations. This method is shown to be ineffective when the noise in the measured displacement data is high. We attribute this to the appearance of spurious forces at regions where there should not be any forces. The second method focuses on minimizing the spurious forces by varying the measured displacements within the known accuracy of the measurement technique. Both continuum and frame elements are used in the finite element modelling of the elastic bodies considered in the numerical examples. The performance of the two methods is compared using seven numerical examples, all of which show that the second method estimates the forces with an error that is not more than the noise in the measured displacements. An experiment was also conducted to demonstrate the effectiveness of the second method in accurately estimating the applied forces. Copyright © 2008 John Wiley & Sons, Ltd. [source]


An inverse radiation problem of simultaneous estimation of heat transfer coefficient and absorption coefficient in participating media

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2003
H. M. Park
Abstract An inverse radiation problem is investigated where the spatially varying heat transfer coefficient h(z) and the absorption coefficient in the radiant cooler are estimated simultaneously from temperature measurements. The inverse radiation problem is solved through the minimization of a performance function, expressed as the sum of squared residuals between calculated and observed temperatures, using the conjugate gradient method. The gradient of the performance function is evaluated by means of the improved adjoint variable method, which can handle both the function estimation and the parameter estimation efficiently. The present method is found to estimate h(z) and the absorption coefficient with reasonable accuracy even with noisy temperature measurements. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Humidity parameters from temperature: test of a simple methodology for European conditions

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2008
Yvonne Andersson-Sköld
Abstract Atmospheric water content is important for local and regional climate, and for chemical processes of soluble and solute species in the atmosphere. Further, vapour pressure deficit (D) is one of the key controls on the opening of stomata in plants and is thus an important driver of evapotranspiration, plant respiration and biomass production, and of the uptake of harmful pollutants such as ozone through the stomata. Most meteorological stations measure both temperature and relative humidity (RH). However, even if recorded at finer time resolution, it is usually the daily or often monthly means of RH which are published in climate reports. Unfortunately, such data cannot be used to obtain the changes in RH or vapour pressure deficit over the day, as these depend strongly on the diurnal temperature variation and not on the mean temperature. Although RH typically changes significantly over the day, the ambient vapour pressure is often remarkably constant. Here a simple method to estimate diurnal vapour pressure is evaluated, based on an assumed constant vapour pressure and on recorded minimum temperatures approximating dew-point temperatures. With knowledge of temperature alone, we show that day-to-day estimates of vapour pressure, humidity and especially D can be made with reasonable accuracy. This methodology is tested using meteorological data from 32 sites covering a range of locations in Europe. Such a simple methodology may be used to extract approximate diurnal curves of vapour pressure from published meteorological data which contain only minimum temperatures for each day, or where humidity data are not available. Copyright © 2007 Royal Meteorological Society [source]
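The method described above reduces to two steps: take the ambient vapour pressure to be the saturation value at the daily minimum temperature (Tmin ≈ dew point), then evaluate RH and D against the saturation pressure at each later temperature. A sketch using the Magnus form of the saturation curve; the coefficients are one common WMO-style choice, not necessarily the paper's:

```python
import math

def svp(t_c):
    """Saturation vapour pressure in kPa via a Magnus-type formula
    (0.6112 * exp(17.62*T / (243.12 + T)), T in degrees Celsius)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def diurnal_humidity(t_min, temps):
    """Assume the ambient vapour pressure is constant over the day and
    equal to saturation at the daily minimum temperature.
    Returns a list of (RH in %, vapour pressure deficit D in kPa)."""
    e = svp(t_min)            # constant ambient vapour pressure
    out = []
    for t in temps:
        es = svp(t)           # saturation pressure at the current temperature
        out.append((100.0 * e / es, es - e))
    return out
```

At Tmin itself the estimate gives RH = 100% and D = 0 by construction; as temperature rises through the day, RH falls and D grows even though the vapour pressure is unchanged.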


Synthesis and simple method of estimating macroporosity of methyl methacrylate–divinylbenzene copolymer beads

JOURNAL OF APPLIED POLYMER SCIENCE, Issue 6 2008
Muhammad Arif Malik
Abstract Macroporous methyl methacrylate–divinylbenzene copolymer beads having diameters of ≈300 µm were synthesized by free radical suspension copolymerization. The macroporosity was generated by diluting the monomers with inert organic liquid diluents. The macroporosity was varied in the range of ~0.1 to ~1.0 mL/g by varying a number of porosity-controlling factors, such as the diluents, the solvent-to-nonsolvent mixing ratio when employing a mixture of two diluents, the degree of dilution, and the crosslinkage. An increase in pore volume from 0.1 to 0.45 mL/g resulted in a sharp increase in mesopores having diameters in the range of 3–20 nm, whereas the macropores remained negligible when compared with mesopores. An increase in pore volume from 0.45 to 1 mL/g resulted in a sharp increase in macropores, whereas mesopores having diameters in the range of 3–20 nm remained almost constant. The mesopores having diameters in the range of 20–50 nm showed an increase with pore volume throughout the whole range of pore volume studied. Macroporosity characteristics, i.e., pore volume (Vm), surface area (SA), and pore size distributions, were evaluated by the mercury penetration method. Statistical analysis of the data obtained in the present study shows that the macroporosity characteristics can be estimated with reasonable accuracy from the pore volumes, which in turn are determined from the densities of the copolymers. These results are explained on the basis of the pore formation mechanism. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci, 2008 [source]


Measurements of functional residual capacity during intensive care treatment: the technical aspects and its possible clinical applications

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 9 2009
H. HEINZE
Direct measurement of lung volume, i.e. functional residual capacity (FRC), has been recommended for monitoring during mechanical ventilation. Mostly due to technical reasons, FRC measurements have not become a routine monitoring tool, but promising techniques have been presented. We performed a literature search of studies with the key words 'functional residual capacity' or 'end expiratory lung volume' and summarize the physiology and patho-physiology of FRC measurements in ventilated patients, describe the existing techniques for bedside measurement, and provide an overview of the clinical questions that can be addressed using an FRC assessment. The wash-in or wash-out of a tracer gas in a multiple breath maneuver seems to be best applicable at the bedside, and promising techniques for nitrogen or oxygen wash-in/wash-out with reasonable accuracy and repeatability have been presented. Studies in ventilated patients demonstrate that FRC can easily be measured at the bedside during various clinical settings, including positive end-expiratory pressure optimization, endotracheal suctioning, prone position, and weaning from mechanical ventilation. Alveolar derecruitment can easily be monitored, and improvements of FRC without changes of the ventilatory setting could indicate alveolar recruitment. FRC seems to be insensitive to over-inflation of already inflated alveoli. Growing evidence suggests that FRC measurements, in combination with other parameters such as arterial oxygenation and respiratory compliance, could provide important information on the pulmonary situation in critically ill patients. Further studies are needed to define the exact role of FRC in monitoring and perhaps guiding mechanical ventilation. [source]


Stature Estimation from Foot Length Using Universal Regression Formula in a North Indian Population

JOURNAL OF FORENSIC SCIENCES, Issue 1 2010
Tanuj Kanchan M.D., D.F.M.
Abstract: Stature is a significant parameter in establishing identity of an unknown. Conventionally, researchers derive regression formulae separately for males and females. Sex, however, may not always be determined accurately, particularly in dismembered remains, and thus the need for a universal regression formula for stature estimation irrespective of the sex of an individual. The study was carried out in an endogamous group of North India to compare the accuracy of sex-specific regression models for stature estimation from foot length with the models derived when the sex was presumed as unknown. The study reveals that regression equations derived for the latter can estimate stature with reasonable accuracy. Thus, stature can be estimated accurately from foot length by regression analysis even when sex remains unknown. [source]
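The "universal" formula is just an ordinary least-squares line fitted to the pooled male-and-female sample, so sex never enters the prediction. A sketch with hypothetical (foot length, stature) values in centimetres, not the study's measurements:

```python
def fit_line(x, y):
    """Ordinary least squares for stature = a + b * foot_length,
    fitted on pooled data with sex ignored."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx            # slope
    a = my - b * mx          # intercept
    return a, b

def predict_stature(a, b, foot_length):
    """Predict stature from foot length with the fitted coefficients."""
    return a + b * foot_length
```

On synthetic data generated from stature = 70 + 4 × foot length, `fit_line` recovers the intercept 70 and slope 4 exactly, and `predict_stature(a, b, 24.5)` returns 168.0.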


Predictive and correlative techniques for the design, optimisation and manufacture of solid dosage forms

JOURNAL OF PHARMACY AND PHARMACOLOGY: AN INTERNATIONAL JOURNAL OF PHARMACEUTICAL SCIENCE, Issue 1 2003
Ian J. Hardy
ABSTRACT There is much interest in predicting the properties of pharmaceutical dosage forms from the properties of the raw materials they contain. Achieving this with reasonable accuracy would aid the faster development and manufacture of dosage forms. A variety of approaches to prediction or correlation of properties are reviewed. These approaches have variable accuracy, with no single technique yet able to provide an accurate prediction of the overall properties of the dosage form. However, there have been some successes in predicting trends within a formulation series based on the physicochemical and mechanical properties of raw materials, predicting process scale-up through mechanical characterisation of materials and predicting product characteristics by process monitoring. Advances in information technology have increased predictive capability and accuracy by facilitating the analysis of complex multivariate data, mapping formulation characteristics and capturing past knowledge and experience. [source]


Prediction of the location of stationary steady-state zone positions in counterflow isotachophoresis performed under constant voltage in a vortex-stabilized annular column

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 18 2007
Schurie L. M. Harrison
Abstract A theoretical model is presented and an analytical expression derived to predict the locations of stationary steady-state zone positions in ITP as a function of current for a straight channel under a constant applied voltage. Stationary zones may form in the presence of a countercurrent flow whose average velocity falls between that of a pure leader zone and of a pure trailer zone. A comparison of model predictions with experimental data from an anionic system shows that the model is able to predict the location of protein zones with reasonable accuracy once the ITP stack has formed. This result implies that an ITP stack can be precisely directed by the operator to specific positions in a channel whence portions of the stack can be removed or redirected for further processing or analysis. [source]


A Simpler Approach to Population Balance Modeling in Predicting the Performance of Ziegler-Natta Catalyzed Gas-Phase Olefin Polymerization Reactor Systems

MACROMOLECULAR REACTION ENGINEERING, Issue 2-3 2009
Randhir Rawatlal
Abstract In this work, an alternative formulation of the Population Balance Model (PBM) is proposed to simplify the mathematical structure of the reactor model. The method is based on the segregation approach applied to the recently developed unsteady-state residence time distribution (RTD). It is shown that the model can predict the performance of a reactor system under unsteady flow and composition conditions. Case studies involving time-varying catalyst flow rates, reactor temperature and reactor pressure were simulated, and the model was found to predict reactor performance with reasonable accuracy. The model was used to propose a grade transition strategy that could reduce transition time by as much as two hours. [source]


Sorption and Swelling of Poly(D,L-lactic acid) and Poly(lactic-co-glycolic acid) in Supercritical CO2

MACROMOLECULAR SYMPOSIA, Issue 1 2007
Ronny Pini
Abstract The equilibrium sorption and swelling behavior in supercritical CO2 of poly(D,L-lactic acid) and poly(lactic-co-glycolic acid) has been studied at a temperature of 35 °C and at pressures up to 200 bar. Sorption was measured through a gravimetric technique and swelling by visualization. From these data, the behavior of the different polymers can be compared. In terms of the partial molar volume of CO2 in the polymer matrix, all the polymers exhibit behavior typical of rubbery systems. The experimental results have been modeled using the Sanchez–Lacombe equation of state, which is able to represent the actual behavior of the polymer–CO2 systems with reasonable accuracy. [source]
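In reduced variables the pure-fluid Sanchez–Lacombe equation of state reads ρ̃² + P̃ + T̃[ln(1 − ρ̃) + (1 − 1/r)ρ̃] = 0, and the liquid-like root can be found reliably by bisection. This sketch covers only the pure-fluid form (modelling polymer–CO2 sorption as in the paper additionally needs mixing rules, omitted here); the reduced conditions in the usage note are arbitrary:

```python
import math

def sl_residual(rho, t_red, p_red, r):
    """Sanchez-Lacombe EOS residual in reduced variables rho, T, P;
    r is the number of lattice sites per chain (large for a polymer)."""
    return (rho * rho + p_red
            + t_red * (math.log(1.0 - rho) + (1.0 - 1.0 / r) * rho))

def sl_density(t_red, p_red, r):
    """Solve the EOS for reduced density by bisection on (0, 1).
    The residual is +p_red at rho -> 0 and -> -inf at rho -> 1,
    so a root is always bracketed."""
    lo, hi = 1e-9, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sl_residual(mid, t_red, p_red, r) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, at T̃ = 0.5, P̃ = 0.1 and r = 1000 the solver returns a liquid-like reduced density of roughly 0.95, and the residual at the returned root is numerically zero.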


Spectacular fall of the Kendrapara H5 chondrite

METEORITICS & PLANETARY SCIENCE, Issue S8 2004
D. Dhingra
In a rare observation, the fireball was seen by two airline pilots, providing the direction of the trail with reasonable accuracy, consistent with ground-based observations. A few fragments of the meteorite were subsequently recovered along the end of the trail in different parts of Kendrapara district (20°30′ N; 86°26′ E) of Orissa. Based on petrography and chemical composition, the meteorite is classified as an H5 chondrite. The cosmogenic radionuclides ⁵⁴Mn, ²²Na, ⁶⁰Co, and ²⁶Al and tracks have been studied in this stony meteorite. Two of the fragments show an unusually high activity of ⁶⁰Co (~160 dpm/kg), indicating a meteoroid radius of 50–150 cm. Assuming that less than 10% (by weight) of the fragments could be recovered because of difficult terrain, an atmospheric mass ablation of >95% is estimated. Based on the observations of the trail and the estimated mass ablation, orbital parameters of the meteoroid have been calculated. The aphelion is found to lie in the asteroidal belt (1.8–2.4 AU), but the inclination of the orbit is large (22°–26°) with respect to the ecliptic. Noble gases have been analysed in two samples of this meteorite. He and Ne are dominantly cosmogenic. Using production rates based on the sample depth derived from the ⁶⁰Co content, a ²¹Ne-based exposure age of 4.50 ± 0.45 Ma is derived for Kendrapara. One of the samples, known to be more deeply shielded based on its high ⁶⁰Co activity, shows the presence of ⁸⁰Kr, ⁸²Kr, and ¹²⁸Xe produced by the (n, γ) reaction on ⁷⁹Br, ⁸¹Br, and ¹²⁷I, respectively. The (⁸⁰Kr/⁸²Kr)ₙ ratio of 3.5 ± 0.9 is consistent with the neutrons being mostly thermal. Trapped ⁸⁴Kr and ¹³²Xe are in the expected range for metamorphic grade H5. [source]


Simple models for predicting transmission properties of photonic crystal fibers

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 7 2006
Rachad Albandakji
Abstract Simple, fast, and efficient 1D models for evaluating the transmission properties of photonic crystal fibers are proposed. Using these models, axial propagation constant, chromatic dispersion, effective area, and leakage loss can be predicted with a reasonable accuracy but much faster than often time-consuming 2D analytical and numerical techniques and with much less computational resources. It is shown that the results are in good agreement with the published data available in the literature. © 2006 Wiley Periodicals, Inc. Microwave Opt Technol Lett 48: 1286,1290, 2006; Published online in Wiley InterScience (www. interscience.wiley.com). DOI 10.1002/mop.21624 [source]


Testing the accuracy of synthetic stellar libraries

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
Lucimara P. Martins
ABSTRACT One of the main ingredients of stellar population synthesis models is a library of stellar spectra. Both empirical and theoretical libraries are used for this purpose, and the question of which one is preferable is still debated in the literature. Empirical and theoretical libraries have been improved significantly over the years, and many libraries have become available lately. However, it is not clear in the literature what the advantages of each of these new libraries are, and how far behind models are compared to observations. Here we compare in detail some of the major theoretical libraries available in the literature with observations, aiming at detecting weaknesses and strengths from the stellar population modelling point of view. Our test is twofold: we compared model predictions and observations for broad-band colours and for high-resolution spectral features. Concerning the broad-band colours, we measured the stellar colours given by three recent sets of model atmospheres and flux distributions, and compared them with a recent UBVRIJHK calibration which is mostly based on empirical data. We found that the models can reproduce with reasonable accuracy the stellar colours for a fair interval in effective temperatures and gravities. The exceptions are (1) the U−B colour, where the models are typically redder than the observations, and (2) the very cool stars in general (V−K ≳ 3). Castelli & Kurucz is the set of models that best reproduces the bluest colours (U−B, B−V), while Gustafsson et al. and Brott & Hauschildt more accurately predict the visual colours. The three sets of models perform in a similar way for the infrared colours. Concerning the high-resolution spectral features, we measured 35 spectral indices defined in the literature on three high-resolution synthetic libraries, and compared them with the observed measurements given by three empirical libraries. The measured indices cover the wavelength range from ~3500 to ~8700 Å. We found that the direct comparison between models and observations is not a simple task, given the uncertainties in the parameter determinations of empirical libraries. Taking that aside, we found that in general the three libraries present similar behaviours and systematic deviations. For stars with Teff ≲ 7000 K, the library by Coelho et al. is the one with the best average performance. We detect that lists of atomic and molecular line opacities still need improvement, especially in the blue region of the spectrum and for the cool stars (Teff ≲ 4500 K). [source]


Residual Pattern Based Test for Interactions in Two-Way ANOVA

BIOMETRICAL JOURNAL, Issue 3 2008
Wei Ning
Abstract This article proposes a new test to detect interactions in replicated two-way ANOVA models, more powerful than the classical F-test and more general than the test of Terbeck and Davies (1998, Annals of Statistics 26, 1279–1305) developed for the case with an unconditionally identifiable interaction pattern. We use the parameterization without the conventional restrictions on the interaction terms and base our test on the maximum of the standardized disturbance estimates. We show that our test is unbiased and consistent, and discuss how to estimate the p-value of the test. In the 3 × 3 case, which is our main focus in this article, the exact p-value can be computed by using four-dimensional integrations. For a general I × J case, which requires an (I − 1) × (J − 1)-dimensional integration for a numerical evaluation of the exact p-value, we propose to use an improved Bonferroni inequality to estimate an upper bound of the p-value, and simulations indicate a reasonable accuracy of the upper bound. Via simulations, we show that our test is more powerful than the classical F-test and also that it can deal with both situations: unconditionally identifiable and non-unconditionally identifiable cases. An application to genetic data is presented in which the new test is significant, while the classical F-test failed to detect interactions. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
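For intuition, the plain first-order Bonferroni bound on the p-value of a maximum-type statistic is easy to compute. This sketch shows only that first-order bound; the improved inequality used in the article sharpens it with joint pairwise probabilities, which depend on the correlation structure of the disturbance estimates and are not reproduced here:

```python
import math

def normal_two_sided_p(t):
    """P(|Z| > t) for a standard normal Z, via the complementary
    error function: P(|Z| > t) = erfc(t / sqrt(2))."""
    return math.erfc(t / math.sqrt(2.0))

def bonferroni_bound(t_max, m):
    """First-order Bonferroni upper bound on the p-value of the maximum
    of m standardized statistics: p <= m * P(|Z| > t_max), capped at 1."""
    return min(1.0, m * normal_two_sided_p(t_max))
```

The bound is conservative: it ignores dependence among the m statistics, which is exactly the slack the improved inequality recovers. For an observed maximum of 3.0 over four standardized disturbances it is already near 0.011.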


Calculation of ligand–nucleic acid binding free energies with the generalized-Born model in DOCK

BIOPOLYMERS, Issue 2 2004
Xinshan Kang
Abstract The calculation of ligand–nucleic acid binding free energies is investigated by including solvation effects computed with the generalized-Born model. Modifications of the solvation module in DOCK, including the introduction of all-atom parameters and revision of the coefficients of the different terms, are shown to improve calculations involving nucleic acids. This computing scheme can calculate binding energies with reasonable accuracy for a wide variety of DNA–ligand complexes, RNA–ligand complexes, and even for the formation of double-stranded DNA. This implementation of GB/SA is also shown to be capable of discriminating strong ligands from poor ligands for a series of RNA aptamers without sacrificing the high efficiency of the previous implementation. These results validate this approach to screening large databases against nucleic acid targets. © 2003 Wiley Periodicals, Inc. Biopolymers 73:192–204, 2004 [source]
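The generalized-Born piece of such a scheme can be sketched with the standard pairwise Still formula. The charges, Born radii, and dielectric constants below are hypothetical stand-ins; DOCK's actual all-atom parameters and revised coefficients are not reproduced here.

```python
import math

COULOMB = 332.06  # Coulomb constant, kcal*Angstrom/(mol*e^2)

def gb_polarization_energy(coords, charges, born_radii,
                           eps_in=1.0, eps_out=78.5):
    """Still-style generalized-Born polarization energy (kcal/mol).
    The double sum runs over all ordered pairs, including i == j self
    terms. Born radii (Angstrom) are assumed precomputed elsewhere."""
    pref = -0.5 * COULOMB * (1.0 / eps_in - 1.0 / eps_out)
    energy = 0.0
    for i in range(len(charges)):
        for j in range(len(charges)):
            r2 = sum((coords[i][k] - coords[j][k]) ** 2 for k in range(3))
            aa = born_radii[i] * born_radii[j]
            f_gb = math.sqrt(r2 + aa * math.exp(-r2 / (4.0 * aa)))
            energy += pref * charges[i] * charges[j] / f_gb
    return energy

def gb_binding_term(complex_args, receptor_args, ligand_args):
    """Solvation contribution to binding: complex minus separated parts."""
    return (gb_polarization_energy(*complex_args)
            - gb_polarization_energy(*receptor_args)
            - gb_polarization_energy(*ligand_args))
```

For a single ion the expression reduces to the Born formula, −166(1 − 1/ε)q²/a; for an oppositely charged pair the binding term is positive (a desolvation penalty), which the direct Coulomb attraction must outweigh.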


Monitoring the fractionation of a whey protein isolate during dead-end membrane filtration using fluorescence and chemometric methods

BIOTECHNOLOGY PROGRESS, Issue 1 2010
Rand Elshereef
Abstract During membrane-based separation of proteins, the protein concentrations of the permeate and retentate streams change over time. The current work proposes a new approach for monitoring these concentration changes in both permeate and retentate by analyzing intrinsic protein fluorescence, collected by fluorescence spectroscopy, with multivariate statistical techniques. Whey protein isolate, which consists mainly of α-lactalbumin (α-LA) and β-lactoglobulin (β-LG) with a small proportion of bovine serum albumin (BSA), was used as the model system in this study. A fiber-optic probe (FOP) was used to acquire multiwavelength fluorescence spectra of the permeate and retentate streams at different times during ultrafiltration (UF)-based separation of the components from a multicomponent solution. Multivariate regression models were developed for predicting the concentrations of α-LA, β-LG, and BSA by establishing a calibration between the data acquired with the FOP and the corresponding protein concentrations measured by size-exclusion chromatography. The model was validated using FOP data that had not been used to calibrate the regression models. This comparison showed that the concentrations of α-LA, β-LG, and BSA could be predicted directly from FOP data with reasonable accuracy using multivariate calibration tools. The approach is nondestructive, fast, and relatively simple to perform, and it has potential practical applications: it could enable in situ monitoring of membrane filtration processes by tracking individual protein transmission and the selectivity of fractionation. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]
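The calibrate-then-predict step can be sketched as follows, with synthetic data standing in for the FOP spectra and the chromatography reference values. The study uses multivariate chemometric regression of this general kind; plain least squares is shown here as a minimal stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 3 proteins, 10 wavelengths, 30 calibration samples.
loadings = rng.normal(size=(10, 3))            # per-protein spectral shapes
C_train = rng.uniform(0.0, 5.0, size=(30, 3))  # reference concentrations (SEC)
X_train = C_train @ loadings.T + 0.001 * rng.normal(size=(30, 10))  # "spectra"

# Calibration: linear map from a spectrum to the three concentrations.
B, *_ = np.linalg.lstsq(X_train, C_train, rcond=None)

# Validation on a spectrum that was not used for calibration.
c_true = np.array([[1.0, 2.0, 0.5]])
x_new = c_true @ loadings.T
print(x_new @ B)  # approximately [[1.0, 2.0, 0.5]]
```

Real spectra of overlapping fluorophores are collinear, which is why the paper relies on multivariate calibration rather than single-wavelength readings; methods such as PLS would replace the `lstsq` line without changing the overall workflow.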


Validation of a nomogram to predict disease progression following salvage radiotherapy after radical prostatectomy: results from the SEARCH database

BJU INTERNATIONAL, Issue 10 2009
Daniel M. Moreira
OBJECTIVE To externally validate the nomogram published by Stephenson et al. (the 'Stephenson nomogram') for predicting disease progression after salvage radiotherapy (SRT) among patients with prostate cancer from the Shared Equal Access Regional Cancer Hospital (SEARCH) database. PATIENTS AND METHODS We analysed data from 102 men treated with SRT for prostate-specific antigen (PSA) failure after prostatectomy, of whom 30 (29%) developed disease progression after SRT during a median follow-up of 50 months. The predicted 6-year progression-free survival (PFS) was compared with the actuarial PFS using calibration plots. The accuracy of the nomogram in risk-stratifying men for progression was assessed by the concordance index. RESULTS The median PSA level and PSA doubling time before SRT were 0.6 ng/mL and 10.3 months, respectively. The 6-year actuarial PFS after SRT was 57% (95% confidence interval 42–69%). The overall concordance index of the Stephenson nomogram was 0.65. The nomogram predicted failure more accurately at the extremes of risk (lowest and highest), but in the intermediate groups its accuracy was less precise. Of the 11 variables used in the nomogram, only negative margins and a high PSA level before SRT were significantly associated with increased disease progression. CONCLUSION The Stephenson nomogram is an important tool for predicting disease progression after SRT following radical prostatectomy. It predicted progression in SEARCH with reasonable accuracy, and in SEARCH disease progression was predicted by similar disease characteristics. However, the overall modest performance of the model in our validation cohort indicates that there is still room for improvement in predictive models for disease progression after SRT. [source]
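The concordance index reported above can be computed with a short sketch (Harrell's c-index over usable pairs; the patient data and follow-up times below are toy values, not the SEARCH cohort):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index for right-censored data. A pair is usable when
    the patient with the shorter follow-up actually progressed
    (events[i] == 1); the pair is concordant when the model gave that
    patient the higher risk score, with ties in risk counted as 1/2.
    0.5 is chance discrimination, 1.0 is perfect."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Toy example: a higher score should mean earlier progression.
print(concordance_index([2, 5, 8, 10], [1, 1, 0, 1], [0.9, 0.6, 0.5, 0.2]))  # 1.0
```

A value of 0.65, as found here, means the nomogram correctly orders about two-thirds of usable patient pairs, which is consistent with the authors' characterisation of modest overall performance.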