Validation Tests


Selected Abstracts

A Lagrangian–Eulerian model of particle dispersion in a turbulent plane mixing layer

L.A. Oliveira
Abstract A Lagrangian–Eulerian model for the dispersion of solid particles in a two-dimensional, incompressible, turbulent flow is reported and validated. The continuous phase is predicted by solving an Eulerian model with a control-volume finite element method (CVFEM). A Lagrangian model is then applied, using a Runge–Kutta method to obtain the particle trajectories. The effect of fluid turbulence on particle dispersion is taken into account through a simple stochastic approach. Validation tests are performed by comparing predictions for both phases in a particle-laden, plane mixing layer airflow with corresponding measurements previously reported by other authors. Even though some limitations are detected in the calculation of particle dispersion, on the whole the validation results are rather successful. Copyright 2002 John Wiley & Sons, Ltd. [source]
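The combination the abstract describes, deterministic Runge–Kutta integration of particle trajectories plus a simple stochastic turbulence term, can be sketched as follows. The drag law, time constants, and the toy mixing-layer velocity field below are illustrative assumptions, not the paper's actual model:

```python
import random

def track_particle(x0, v0, fluid_vel, tau_p, sigma_u, dt, n_steps, seed=0):
    """Integrate dx/dt = v, dv/dt = (u_f + u' - v)/tau_p with classical RK4.

    fluid_vel(x) returns the mean Eulerian fluid velocity at position x;
    u' is a Gaussian turbulent fluctuation (a simple stochastic model).
    All parameter values here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    x, v = list(x0), list(v0)
    path = [tuple(x)]
    for _ in range(n_steps):
        # one Gaussian fluctuation per step, held fixed over the RK4 stages
        up = [rng.gauss(0.0, sigma_u) for _ in range(2)]

        def accel(xi, vi):
            uf = fluid_vel(xi)
            return [(uf[k] + up[k] - vi[k]) / tau_p for k in range(2)]

        k1x, k1v = v[:], accel(x, v)
        x2 = [x[k] + 0.5 * dt * k1x[k] for k in range(2)]
        v2 = [v[k] + 0.5 * dt * k1v[k] for k in range(2)]
        k2x, k2v = v2, accel(x2, v2)
        x3 = [x[k] + 0.5 * dt * k2x[k] for k in range(2)]
        v3 = [v[k] + 0.5 * dt * k2v[k] for k in range(2)]
        k3x, k3v = v3, accel(x3, v3)
        x4 = [x[k] + dt * k3x[k] for k in range(2)]
        v4 = [v[k] + dt * k3v[k] for k in range(2)]
        k4x, k4v = v4, accel(x4, v4)
        x = [x[k] + dt / 6 * (k1x[k] + 2 * k2x[k] + 2 * k3x[k] + k4x[k]) for k in range(2)]
        v = [v[k] + dt / 6 * (k1v[k] + 2 * k2v[k] + 2 * k3v[k] + k4v[k]) for k in range(2)]
        path.append(tuple(x))
    return path

# toy plane mixing layer: streamwise velocity jumps across y = 0
mixing_layer = lambda p: (1.0 + 0.5 * (1 if p[1] > 0 else -1), 0.0)
traj = track_particle((0.0, 0.1), (0.5, 0.0), mixing_layer,
                      tau_p=0.05, sigma_u=0.1, dt=0.01, n_steps=100)
```

Averaging many such trajectories with different seeds is what turns the single-particle model into a dispersion prediction.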

A novel method for enzyme design

Xiaolei Zhu
Abstract Rational design of enzymes is a stringent test of our understanding of the protein structure–function relationship and has numerous potential applications. We present a novel method for enzyme design that can find good candidate protein scaffolds in a protein–ligand database based on vector matching of key residues. Residues in the vicinity of the active site were also compared according to a similarity score between the scaffold protein and the target enzyme. Suitable scaffold proteins were selected, and the side chains of residues around the active sites were rebuilt using a previously developed side-chain packing program. Triose phosphate isomerase (TIM) was used as a validation test for enzyme design. Selected scaffold proteins were found to accommodate the enzyme active sites and successfully form a good transition-state complex. This method overcomes the limitations of current enzyme design methods, which use a limited number of protein scaffolds and rely on the positions of ligands. As a large number of protein scaffolds are available in the Protein Data Bank, this method should be widely applicable to various types of enzyme design. 2008 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]
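The residue-vector matching idea can be illustrated with a toy score: represent each key catalytic residue by a direction vector (e.g. Cα→Cβ) and rank scaffold candidates by mean cosine similarity against the target's vectors. This is a hypothetical, much-simplified stand-in for the paper's actual matching criterion:

```python
import math

def cosine(a, b):
    """Cosine similarity between two 3D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_score(target_vecs, scaffold_vecs):
    """Mean cosine similarity over paired key-residue vectors.

    Assumes a one-to-one residue correspondence and a common frame have
    already been chosen; a real scaffold search would also optimize the
    pairing and the superposition.
    """
    return sum(cosine(t, s) for t, s in zip(target_vecs, scaffold_vecs)) / len(target_vecs)

# hypothetical two-residue active site
target = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
good_scaffold = [(0.9, 0.1, 0.0), (0.1, 0.95, 0.0)]
poor_scaffold = [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
```

A threshold on such a score is one simple way to shortlist scaffolds before the side-chain rebuilding step.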

Use of Topological Indices of Organic Sulfur Compounds in Quantitative Structure-Retention Relationship Study

F. Safa
Abstract Structure–gas chromatographic retention index models were developed for some organic sulfur compounds at four different temperatures (60, 80, 100 and 120 °C) using only topological descriptors. At first, regression models were generated for each temperature separately, with high values of the multiple correlation coefficient and Fisher-ratio statistics. The results of cross-validation tests using leave-one-out (Q2 = 0.956) and leave-two-out (Q2 = 0.953) methods showed the good predictive ability of the models developed. Then, a single combined quantitative structure–retention relationship model, with temperature added as a parameter, was developed for all the temperatures, showing good statistical parameters (R = 0.991 and F = 728.474). The stability and validity of the combined model were verified by both internal (Q2 > 0.970) and external validation (Q = 0.993) techniques. The results of the study indicate the efficiency of classical topological descriptors in the simultaneous prediction of retention index values of sulfur compounds at different temperatures. The topological descriptors covered well the molecular properties known to be relevant for gas chromatographic retention data, such as molecular size and degree of branching. [source]
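The leave-one-out Q² statistic reported above is straightforward to compute for any regression model: refit with each sample held out, sum the squared prediction errors (PRESS), and compare with the total sum of squares. The descriptor and retention-index values below are synthetic, purely to exercise the calculation:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q^2 for an ordinary least-squares model.

    Q^2 = 1 - PRESS / SS_tot, where PRESS accumulates squared errors of
    predictions made with the test sample excluded from the fit.
    """
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    y = np.asarray(y, float)
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                    # drop sample i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2               # predict the held-out sample
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# synthetic data: retention index roughly linear in a single topological descriptor
desc = np.arange(10.0)
ri = 600.0 + 25.0 * desc + np.array([1, -2, 0.5, 1.5, -1, 0, 2, -0.5, 1, -1.5])
q2 = q2_loo(desc.reshape(-1, 1), ri)
```

Leave-two-out works the same way, summing over all held-out pairs instead of single samples.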

Active and passive behaviors of soft tissues: Pelvic floor muscles

M. P. M. Pato
Abstract A new active-contraction visco-elastic numerical model of the pelvic floor (skeletal) muscle is presented. Our model includes all elements that represent the muscle's constitutive behavior, contraction and relaxation. In contrast with previous models, the activation function can be null. The complete equations are shown and exactly linearized. Small verification and validation tests are performed, and the pelvis is modeled using data from intra-abdominal pressure tests. Copyright 2009 John Wiley & Sons, Ltd. [source]

An H∞ algorithm for the windsurfer approach to adaptive robust control

Arvin Dehghani
Abstract The windsurfer approach to iterative control requires a series of controller designs with gradual expansion of the closed-loop bandwidth; at the end, validation tests are carried out to stop the algorithm. In this paper, an H∞ design algorithm is introduced to remove the empirical aspect from the stopping criteria and to make the procedure more systematic, hence facilitating the design. Moreover, some restrictive assumptions on the plant model are lifted, and some issues with the controller design step are tackled by the introduction of a new design method. This enables us to address a wider class of practical problems. Copyright 2004 John Wiley & Sons, Ltd. [source]

A framework for network quality monitoring in the VoIP environment

Ana Flávia M. de Lima
Monitoring speech quality in Voice over IP (VoIP) networks is important to ensure a minimal acceptable level of speech quality for IP calls running through a managed network. Information such as packet loss, codec type, jitter, end-to-end delay and overall speech quality enables the network manager to verify and accurately tune parameters in order to correct network problems. The present article proposes the deployment of a monitoring architecture that collects, stores and displays speech quality information about concluded voice calls. This architecture is based on our proposed VOIPQOS MIB (Management Information Base), deployed for speech quality monitoring purposes. Currently, the architecture is fully implemented but still undergoing adjustment and validation tests. In the future, the VOIPQOS MIB can be expanded to automatically analyze collected data and control VoIP clients and network parameters, tuning the overall speech quality of ongoing calls. Copyright 2006 John Wiley & Sons, Ltd. [source]
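Metrics like packet loss and end-to-end delay are commonly folded into a single speech-quality estimate via the ITU-T G.107 E-model. The sketch below uses the widely quoted simplified delay coefficients and a crude, codec-independent placeholder for the loss impairment; it is not the VOIPQOS MIB's actual computation:

```python
def mos_estimate(delay_ms, loss_pct):
    """Very simplified E-model (ITU-T G.107 flavour) speech-quality estimate.

    Computes an R-factor from delay and loss impairments, then maps it to
    a Mean Opinion Score (MOS). The loss term is a hypothetical placeholder.
    """
    r = 93.2 - 0.024 * delay_ms            # base R-factor minus delay impairment
    if delay_ms > 177.3:
        r -= 0.11 * (delay_ms - 177.3)     # extra penalty for long delays
    r -= 2.5 * loss_pct                    # placeholder packet-loss impairment
    r = max(0.0, min(100.0, r))
    if r == 0.0:
        return 1.0
    # standard R-to-MOS mapping for 0 < R < 100
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)
```

A monitoring backend could evaluate such a function per concluded call from the collected MIB counters.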

Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability. [source]
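The inter-code variability quoted above can be made precise with a simple relative-spread metric over each code's reported peak pressure at a given scaled distance. The pressure values below are made up for illustration; only the metric reflects the comparison described:

```python
def relative_spread(values):
    """Peak-to-peak spread of a set of code results relative to their mean:
    (max - min) / mean. Returns a fraction (e.g. 0.15 means 15% variability)."""
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / mean

# hypothetical peak shock pressures (GPa) reported by five different codes
# at the same scaled distance from the impact point
peaks_gpa = [62.0, 58.5, 66.0, 60.1, 64.2]
spread = relative_spread(peaks_gpa)
```

Applying the same metric at several distances gives the 10 to 20% band the benchmark study reports.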

Validation of crystallographic models containing TLS or other descriptions of anisotropy

Frank Zucker
The use of TLS (translation/libration/screw) models to describe anisotropic displacement of atoms within a protein crystal structure has become increasingly common. These models may be used purely as an improved methodology for crystallographic refinement or as the basis for analyzing inter-domain and other large-scale motions implied by the crystal structure. In either case it is desirable to validate that the crystallographic model, including the TLS description of anisotropy, conforms to our best understanding of protein structures and their modes of flexibility. A set of validation tests has been implemented that can be integrated into ongoing crystallographic refinement or run afterwards to evaluate a previously refined structure. In either case validation can serve to increase confidence that the model is correct, to highlight aspects of the model that may be improved or to strengthen the evidence supporting specific modes of flexibility inferred from the refined TLS model. Automated validation checks have been added to the PARVATI and TLSMD web servers and incorporated into the CCP4i user interface. [source]
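One concrete check of the kind such validators perform concerns the per-atom anisotropic displacement (ADP) tensor itself: a physically valid tensor must be positive definite, and the ratio of its smallest to largest eigenvalue measures how anisotropic the atom is (1.0 means isotropic). The sketch below is in the spirit of PARVATI-style checks, not the servers' actual code:

```python
import numpy as np

def adp_anisotropy(U):
    """Anisotropy of a 3x3 ADP tensor: smallest/largest eigenvalue.

    Raises ValueError when the tensor is not positive definite, i.e. when
    it cannot describe a real displacement distribution.
    """
    w = np.linalg.eigvalsh(np.asarray(U, float))  # eigenvalues, ascending
    if w[0] <= 0.0:
        raise ValueError("ADP tensor is not positive definite")
    return w[0] / w[-1]

# an atom displaced four times as much along z as along x
aniso = adp_anisotropy([[0.5, 0, 0], [0, 1.0, 0], [0, 0, 2.0]])
```

Flagging atoms whose anisotropy falls far outside the distribution seen in well-refined structures is one way such a check feeds back into refinement.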

The Sloan Digital Sky Survey monitor telescope pipeline

D.L. Tucker
Abstract The photometric calibration of the Sloan Digital Sky Survey (SDSS) is a multi-step process which involves data from three different telescopes: the 1.0-m telescope at the US Naval Observatory (USNO), Flagstaff Station, Arizona (which was used to establish the SDSS standard star network); the SDSS 0.5-m Photometric Telescope (PT) at the Apache Point Observatory (APO), New Mexico (which calculates nightly extinctions and calibrates secondary patch transfer fields); and the SDSS 2.5-m telescope at APO (which obtains the imaging data for the SDSS proper). In this paper, we describe the Monitor Telescope Pipeline, MTPIPE, the software pipeline used in processing the data from the single-CCD telescopes used in the photometric calibration of the SDSS (i.e., the USNO 1.0-m and the PT). We also describe the transformation equations that convert photometry on the USNO 1.0-m u′g′r′i′z′ system to photometry on the SDSS 2.5-m ugriz system, and the results of various validation tests of the MTPIPE software. Further, we discuss the semi-automated PT factory, which runs MTPIPE in the day-to-day standard SDSS operations at Fermilab. Finally, we discuss the use of MTPIPE in current SDSS-related projects, including the Southern u′g′r′i′z′ Standard Star project, the u′g′r′i′z′ Open Star Clusters project, and the SDSS extension (SDSS-II). (2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
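Photometric transformation equations of the kind mentioned typically take a linear color-term form, g = g′ + b_g·((g′ − r′) − c0) + zp. The coefficient values below are hypothetical placeholders, not the published MTPIPE values:

```python
def uprime_to_sdss_g(g_prime, r_prime, b_g=0.06, c0=0.53, zp=0.0):
    """Illustrative color-term transformation from the USNO u'g'r'i'z'
    system to SDSS 2.5-m g, of the generic form

        g = g' + b_g * ((g' - r') - c0) + zp

    b_g (color term), c0 (reference color) and zp (zero point) are
    hypothetical placeholder coefficients.
    """
    return g_prime + b_g * ((g_prime - r_prime) - c0) + zp

# a star with g' = 15.20, r' = 14.80, i.e. g' - r' = 0.40
g = uprime_to_sdss_g(15.20, 14.80)
```

Each band gets its own coefficient set, fitted against the standard star network.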

Validation of the postoperative nomogram for 12-year sarcoma-specific mortality

CANCER, Issue 10 2004
Fritz C. Eilber M.D.
Abstract BACKGROUND On the basis of a prospectively followed cohort of adult patients with primary soft tissue sarcoma (STS) who were treated at Memorial Sloan-Kettering Cancer Center (MSKCC; New York, NY), a nomogram for predicting sarcoma-specific mortality was developed. Although this nomogram was found to be accurate by internal validation tests, it had not been validated in an external patient cohort, and thus its universal applicability remained unproven. METHODS Between 1975 and 2002, 1167 adult patients (age ≥ 16 years) underwent treatment for primary STS at the University of California–Los Angeles (UCLA; Los Angeles, CA). All patients treated with an ifosfamide-based chemotherapy protocol (n = 238) were excluded from the current analysis. The remaining 929 patients constituted the population on which the validation study was performed. The nomogram validation process comprised two activities. First, the extent of discrimination was quantified using the concordance index. Second, the level of calibration was assessed by grouping patients with respect to their nomogram-predicted mortality probabilities and then comparing group means with observed Kaplan–Meier estimates of disease-specific survival. RESULTS With median follow-up intervals of 48 months for all patients and 60 months for surviving patients, the 5-year and 10-year disease-specific survival rates were 77% (95% confidence interval [CI], 74–80%) and 71% (95% CI, 67–75%), respectively. Application of the nomogram to the UCLA data set yielded a concordance index of 0.76, and the observed correspondence between predicted and actual outcomes suggested a high level of calibration. CONCLUSIONS In the current study, the MSKCC Sarcoma Nomogram was found to provide accurate survival predictions when it was applied to an external cohort of patients who were treated at UCLA. Cancer 2004. 2004 American Cancer Society. [source]
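The discrimination step, quantified by the concordance index, can be sketched directly. This is a simplified Harrell's c-index for right-censored data (pairs with tied times are simply skipped), not the study's actual software; the toy cohort is invented:

```python
import itertools

def concordance_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    A pair is comparable when the sample with the shorter follow-up had
    the event; the pair is concordant when that sample also carries the
    higher predicted risk. Ties in predicted risk count 0.5; pairs with
    tied times are skipped (a simplification).
    """
    concordant = comparable = 0.0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i          # ensure i is the earlier observation
        if times[i] == times[j] or not events[i]:
            continue             # censored-first or tied pairs: not comparable
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

# perfectly ranked toy cohort: higher predicted risk, earlier sarcoma death
c = concordance_index(times=[12, 24, 48, 60, 90],
                      events=[1, 1, 1, 1, 0],
                      risks=[0.9, 0.7, 0.5, 0.3, 0.1])
```

A value of 0.5 means predictions are no better than chance; the study's 0.76 on the external UCLA cohort sits well above that.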