Useful Results

Selected Abstracts

Case study: a maintenance practice used with real-time telecommunications software
JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2001
Miroslav Popović

In this paper we present a case study of the software maintenance practice that has been successfully applied to real-time distributed systems, which are installed and fully operational in Moscow, St. Petersburg, and other cities across Russia. We concentrate on the software maintenance process, including customer request servicing, in-field error logging, the role of the information system, software deployment, and software quality policy, and especially the software quality prediction process. In this case study, the prediction process is shown to be an integral and one of the most important parts of the software maintenance process. We include an overview of the software quality prediction procedure and an example of the actual practice. The quality of a new software update is predicted on the basis of the current update's quantity metrics and quality data, together with the new update's quantity metrics. For management, this forecast aids software maintenance efficiency and cost reduction. For practitioners, the most useful result presented is the process for determining the value of the break point. We end this case study with five lessons learned. Copyright © 2001 John Wiley & Sons, Ltd.
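The abstract does not specify the prediction model or define the break point, so the following is only a minimal sketch of the calibrate-then-forecast step it describes: fit a fault model on the current update's size and fault data, then predict faults for a new update from its size metrics alone. All numbers, the per-module granularity, and the linear model are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-module data for the *current* update:
# kloc = thousands of lines of code, faults = defects logged in the field.
kloc = np.array([3.2, 5.1, 1.8, 7.4, 2.6, 4.3])
faults = np.array([4, 9, 2, 13, 3, 7])

# Fit faults ~ a * kloc + b by ordinary least squares.
A = np.vstack([kloc, np.ones_like(kloc)]).T
(a, b), *_ = np.linalg.lstsq(A, faults, rcond=None)

# Forecast for the *new* update, for which only size metrics exist yet.
new_kloc = np.array([4.0, 6.5, 2.2])
predicted = a * new_kloc + b
print(f"fault model: faults ~ {a:.2f}*KLOC + {b:.2f}")
print("predicted faults per new module:", np.round(predicted, 1))
```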
An optimal test against a random walk component in a non-orthogonal unobserved components model
THE ECONOMETRICS JOURNAL, Issue 2 2002
Ralph W. Bailey

In this paper we consider the problem of testing the null hypothesis that a series has a constant level (possibly as part of a more general deterministic mean) against the alternative that the level follows a random walk. This problem has previously been studied by, inter alia, Nyblom and Mäkeläinen (1983) in the context of the orthogonal Gaussian random walk plus noise model. This model postulates that the noise component and the innovations to the random walk are uncorrelated. We generalize their work by deriving the locally best invariant test of a fixed level against a random walk level in the non-orthogonal case, where the noise and random walk components are contemporaneously correlated with correlation coefficient ρ. We demonstrate that the form of the optimal test in this setting is independent of ρ; i.e. the test statistic previously derived for the case ρ = 0 remains the locally optimal test for all ρ. This is a very useful result: it states that the locally optimal test may be achieved without prior knowledge of ρ. Moreover, we show that the limiting distribution of the resulting statistic under both the null and local alternatives does not depend on ρ, behaving exactly as if ρ = 0. Finite-sample simulations illustrate these effects, and generalizations to models with dependent errors are considered.
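For reference, the statistic in question can be written in the standard Nyblom–Mäkeläinen (KPSS-type) form below; this is our summary notation from the cited literature, not an expression lifted from the paper:

```latex
\[
  \eta \;=\; \frac{1}{T^{2}\hat{\sigma}^{2}}
  \sum_{t=1}^{T}\Biggl(\sum_{s=1}^{t}\hat{e}_{s}\Biggr)^{2},
\]
```

where the ê_s are residuals from regressing the series on its deterministic mean and σ̂² is a consistent variance estimator. Persistent partial sums of the residuals inflate η, signalling a random-walk level; the paper's result is that this same statistic remains locally optimal, with unchanged limiting behaviour, for every ρ.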
Stability of PreservCyt® for Hybrid Capture® (HC II) HPV test
DIAGNOSTIC CYTOPATHOLOGY, Issue 5 2005
J. Sailors, M.D.

The Food and Drug Administration (FDA) has approved the Hybrid Capture® II (HC II) assay to test for the presence of high-risk types of human papillomavirus (HPV) DNA using specimens in PreservCyt® fixative for up to 21 days after collection. The ability of HC II to determine the presence of HPV DNA in actual patient samples after longer periods of storage has not been shown. To determine whether specimens older than 21 days can yield useful results, 207 patient specimens that had been tested for HPV DNA by HC II (primary test) were tested again after a storage period ranging from approximately 2.5 to 13.5 months (retest). The results of the primary test and the retest agreed in 86% of the cases. This high level of agreement suggests that the presence of high-risk types of HPV DNA can be determined from actual cervical cytology material in PreservCyt® with the HC II assay for at least 3 months after specimen collection. Diagn. Cytopathol. 2005;32:260–263. © 2005 Wiley-Liss, Inc.
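Raw agreement alone can flatter a binary retest comparison, so a chance-corrected measure is often reported alongside it. The sketch below works through Cohen's kappa on a hypothetical 2x2 table chosen to be consistent with the reported totals (207 specimens, ~86% agreement); the abstract does not give the actual cell counts.

```python
# Hypothetical agreement table: cell counts are assumptions, not study data.
both_pos, both_neg = 55, 123        # primary test and retest agree
pos_neg, neg_pos = 15, 14           # primary+/retest- and primary-/retest+
n = both_pos + both_neg + pos_neg + neg_pos            # 207 specimens

po = (both_pos + both_neg) / n                         # observed agreement
p1 = (both_pos + pos_neg) / n                          # P(primary positive)
p2 = (both_pos + neg_pos) / n                          # P(retest positive)
pe = p1 * p2 + (1 - p1) * (1 - p2)                     # chance agreement
kappa = (po - pe) / (1 - pe)                           # Cohen's kappa
print(f"n = {n}, agreement = {po:.1%}, kappa = {kappa:.2f}")   # ~86%, ~0.69
```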
Wavelet-based simulation of spectrum-compatible aftershock accelerograms
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11 2008
S. Das

In damage-based seismic design it is desirable to account for the ability of aftershocks to cause further damage to a structure already damaged by the main shock. Availability of recorded or simulated aftershock accelerograms is a critical component of the non-linear time-history analyses required for this purpose, and simulation of realistic accelerograms is therefore going to be a need of the profession for a long time to come. This paper attempts wavelet-based simulation of aftershock accelerograms for two scenarios. In the first scenario, recorded main shock and aftershock accelerograms are available along with the pseudo-spectral acceleration (PSA) spectrum of the anticipated main shock motion, and an accelerogram has been simulated for the anticipated aftershock motion such that it incorporates temporal features of the recorded aftershock accelerogram. In the second scenario, a recorded main shock accelerogram is available along with the PSA spectrum of the anticipated main shock motion and the PSA spectrum and strong-motion duration of the anticipated aftershock motion. Here, the accelerogram for the anticipated aftershock motion has been simulated assuming that temporal features of the main shock accelerogram are replicated in the aftershock accelerograms at the same site. The proposed algorithms have been illustrated with the help of the main shock and aftershock accelerograms recorded for the 1999 Chi-Chi earthquake. It has been shown that the proposed algorithm for the second scenario leads to useful results even when the main shock and aftershock accelerograms do not share the same temporal features, as long as the strong-motion duration of the anticipated aftershock motion is properly estimated. Copyright © 2008 John Wiley & Sons, Ltd.

The effect of sample size and species characteristics on performance of different species distribution modeling methods
ECOGRAPHY, Issue 5 2006
Pilar A. Hernandez

Species distribution models should provide conservation practitioners with estimates of the spatial distributions of species requiring attention. These species are often rare and have limited known occurrences, posing challenges for creating accurate species distribution models. We tested four modeling methods (Bioclim, Domain, GARP, and Maxent) across 18 species with different levels of ecological specialization, using six different sample size treatments and three different evaluation measures. Our assessment revealed that Maxent was the most capable of the four methods at producing useful results with sample sizes as small as 5, 10 and 25 occurrences. The other methods performed reasonably well (Domain and GARP) to poorly (Bioclim) when presented with small sample sizes. We show that multiple evaluation measures are necessary to determine the accuracy of models produced with presence-only data. Further, we found that model accuracy is greater for species with small geographic ranges and limited environmental tolerance, ecological characteristics of many rare species. Our results indicate that reasonable models can be made for some rare species, a result that should encourage conservationists to add distribution modeling to their toolbox.
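The experimental design is easy to reproduce in miniature: train a presence/background model at several presence sample sizes and track discrimination on held-out data. In this toy sketch, a synthetic two-variable niche and a quadratic logistic regression stand in for the real environmental layers and for Maxent (to which logistic regression is only a loose analogue); everything here is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def suitability(env):
    # Narrow assumed niche centred at (temperature, rainfall) = (0.7, 0.3).
    return np.exp(-((env[:, 0] - 0.7) ** 2 + (env[:, 1] - 0.3) ** 2) / 0.02)

env = rng.random((5000, 2))                  # candidate landscape cells
presence_pool = env[suitability(env) > 0.5]  # cells the species occupies

for n in (5, 10, 25, 100):
    pres = presence_pool[rng.choice(len(presence_pool), n, replace=False)]
    bg = env[rng.choice(len(env), 500, replace=False)]  # background sample
    X = np.vstack([pres, bg])
    y = np.r_[np.ones(len(pres)), np.zeros(len(bg))]
    model = make_pipeline(PolynomialFeatures(2),
                          LogisticRegression(max_iter=1000)).fit(X, y)
    # Independent presences and background points for evaluation.
    test = np.vstack([presence_pool[rng.choice(len(presence_pool), 200)],
                      env[rng.choice(len(env), 200)]])
    truth = np.r_[np.ones(200), np.zeros(200)]
    auc = roc_auc_score(truth, model.predict_proba(test)[:, 1])
    print(f"{n:3d} presence records -> test AUC = {auc:.2f}")
```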
An Equilibrium Theory of Learning, Search, and Wages
ECONOMETRICA, Issue 2 2010
Francisco M. Gonzalez

We examine the labor market effects of incomplete information about the workers' own job-finding process. Search outcomes convey valuable information, and learning from search generates endogenous heterogeneity in workers' beliefs about their job-finding probability. We characterize this process and analyze its interactions with job creation and wage determination. Our theory sheds new light on how unemployment can affect workers' labor market outcomes and wage determination, providing a rational explanation for discouragement as the consequence of negative search outcomes. In particular, longer unemployment durations are likely to be followed by lower reemployment wages, because a worker's beliefs about his job-finding process deteriorate with unemployment duration. Moreover, our analysis provides a set of useful results on dynamic programming with optimal learning.

Geostatistical Analysis of Rainfall
GEOGRAPHICAL ANALYSIS, Issue 2 2010
David I. F. Grimes

Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
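A compact ordinary-kriging example illustrates the kind of analysis reviewed here: interpolate daily rainfall at an unsampled point from gauge values, obtaining both an estimate and a kriging variance. The gauge coordinates, rainfall values, and variogram parameters are all illustrative assumptions, not data from the article.

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=1.0, rng_km=50.0):
    """Exponential model: gamma(h) = nugget + (sill - nugget)(1 - exp(-3h/range))."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_km))

# Hypothetical rain gauges: (x_km, y_km, rainfall_mm).
gauges = np.array([[10.0, 20.0, 5.2],
                   [25.0, 30.0, 8.1],
                   [40.0, 15.0, 0.0],
                   [30.0, 45.0, 12.4],
                   [55.0, 35.0, 3.3]])
xy, z = gauges[:, :2], gauges[:, 2]
target = np.array([32.0, 28.0])
n = len(z)

# Ordinary kriging system: semivariances between gauges, bordered by a
# Lagrange-multiplier row/column that forces the weights to sum to one.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = exp_variogram(d)
np.fill_diagonal(A, 0.0)          # gamma(0) = 0 for a point with itself
A[:n, n] = A[n, :n] = 1.0
b = np.append(exp_variogram(np.linalg.norm(xy - target, axis=1)), 1.0)

sol = np.linalg.solve(A, b)
weights, lagrange = sol[:n], sol[n]
estimate = weights @ z
variance = weights @ b[:n] + lagrange   # kriging (estimation) variance
print(f"kriged rainfall = {estimate:.2f} mm, kriging variance = {variance:.3f}")
```

The bordered system is the standard semivariogram formulation of ordinary kriging; a climatological variogram, as the review recommends, would simply replace the assumed parameters with ones pooled over many days.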
Vegetation gradients in Atlantic Europe: the use of existing phytosociological data in preliminary investigations on the potential effects of climate change on British vegetation
GLOBAL ECOLOGY, Issue 3 2000
J. C. Duckworth

1. This paper aims to demonstrate the use of available vegetation data from the phytosociological literature in preliminary analyses to generate hypotheses regarding vegetation and climate change.
2. Data for over 3000 samples of calcareous grassland, mesotrophic grassland, heath and woodland vegetation were taken from the literature for a region in the west of Atlantic Europe and subjected to ordination by detrended correspondence analysis in order to identify the main gradients present.
3. Climate data were obtained at a resolution of 0.5° from an existing database. The relationship between vegetation composition and climate was investigated by correlating the mean scores for the first two ordination axes for each 0.5° cell with the climate and location variables.
4. The ordinations resulted in clear geographical gradients for calcareous grasslands, heaths and woodlands, but not for mesotrophic grasslands. Significant correlations were shown between some of the vegetation gradients and the climate variables, with the strongest relationships occurring between the calcareous grassland gradients and July temperature, latitude and oceanicity. Some of the vegetation gradients were also inferred to reflect edaphic factors, management and vegetation history.
5. Those gradients that were related to temperature were hypothesized to reflect the influence of a progressively warmer climate on species composition, providing a baseline for further studies on the influence of climate change on species composition.
6. The validity of the literature data was assessed by collecting an original set of field data for calcareous grasslands and subsequently ordinating a dataset containing samples from both the literature and the field. The considerable overlap between the samples from the literature and the field suggests that literature data can be used, despite certain limitations. Such preliminary analyses, using readily available data, can thus achieve useful results, thereby saving lengthy and costly field visits.

A low-order, hexahedral finite element for modelling shells
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2004
Samuel W. Key

A thin, eight-node, tri-linear displacement, hexahedral finite element is the starting point for the derivation of a constant membrane stress resultant, constant bending stress resultant shell finite element. The derivation begins by introducing a Taylor series expansion for the stress distribution in the isoparametric co-ordinates of the element. The effect of the Taylor series expansion is to explicitly identify those strain modes of the element that are conjugate to the mean or average stress and to the linear variation in stress. The constant membrane stress resultants are identified with the mean stress components, and the constant bending stress resultants are identified with the linear variation in stress through the thickness, along with in-plane linear variations of selected components of the transverse shear stress. Further, a plane-stress constitutive assumption is introduced, and an explicit treatment of the finite element's thickness is introduced. A number of elastic simulations show the useful results that can be obtained (tip-loaded twisted beam, point-loaded hemisphere, point-loaded sphere, tip-loaded Raasch hook, and a beam bent into a ring). All of the gradient/divergence operators are evaluated in closed form, providing unequivocal evaluations of membrane and bending strain rates along with the appropriate divergence calculations involving the membrane and bending stress resultants. The fact that a hexahedral shell finite element has two distinct surfaces aids sliding-interface algorithms when a shell folds back on itself under large deformations. Published in 2004 by John Wiley & Sons, Ltd.
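Schematically, the identification of resultants with the mean and linear parts of the stress can be written as follows. This is a generic shell-resultant sketch in our notation (thickness h, through-thickness coordinate ξ = 2z/h, mean stress σ̄, linear stress variation σ′), not the paper's exact expressions:

```latex
\[
  \sigma(\xi) \;\approx\; \bar{\sigma} + \xi\,\sigma^{\prime}, \qquad
  N = \int_{-h/2}^{h/2} \sigma \,\mathrm{d}z \;\approx\; h\,\bar{\sigma}, \qquad
  M = \int_{-h/2}^{h/2} z\,\sigma \,\mathrm{d}z \;\approx\; \frac{h^{2}}{6}\,\sigma^{\prime},
\]
```

so the constant membrane resultant N is conjugate to the mean stress and the constant bending resultant M to the linear variation, matching the identification described in the abstract.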
Multi-factors oriented study of P2P Churn
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2009
Dong Yang

The dynamics of peers, namely Churn, is an inherent property of peer-to-peer (P2P) systems and is critical to their design and evaluation. Although every well-designed P2P protocol has some solution to this issue, studies of Churn are still scarce. This paper studies various factors related to Churn and uses them to analyze and evaluate P2P protocols. Prior research on Churn has been based on P2P network factors in a Churn environment, differing only in whether those factors serve as predecessor references used to build Churn analytical models or as successor references used to test the models. Following this distinction, this paper first divides the various factors into two categories: factors impacting Churn and factors affected by Churn. There is a causal relationship between the two categories: factors impacting Churn are the cause, and factors affected by Churn are the effect. We use this causality to simulate and analyze P2P Churn, with the cause as the input data and the effect as the output result. Second, based on this classification of Churn factors, we present a performance evaluation framework and two comparison models. Based on the framework and models, we simulate and analyze three P2P protocols and obtain useful results, including the performance of these protocols under Churn, the advantage of Chord over the others, and the most important factors impacting Churn. Finally, we present a method to improve recent P2P Churn models by adding influence factors. Copyright © 2009 John Wiley & Sons, Ltd.
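A minimal simulation in the paper's cause/effect spirit: median session time is an input ("impacting") factor, and the fraction of a peer's routing entries that go stale between maintenance rounds is an output ("affected") metric. The exponential session model, table size, and maintenance interval are illustrative assumptions, not the paper's simulation setup.

```python
import math
import random

random.seed(1)
NEIGHBORS, MAINT_INTERVAL = 16, 60.0   # routing entries, seconds per round

def stale_fraction(median_session_s, trials=2000):
    """Monte Carlo estimate of P(a neighbor departs within one round).

    Session lengths are exponential with the given median; by memorylessness,
    a neighbor's *remaining* lifetime is exponential too, so an entry is stale
    if that remaining lifetime ends before the next maintenance round."""
    rate = math.log(2) / median_session_s     # median = ln 2 / rate
    stale = sum(1 for _ in range(trials)
                if random.expovariate(rate) < MAINT_INTERVAL)
    return stale / trials

for median_min in (5, 30, 120):
    f = stale_fraction(median_min * 60.0)
    print(f"median session {median_min:4d} min -> "
          f"~{f * NEIGHBORS:.1f} of {NEIGHBORS} entries stale per round "
          f"({100 * f:.1f}%)")
```

Running this shows the causal direction the paper exploits: shorter sessions (cause) sharply raise the stale-entry rate (effect), which is exactly the kind of metric on which protocols such as Chord can then be compared.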
Multivariate methods in pharmaceutical applications
JOURNAL OF CHEMOMETRICS, Issue 3 2002
Jon Gabrielsson

This review covers material published within the field of pharmacy in the last five years. Articles concerning experimental design, optimization and applications of multivariate techniques have been published, from factorial designs to multivariate data analysis, and the combination of the two in multivariate design. The number of publications on this topic testifies to the good results obtained in the studies. Much of the published material highlights the usefulness of experimental design, with many articles dealing with optimization, where much effort is spent on getting useful results. Examples of multivariate data analysis are comparatively few, but these methods are gaining in use. The employment of multivariate techniques in different applications has been reviewed; the examples in this review represent just a few of the possible pharmaceutical applications. A number of companies use experimental design as a standard tool in preformulation and in combination with response surface modeling. The properties of, e.g., a tablet can be optimized to fulfill a well-specified aim such as a specific release profile, hardness or disintegration time. However, none of the companies apply multivariate methods in all steps of the drug development process. As this is still very much a growing field, it is only a question of time before experimental design, optimization and multivariate data analysis are implemented throughout the entire formulation process, from preformulation to multivariate process control. Copyright © 2002 John Wiley & Sons, Ltd.

The synthesis of some 3-acylindoles revisited
JOURNAL OF HETEROCYCLIC CHEMISTRY, Issue 5 2007
Vedran Hasimbegovic

A study probing the scope of acylation of indoles with dicarboxylic acids in acetic anhydride has been performed, resulting in products incorporating 3-acylindole or 1-acylindole motifs depending on the choice of the acid reactant. Synthetically useful results were obtained only from reactions involving malonic acid or Meldrum's acid. Correlations to previous studies have also been made and discussed.

How should we estimate driving pressure to measure interrupter resistance in children?
PEDIATRIC PULMONOLOGY, Issue 9 2007
P. Seddon

Interrupter resistance (Rint) is a widely used measure of airway caliber, but concerns remain about repeatability and sensitivity. Some Rint variability may derive from the linear back-extrapolation algorithm (LBE 30/70) usually used to estimate driving pressure. To investigate whether other methods of estimating driving pressure could improve repeatability and sensitivity, we studied 39 children with asthma. Two measurements of Rint (each the median of 10 interruptions) were made 5 min apart, and 14 children had a third measurement after bronchodilator (RintBD). Mouth pressure transients were analyzed using several algorithms to compare the magnitude, repeatability, and sensitivity to bronchodilator change of the Rint values yielded. Algorithms taking driving pressure from later in the transient, predictably, yielded higher values of Rint than those which back-extrapolated to the time of valve closure. Algorithms which did not rely on back-extrapolation, including mean oscillation pressure (MOP) and mean plateau pressure (MP 30/70), had better repeatability. Sensitivity to detect change, calculated as the ratio of bronchodilator response to repeatability coefficient (ΔRint/CR), was also better for the non-extrapolating algorithms: MP 30/70 1.67 versus LBE 30/70 1.28 (P = 0.0004). Measuring Rint using techniques other than conventional back-extrapolation may give more consistent and clinically useful results, and these approaches merit further exploration. Pediatr Pulmonol. 2007; 42:757–763. © 2007 Wiley-Liss, Inc.
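The two families of algorithms compared in the study differ only in how driving pressure is read off the post-occlusion mouth-pressure transient. The sketch below contrasts linear back-extrapolation to valve closure (LBE 30/70) with the mean plateau pressure over the same window (MP 30/70); the synthetic transient and flow value are illustrative, not patient data.

```python
import numpy as np

t = np.linspace(0.0, 0.1, 1001)   # 100 ms after valve closure, in seconds
# Hypothetical mouth-pressure transient (kPa): rapid rise with a damped
# oscillation settling toward a plateau.
p = (0.8 * (1 - np.exp(-t / 0.008))
     + 0.05 * np.exp(-t / 0.02) * np.sin(2 * np.pi * 80 * t))
flow = 0.25                       # flow at the moment of interruption, L/s

window = (t >= 0.030) & (t <= 0.070)    # the 30-70 ms analysis window

# LBE 30/70: fit a line over 30-70 ms and back-extrapolate to t = 0
# (the time of valve closure).
slope, intercept = np.polyfit(t[window], p[window], 1)
p_lbe = intercept

# MP 30/70: simply average the pressure over the same window.
p_mp = p[window].mean()

print(f"Rint (LBE 30/70) = {p_lbe / flow:.2f} kPa.s/L")
print(f"Rint (MP 30/70)  = {p_mp / flow:.2f} kPa.s/L")
```

On a rising transient like this one, the back-extrapolated pressure lands below the plateau average, reproducing the study's observation that later-window algorithms yield higher Rint values than back-extrapolation.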
Designing for performance, Part 1: Aligning your HPT decisions from top to bottom
PERFORMANCE IMPROVEMENT, Issue 1 2007
Ryan Watkins

Wanting to improve individual and organizational performance is a worthwhile ambition, yet your success in accomplishing this relies heavily on the suitable selection, design, and development of performance technologies. Only when capable performance technologies are systematically aligned with the desired results of your organization and its partners will you achieve sustainable performance improvements. In this article, the first of a three-part series, you will find a systematic process for initiating the design of a performance system that will accomplish useful results. From identifying the performance expectations of internal and external partners to justifying the performance objectives you establish as guides for future decision making, the systematic processes described in this article provide the initial tools for successfully selecting an integrated set of performance technologies that have the capacity to accomplish valuable results.

Application of QF-PCR for the prenatal assessment of discordant monozygotic twins for fetal sex
PRENATAL DIAGNOSIS, Issue 7 2007
F. J. Fernández-Martínez

Objective: To establish the utility of quantitative fluorescent polymerase chain reaction (QF-PCR) for determining the zygosity of multiple pregnancies, as well as for defining the origin of the most frequent aneuploidies in amniotic fluid samples.
Methods: We describe the case of a monochorionic (MC) diamniotic (DA) pregnancy with phenotypically discordant twins (nuchal cystic hygroma and non-immune hydrops in twin A; no anomalies in twin B). QF-PCR was performed for rapid prenatal diagnosis in uncultured amniocytes and subsequently in cultured cells. Polymorphic markers for chromosomes X, Y, 13, 18 and 21 were used for the determination of zygosity as well as sex chromosome aneuploidy.
Results: Twin A showed a Turner syndrome (TS) mosaicism pattern by QF-PCR in uncultured amniocytes, and the monozygotic origin of the pregnancy was determined. Interphase fluorescence in situ hybridization (I-FISH) in this sample showed X0/XY mosaicism (83/17%). Cytogenetic analysis revealed a 45,X0 karyotype in twin A and a 46,XY karyotype in twin B.
Conclusions: QF-PCR is a reliable tool for determining zygosity independently of chorionicity and fetal sex in twin pregnancies. Testing both direct and cultured cells can provide useful results for genetic counselling in chromosomal mosaicisms. Copyright © 2007 John Wiley & Sons, Ltd.

Re-analysis of 178 previously unidentifiable Mycobacterium isolates in the Netherlands in 1999–2007
CLINICAL MICROBIOLOGY AND INFECTION, Issue 9 2010
J. Van Ingen

Clin Microbiol Infect 2010; 16: 1470–1474. Nontuberculous mycobacteria (NTM) that cannot be identified to the species level by reverse line blot hybridization assays and sequencing of the 16S rRNA gene pose a challenge for reference laboratories. However, the number of 16S rRNA gene sequences added to online public databases is growing rapidly, as is the number of Mycobacterium species. Therefore, we re-analysed 178 Mycobacterium isolates with 53 previously unmatched 16S rRNA gene sequences, submitted to our national reference laboratory in 1999–2007. All sequences were again compared with the GenBank database sequences, and the isolates were re-identified using two commercially available identification kits targeting separate genetic loci. Ninety-three of the 178 isolates (52%) with 20 different 16S rRNA gene sequences could be assigned to validly published species. The two reverse line blot assays gave false identifications for three recently described species, and 22 discrepancies were recorded between the identification results of the two assays. Identification by reverse line blot assays underestimates the genetic heterogeneity among NTM. This heterogeneity can be clinically relevant, because particular sub-groupings of species can cause specific disease types. Therefore, sequence-based identification is preferable, at least at the reference laboratory level, although the exact targets needed for clinically useful results remain to be established. The number of NTM species in the environment is probably so high that unidentifiable clinical isolates should be given separate species status only if this is clinically meaningful.
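At its core, the sequence-based identification favoured here is a best-match-above-threshold decision. The toy sketch below scores a query fragment against reference sequences by pairwise identity and accepts a species call only above a cutoff; the sequences are invented, the 98.7% figure is a commonly cited 16S species criterion rather than anything from this study, and real pipelines align full genes against curated databases.

```python
from difflib import SequenceMatcher

# Invented toy reference fragments; real references come from curated
# databases such as GenBank.
references = {
    "M. avium (toy ref)":          "ACGTGCCAGCAGCCGCGGTAATACGTAGGGTGC",
    "M. intracellulare (toy ref)": "ACGTGCCAGCAGCCGCGGTAATACGTAGGATGC",
    "M. tuberculosis (toy ref)":   "ACGTGCCAGCAGCCGCGGTAACACGTAGGGTCC",
}
query = "ACGTGCCAGCAGCCGCGGTAATACGTAGGGTGC"
THRESHOLD = 0.987   # widely cited 16S identity cutoff for a species call

# Score the query against every reference and keep the best match.
best_name, best_score = max(
    ((name, SequenceMatcher(None, query, ref).ratio())
     for name, ref in references.items()),
    key=lambda pair: pair[1],
)
if best_score >= THRESHOLD:
    print(f"identified as {best_name} ({100 * best_score:.1f}% identity)")
else:
    print(f"no species call: best match {best_name} "
          f"at {100 * best_score:.1f}% identity")
```

Queries falling below the cutoff are exactly the "previously unmatched" sequences of the title, which is why the study's re-analysis against a grown database recovered species assignments for roughly half of the isolates.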