Prediction Methods
Selected Abstracts

CN algorithm and long-lasting changes in reported magnitudes: the case of Italy
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2000
A. Peresan

Prediction methods based on seismic precursors, which hence assume that catalogues contain the information necessary to predict earthquakes, are sometimes criticised for their sensitivity to unavoidable catalogue errors and to possible undeclared variations in the evaluation of reported magnitudes. We consider a real example and discuss the effect on CN predictions of a long-lasting underestimation of the reported magnitudes. Starting approximately in 1988, the CN functions in Central Italy show an anomalous behaviour, not associated with TIPs, that indicates an unusual absence of moderate events. To investigate this phenomenon, the magnitudes given in the catalogue used, which since 1980 is defined by the ING bulletins, are compared with the magnitudes reported by the global NEIC catalogue (National Earthquake Information Center, USGS, USA) and by the regional LDG bulletins issued at the Laboratoire de Détection et de Géophysique, Bruyères-le-Châtel, France. The comparison between the ING bulletins and the NEIC catalogue considers the local (ML) and duration (Md) magnitudes, first within the Central region and then extended to the whole Italian territory. To check the consistency of the conclusions drawn from the ING and NEIC data, the comparison of local magnitudes is extended to a third data set, the LDG bulletins. The differences between the duration magnitudes Md reported by ING and NEIC since 1983 appear quite constant with time. Starting in 1987, an average underestimation of about 0.5 can be attributed to the ML reported by ING for the Central region; this difference decreases to about 0.2 when the whole Italian territory is considered. The anomalous behaviour of the CN functions disappears if a magnitude correction of +0.5 is applied to the ML reported in the ING bulletins. However, such a simple magnitude shift cannot restore the real features of the seismic flow, and the ING bulletins are therefore not suitable for CN algorithm application. [source]

Effect of strain waveform on creep-fatigue life for Sn-8Zn-3Bi solder
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 5 2007
T. Yamamoto

This paper describes the creep-fatigue life of Sn-8Zn-3Bi solder under push-pull loading. Creep-fatigue tests were carried out on Sn-8Zn-3Bi specimens using fast-fast, fast-slow, slow-fast, slow-slow and hold-time strain waveforms. Creep-fatigue lives in the slow-slow and hold-time waveforms showed a small reduction from the fast-fast lives, but those in the slow-fast and fast-slow waveforms showed a significant reduction. Conventional creep-fatigue life prediction methods were applied to the experimental data and their applicability is discussed. The creep-fatigue characteristics of Sn-8Zn-3Bi were compared with those of Sn-3.5Ag and Sn-37Pb. [source]
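The solder abstract does not spell out which conventional life-prediction methods were applied; a common textbook approach is linear damage summation, combining a Coffin–Manson fatigue term with a creep time-fraction term for the slow or hold portions of the waveform. The sketch below illustrates only that general idea; the material constants and rupture time are hypothetical placeholders, not measured values for Sn-8Zn-3Bi.

```python
# Minimal sketch of creep-fatigue life prediction by linear damage summation
# (fatigue cycle fraction + creep time fraction). All constants below are
# hypothetical placeholders, not measured values for Sn-8Zn-3Bi.

def fatigue_cycles_to_failure(strain_range, ef=0.3, c=0.6):
    """Coffin-Manson law: strain_range = ef * Nf**(-c), solved for Nf."""
    return (strain_range / ef) ** (-1.0 / c)

def predicted_life(strain_range, hold_time_s, rupture_time_s):
    d_fatigue = 1.0 / fatigue_cycles_to_failure(strain_range)  # damage/cycle
    d_creep = hold_time_s / rupture_time_s                     # time fraction
    return 1.0 / (d_fatigue + d_creep)      # failure when total damage = 1

# A waveform with a hold accrues extra creep damage per cycle, shortening
# the predicted life relative to a fast-fast waveform:
print(predicted_life(0.01, hold_time_s=0.0, rupture_time_s=3.6e6))
print(predicted_life(0.01, hold_time_s=600.0, rupture_time_s=3.6e6))
```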
Major histocompatibility complex class I binding predictions as a tool in epitope discovery
IMMUNOLOGY, Issue 3 2010
Claus Lundegaard

Over the last decade, in silico models of the major histocompatibility complex (MHC) class I pathway have developed significantly. Previously, peptide binding could be reliably modelled only for a few major human or mouse histocompatibility molecules; now, high-accuracy predictions are available for any human leucocyte antigen (HLA)-A or -B molecule with a known protein sequence. Furthermore, peptide binding to MHC molecules from several non-human primates, mouse strains and other mammals can now be predicted. In this review, a number of different prediction methods are briefly explained, highlighting the most useful and historically important. Selected case stories, in which these 'reverse immunology' systems have been used in actual epitope discovery, are briefly reviewed. We conclude that this new generation of epitope discovery systems has become a highly efficient tool for epitope discovery, and recommend that the less accurate prediction systems of the past be abandoned as obsolete. [source]

Near-optimum short-term fade prediction on satellite links at Ka and V-bands
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1 2008
Andrew P. Chambers

Several short-term predictors of rain attenuation are implemented and tested using data recorded from a satellite link in Southern England, and a comparison is made in terms of the root-mean-square error and the cumulative distribution of under-predictions. A hybrid of an autoregressive moving average and an adaptive linear element predictor is created that makes use of Gauss–Newton and gradient-direction coefficient updates; it exhibits the best prediction error performance of all the prediction methods in the majority of cases. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Modified methodology for computing interference in LEO satellite environments
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 6 2003
Raúl Chávez Santiago

Computing interference is very important in satellite network design in order to assure electromagnetic compatibility (EMC) with other radiocommunication systems. Different methods exist for computing interference in geostationary (GEO) satellite systems, including conventional methods based on link budget equations and alternative methods such as the increase in noise temperature. Computing interference in low earth orbit (LEO) systems, however, poses a different problem. Owing to the special characteristics of these orbits, the elevation angle at any site changes continuously over time, which implies a time-dependent change in the propagation path length between an interfering transmitter and an interfered-with receiver and in the discrimination provided by the transmitting and/or receiving antennas. Conventional interference prediction methods developed for fixed links must therefore be adapted to the case of LEO systems. To overcome this problem, this paper proposes a mathematical model that characterizes the path-length variations by an average value obtained from the probability density function of the varying distance between an interfering transmitter and an interfered-with receiver. This average path length enables the use of conventional link budget methods and reduces the computation time for the evaluation of interference in LEO satellite environments. Two practical examples show possible applications of the proposed model. Copyright © 2003 John Wiley & Sons, Ltd. [source]
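To illustrate the averaging idea in the LEO interference paper (this is not the paper's derivation), the sketch below computes a mean slant range by weighting the elevation-dependent path length with an assumed elevation-angle density, then feeds the mean into a conventional free-space path-loss term. The altitude, elevation mask and uniform density are all assumptions for illustration; the paper derives the actual probability density function of the interferer-receiver distance.

```python
# Sketch: replace the time-varying LEO path length by its mean under an
# assumed elevation-angle density, then reuse a conventional link budget term.
import numpy as np

R_E = 6371.0   # Earth radius, km
h = 780.0      # assumed LEO altitude, km

def slant_range_km(elev_deg):
    """Ground-to-satellite distance at a given elevation angle."""
    e = np.radians(elev_deg)
    return np.sqrt((R_E + h) ** 2 - (R_E * np.cos(e)) ** 2) - R_E * np.sin(e)

elev = np.linspace(10.0, 90.0, 1000)   # visible elevations above a 10 deg mask
pdf = np.ones_like(elev)               # assumed uniform density (illustrative)
d_mean = np.average(slant_range_km(elev), weights=pdf)

# Free-space path loss with d in km and f in MHz:
f_mhz = 2000.0
fspl_db = 32.45 + 20 * np.log10(d_mean) + 20 * np.log10(f_mhz)
print(f"mean slant range {d_mean:.0f} km -> FSPL {fspl_db:.1f} dB")
```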
Potential of combined spaceborne infrared and microwave radiometry for near real-time rainfall attenuation monitoring along earth-satellite links
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2001
Frank S. Marzano

The objective of this paper is to investigate how spaceborne remote sensors, and their derived products, can be exploited to optimize the performance of a satellite communication system in the presence of precipitating clouds along the path. The complementarity between sun-synchronous microwave (MW) and geostationary infrared (IR) radiometry for monitoring the earth's atmosphere is discussed, and their potential as a rain detection system within near real-time countermeasure techniques for earth-satellite microwave links is analysed. A general approach is outlined, consisting of estimating rainfall intensity and attenuation with polar-orbiting microwave radiometers and temporally tracking the rainfall areas with geostationary infrared radiometers. Multiple-regression algorithms for predicting rainfall attenuation from spaceborne brightness temperatures and from surface rain rate, trained by radiative transfer and cloud models, are illustrated. A maximum-likelihood technique for discriminating stratiform and convective rainfall from spaceborne brightness temperatures is also described. The differences between attenuation estimates derived from layered raining-cloud structures and those obtained from simple rain slabs, as recommended by ITU-R, are quantified as well. The proposed attenuation prediction methods are tested using raingauge and Italsat data acquired at Spino d'Adda (Italy) during 1994. Finally, the statistical method, based on the probability matching technique, adopted to combine MW and IR data for retrieving and tracking precipitating cloud systems in terms of path attenuation and accumulated rain at ground is described, together with its application to a case study over the Mediterranean area during October 1998. Copyright © 2001 John Wiley & Sons, Ltd. [source]
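As background for the "simple rain slab" baseline that the abstract contrasts with layered raining-cloud structures: ITU-R style slab models reduce path attenuation to a power law in rain rate multiplied by a path length. The sketch below shows that form; the k and alpha coefficients are rough placeholder values for a Ka-band link, not the regression coefficients trained in the paper.

```python
# Sketch of a rain-slab attenuation estimate: specific attenuation
# gamma = k * R**alpha (dB/km) times the path length through uniform rain.
# k and alpha are placeholder values, not the paper's trained coefficients.

def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.09, alpha=1.05):
    gamma_db_per_km = k * rain_rate_mm_h ** alpha
    return gamma_db_per_km * path_km

for rr in (5, 20, 50):   # stratiform through convective rain rates
    print(f"{rr:3d} mm/h -> {rain_attenuation_db(rr, path_km=4.0):5.1f} dB")
```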
N-Ace: Using solvent accessibility and physicochemical properties to identify protein N-acetylation sites
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2010
Tzong-Yi Lee

Protein acetylation, which is catalyzed by acetyltransferases, is a type of post-translational modification crucial to numerous essential biological processes, including transcriptional regulation, apoptosis, and cytokine signaling. As the experimental identification of protein acetylation sites is time-consuming and labor-intensive, several computational approaches have been developed to identify candidates for experimental validation. In this work, solvent accessibility and the physicochemical properties of proteins are used to identify acetylated alanine, glycine, lysine, methionine, serine, and threonine. A two-stage support vector machine was applied to learn computational models from combinations of amino acid sequences with the accessible surface area and physicochemical properties of proteins. The predictive accuracy thus achieved is 5% to 14% higher than that of models trained using only amino acid sequences. Additionally, the substrate specificity of the acetylated sites was investigated in detail with reference to the subcellular colocalization of acetyltransferases and acetylated proteins. The proposed method, N-Ace, is evaluated using independent test sets for the various acetylated residues, and predictive accuracies of 90% were achieved, indicating that the performance of N-Ace is comparable with that of other acetylation prediction methods. N-Ace not only provides a user-friendly input/output interface but also is a creative method for predicting protein acetylation sites. This novel analytical resource is now freely available at http://N-Ace.mbc.NCTU.edu.tw/. © 2010 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]

Prediction of protein folding rates from primary sequences using hybrid sequence representation
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 5 2009
Yingfu Jiang

The ability to predict protein folding rates constitutes an important step in understanding the overall folding mechanisms. Although many prediction methods are structure based, successful predictions can also be obtained from the sequence alone. We developed a novel method, prediction of protein folding rates (PPFR), for the prediction of protein folding rates from protein sequences. PPFR implements a linear regression model for each of the mainstream folding dynamics, including two-, multi-, and mixed-state proteins. The proposed method provides predictions characterized by strong correlations with the experimental folding rates, equal to 0.87 for the two- and multi-state proteins and 0.82 for the mixed-state proteins when evaluated with an out-of-sample jackknife test. Based on in-sample and out-of-sample tests, PPFR's predictions are shown to be better than those of most other sequence-only and structure-based predictors, and complementary to the predictions of the most recent sequence-based QRSM method. We show that the simultaneous incorporation of several characteristics, including the sequence, physicochemical properties of residues, and predicted secondary structure, provides improved quality. This hybridized prediction model was analysed to reveal the complementary factors that can be used in tandem to predict folding rates. We show that bigger proteins require more time to fold; that higher helical and coil content and the presence of Phe, Asn, and Gln may accelerate the folding process; that the inclusion of Ile, Val, Thr, and Ser may slow down folding; and that, for two-state proteins, increased β-strand content may decelerate folding. Finally, PPFR provides strong correlation when predicting sequences with low similarity. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]
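The PPFR abstract describes one linear regression per folding-dynamics class over sequence-derived features. A minimal sketch of that model shape follows; the feature choices and the training numbers are invented for illustration and are not the published model or data.

```python
# Sketch of a per-class linear regression for ln(folding rate), in the spirit
# of PPFR. Features and targets below are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical features: [length, helix+coil fraction, Ile/Val/Thr/Ser fraction]
X = np.array([[60, 0.55, 0.18],
              [85, 0.40, 0.25],
              [120, 0.25, 0.30],
              [140, 0.20, 0.33]])
y = np.array([8.5, 5.1, 1.2, -0.4])             # hypothetical ln(k_f) values

two_state_model = LinearRegression().fit(X, y)  # one model per dynamics class
print(two_state_model.predict([[100, 0.35, 0.22]]))
```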
Bootstrapping Financial Time Series
JOURNAL OF ECONOMIC SURVEYS, Issue 3 2002
Esther Ruiz

It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behaviour of returns, inference and prediction methods based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited to the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction in financial time series. With regard to inference, bootstrap techniques have been applied to obtain the sampling distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility, and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers in which the bootstrap procedures used are not adequate. [source]

Forecasting market impact costs and identifying expensive trades
JOURNAL OF FORECASTING, Issue 1 2008
Jacob A. Bikker

Often, a relatively small group of trades causes the major part of the trading costs on an investment portfolio. Consequently, reducing the trading costs of comparatively few expensive trades would already result in substantial savings on total trading costs. Since trading costs depend to some extent on steering variables, investors can try to lower trading costs by carefully controlling these factors. As a first step in this direction, this paper focuses on identifying expensive trades before actual trading takes place. However, forecasting market impact costs is notoriously difficult, and traditional methods fail. We therefore propose two alternative methods for forming expectations about future trading costs. Applied to the equity trades of the world's second-largest pension fund, both methods succeed in filtering out a considerable number of trades with high trading costs and substantially outperform no-skill prediction methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]

On a family of finite moving-average trend filters for the ends of series
JOURNAL OF FORECASTING, Issue 2 2002
Alistair G. Gray

A family of finite end filters is constructed using a minimum-revisions criterion and based on a local dynamic model operating within the span of a given finite central filter. These end filters are equivalent to evaluating the central filter with unavailable future observations replaced by constrained optimal linear predictions. Two prediction methods are considered: best linear unbiased prediction and best linear biased prediction where the bias is time invariant. The properties of these end filters are determined. In particular, they are compared to X-11 end filters and to the case where the central filter is evaluated with unavailable future observations predicted by global ARIMA models, as in X-11-ARIMA or X-12-ARIMA. Copyright © 2002 John Wiley & Sons, Ltd. [source]
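The end filters in the trend-filter abstract are equivalent to running the central filter with unavailable future observations replaced by optimal linear predictions. The sketch below conveys the mechanism with a deliberately crude stand-in: local straight-line forecasts extend the series so that a symmetric moving average can be applied right up to both ends. The constrained optimal predictors derived in the paper are more refined than this.

```python
# Sketch: extend the series with local linear forecasts, then apply the
# symmetric central moving average everywhere. The straight-line forecasts
# stand in for the paper's constrained optimal linear predictors.
import numpy as np

def trend_smooth(y, span=7):
    half = span // 2
    t = np.arange(span)
    b_slope, b_int = np.polyfit(t, y[:span], 1)           # line fit at start
    back = b_int + b_slope * np.arange(-half, 0)          # backcasts
    f_slope, f_int = np.polyfit(t, y[-span:], 1)          # line fit at end
    fwd = f_int + f_slope * np.arange(span, span + half)  # forecasts
    ext = np.concatenate([back, y, fwd])
    return np.convolve(ext, np.ones(span) / span, mode="valid")

y = np.cumsum(np.random.default_rng(0).normal(0.2, 1.0, 60))  # trending series
print(trend_smooth(y).shape)   # one smoothed value per observation: (60,)
```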
Digital soil mapping using artificial neural networks
JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 1 2005
Thorsten Behrens

In the context of a growing demand for high-resolution spatial soil information for environmental planning and modeling, fast and accurate prediction methods are needed to provide high-quality digital soil maps. This study therefore focuses on the development of a methodology based on artificial neural networks (ANN) that can spatially predict soil units. Within a test area in Rhineland-Palatinate (Germany), covering about 600 km², a digital soil map was predicted. Based on feed-forward ANN with the resilient backpropagation learning algorithm, the optimal network topology was determined to have one hidden layer of 15 to 30 cells, depending on the soil unit to be predicted. To describe the occurrence of a soil unit and to train the ANN, 69 different terrain attributes, 53 geologic-petrographic units, and 3 types of land use were extracted from existing maps and databases. 80% of the predicted soil units (n = 33) showed training errors (mean square error) of the ANN below 0.1, and 43% were even below 0.05. Validation returned a mean accuracy of over 92% for the trained network outputs. Altogether, the presented methodology, based on ANN and an extended digital terrain-analysis approach, is time-saving and cost-effective and provides remarkable results. [source]

Prediction of residues in discontinuous B-cell epitopes using protein 3D structures
PROTEIN SCIENCE, Issue 11 2006
Pernille Haste Andersen

Discovery of discontinuous B-cell epitopes is a major challenge in vaccine design. Previous epitope prediction methods have mostly been based on protein sequences and are not very effective. Here, we present DiscoTope, a novel method for discontinuous epitope prediction that uses protein three-dimensional structural data. The method is based on amino acid statistics, spatial information, and surface accessibility in a compiled data set of discontinuous epitopes determined by X-ray crystallography of antibody/antigen protein complexes. DiscoTope is the first method to focus explicitly on discontinuous epitopes. We show that the new structure-based method performs better at predicting residues of discontinuous epitopes than methods based solely on sequence information, and that it can successfully predict epitope residues that have been identified by different techniques. DiscoTope detects 15.5% of residues located in discontinuous epitopes with a specificity of 95%. At this level of specificity, the conventional Parker hydrophilicity scale for predicting linear B-cell epitopes identifies only 11.0% of residues located in discontinuous epitopes. Predictions by the DiscoTope method can guide experimental epitope mapping in both rational vaccine design and the development of diagnostic tools, and may lead to more efficient epitope identification. [source]
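The DiscoTope abstract does not give the scoring function; schematically, structure-based epitope predictors of this kind combine a residue propensity averaged over spatial neighbours with a surface-exposure or burial term. The sketch below follows that pattern; the propensity values, neighbourhood radius and burial penalty are illustrative assumptions, not the published parameters.

```python
# Schematic DiscoTope-like score: neighbour-averaged epitope propensity minus
# a contact-number (burial) penalty. All numbers are illustrative assumptions.
import numpy as np

propensity = {"ALA": -0.3, "ASN": 0.5, "LYS": 0.4, "VAL": -0.6}  # hypothetical

def epitope_scores(residues, coords, radius=10.0, beta=0.1):
    coords = np.asarray(coords, dtype=float)
    scores = []
    for i in range(len(residues)):
        dist = np.linalg.norm(coords - coords[i], axis=1)
        nbrs = np.where(dist < radius)[0]            # spatial neighbourhood
        avg_prop = np.mean([propensity[residues[j]] for j in nbrs])
        burial = len(nbrs) - 1                       # contact-number proxy
        scores.append(avg_prop - beta * burial)      # exposed + enriched = high
    return scores

residues = ["ASN", "LYS", "ALA", "VAL"]
coords = [(0, 0, 0), (4, 0, 0), (8, 0, 0), (30, 0, 0)]  # toy C-alpha positions
print(epitope_scores(residues, coords))
```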
Long membrane helices and short loops predicted less accurately
PROTEIN SCIENCE, Issue 12 2002
Chien Peter Chen

Abbreviations: 3D, three-dimensional; DSSP, program assigning secondary structure (Kabsch and Sander 1983); HMM, hidden Markov model; PDB, Protein Data Bank of experimentally determined 3D structures of proteins (Bernstein et al. 1977; Berman et al. 2000); SWISS-PROT, database of protein sequences (Bairoch and Apweiler 2000); TM, transmembrane; TMH, transmembrane helix

Low-resolution experiments suggest that most membrane helices span 17–25 residues and that most loops between two helices are longer than 15 residues. Both constraints have been used explicitly in the development of prediction methods. Here, we compared the largest possible sequence-unique data sets from high- and low-resolution experiments. For the high-resolution data, we found that only half of the helices fall into the expected length interval and that half of the loops were shorter than 10 residues. We compared the accuracy of detecting short loops and long helices for 28 advanced and simple prediction methods: all methods predicted short loops less accurately than longer ones. In particular, loops shorter than 7 residues appeared to be very difficult to detect by current methods. Similarly, all methods tended to be more accurate for longer than for shorter helices. However, helices with more than 32 residues were predicted less accurately than all other helices. Our findings may suggest particular strategies for improving predictions of membrane helices. [source]

Prediction of partial membrane protein topologies using a consensus approach
PROTEIN SCIENCE, Issue 12 2002
Johan Nilsson

Abbreviations: PCT, partial consensus topology; TMH, transmembrane helix

We have developed a method to reliably identify partial membrane protein topologies using the consensus of five topology prediction methods. When evaluated on a test set of experimentally characterized proteins, we find that approximately 90% of the partial consensus topologies are correctly predicted in membrane proteins from prokaryotic as well as eukaryotic organisms. Whole-genome analysis reveals that a reliable partial consensus topology can be predicted for ~70% of all membrane proteins in a typical bacterial genome and for ~55% of all membrane proteins in a typical eukaryotic genome. The average fraction of sequence length covered by a partial consensus topology is 44% for the prokaryotic proteins and 17% for the eukaryotic proteins in our test set, and similar numbers are found when the algorithm is applied to whole genomes. Reliably predicted partial topologies may simplify experimental determinations of membrane protein topology. [source]
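The consensus idea in the topology paper can be shown compactly: given per-residue topology labels from several predictors (i = inside, M = membrane helix, o = outside), a partial consensus keeps only positions where the methods agree. The five predictor outputs below are toy strings, and full per-position agreement is a simplification of the paper's actual procedure.

```python
# Sketch of a partial consensus topology: keep a label only where all
# predictors agree, mark disagreements with '?'. Inputs are toy strings.

def partial_consensus(predictions):
    return "".join(col[0] if len(set(col)) == 1 else "?"
                   for col in zip(*predictions))

preds = ["iiMMMMooooMMMMii",
         "iiMMMMoooMMMMMii",
         "iiMMMMooooMMMMii",
         "iMMMMMooooMMMMii",
         "iiMMMMooooMMMMii"]
print(partial_consensus(preds))   # -> i?MMMMooo?MMMMii
```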
Genomic-scale comparison of sequence- and structure-based methods of function prediction: Does structure provide additional insight?
PROTEIN SCIENCE, Issue 5 2001
Jacquelyn S. Fetrow

A function annotation method using the sequence-to-structure-to-function paradigm is applied to the identification of all disulfide oxidoreductases in the Saccharomyces cerevisiae genome. The method identifies 27 sequences as potential disulfide oxidoreductases. All previously known thioredoxins, glutaredoxins, and disulfide isomerases are correctly identified. Three of the 27 predictions are probable false positives. Three novel predictions, which have subsequently been experimentally validated, are presented. Two additional novel predictions suggest a disulfide oxidoreductase regulatory mechanism for two subunits (OST3 and OST6) of the yeast oligosaccharyltransferase complex. Based on homology, this prediction can be extended to a potential tumor suppressor gene, N33, in humans, whose biochemical function was not previously known. Attempts to obtain a folded, active N33 construct to test the prediction were unsuccessful. The results show that structure prediction coupled with biochemically relevant structural motifs is a powerful method for the function annotation of genome sequences and can provide more detailed, robust predictions than function prediction methods that rely on sequence comparison alone. [source]

Cascaded multiple classifiers for secondary structure prediction
PROTEIN SCIENCE, Issue 6 2000
Mohammed Ouali

We describe a new classifier for protein secondary structure prediction that is formed by cascading together different types of classifiers using neural networks and linear discrimination. The new classifier achieves an accuracy of 76.7% (assessed by a rigorous full jackknife procedure) on a new nonredundant data set of 496 nonhomologous sequences (obtained from G.J. Barton and J.A. Cuff). This database was especially designed to train and test protein secondary structure prediction methods, and it uses a more stringent definition of homologous sequence than previous studies. We show that it is possible to design classifiers that can highly discriminate the three classes (H, E, C) with an accuracy of up to 78% for β-strands, using only a local window and resampling techniques. This indicates that the importance of long-range interactions for the prediction of β-strands has probably been overestimated previously. [source]
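As a schematic of the cascading idea in the secondary-structure abstract (not the authors' actual architecture), the sketch below stacks a small neural network and a linear discriminant: the network's per-class probabilities for H, E and C become the input features of the linear stage. The window features and labels are random stand-ins for real sequence profiles, and a real cascade would fit the second stage on held-out first-stage predictions rather than on the training set.

```python
# Sketch of a two-stage cascade: neural-network class probabilities (H, E, C)
# feed a linear discriminant. Data are random stand-ins for profile windows.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20 * 15))   # toy 15-residue window, 20-dim profiles
y = rng.integers(0, 3, size=300)      # 0 = H, 1 = E, 2 = C (random labels)

stage1 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500,
                       random_state=0).fit(X, y)
stage2 = LinearDiscriminantAnalysis().fit(stage1.predict_proba(X), y)

print(stage2.predict(stage1.predict_proba(X[:5])))   # final cascaded calls
```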
Noncanonical conformation of CDR L1 in the anti-IL-23 antibody CNTO4088
ACTA CRYSTALLOGRAPHICA SECTION F (ELECTRONIC), Issue 3 2010
Alexey Teplyakov

CNTO4088 is a monoclonal antibody to human IL-23. The X-ray structure of the Fab fragment revealed an unusual noncanonical conformation of CDR L1. Most antibodies with the κ light chain exhibit a canonical structure for CDR L1 in which residue 29 anchors the CDR loop to the framework. Analysis of the residues believed to define the conformation of CDR L1 did not explain why it should not adopt a canonical conformation in this antibody. This makes CNTO4088 a benchmark case for developing prediction methods and structure-modeling tools. [source]

Application of statistical potentials to protein structure refinement from low resolution ab initio models
BIOPOLYMERS, Issue 4 2003
Hui Lu

Recently, ab initio protein structure prediction methods have advanced sufficiently that they often assemble the correct low-resolution structure of a protein. To enhance the speed of conformational search, many ab initio prediction programs adopt a reduced protein representation. However, for drug design purposes, higher-quality structures are probably needed. To achieve this refinement, it is natural to use a more detailed heavy-atom representation. Here, as opposed to costly implicit or explicit solvent molecular dynamics simulations, knowledge-based heavy-atom pair potentials were employed. By way of illustration, we tried to improve the quality of the predicted structures obtained from the ab initio prediction program TOUCHSTONE by three methods: local constraint refinement, reduced predicted tertiary contact refinement, and statistical pair potential guided molecular dynamics. Sixty-seven predicted structures from 30 small proteins (less than 150 residues in length) representing different structural classes (α, β, α/β) were examined. In 33 cases, the root mean square deviation (RMSD) from the native structure improved by more than 0.3 Å; in 19 cases, the improvement was more than 0.5 Å, and sometimes as large as 1 Å. In only seven (four) cases did the refinement procedure increase the RMSD by more than 0.3 (0.5) Å. For the remaining structures, the refinement procedures changed the structures by less than 0.3 Å. While modest, the performance of the current refinement methods is better than the published refinement results obtained using standard molecular dynamics. © 2003 Wiley Periodicals, Inc. Biopolymers 70: 575–584, 2003 [source]
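The refinement abstract relies on knowledge-based pair potentials, which are conventionally built by inverse Boltzmann statistics: E(r) = -kT ln(P_observed(r) / P_reference(r)), with pair-distance histograms harvested from known structures. The sketch below reproduces that construction on synthetic distances; the distributions, bin range and kT value are assumptions for illustration, not the potentials used with TOUCHSTONE.

```python
# Sketch of an inverse-Boltzmann statistical pair potential:
# E(r) = -kT * ln(P_obs(r) / P_ref(r)). Distances are synthetic stand-ins
# for pair distances that would be harvested from the PDB.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(5.5, 1.0, 10_000)     # e.g. Cb-Cb distances for one pair type
ref = rng.uniform(2.0, 12.0, 10_000)   # reference state: pooled distances

bins = np.linspace(2.0, 12.0, 41)
p_obs, _ = np.histogram(obs, bins=bins, density=True)
p_ref, _ = np.histogram(ref, bins=bins, density=True)

kT = 0.6   # roughly kcal/mol at 300 K
with np.errstate(divide="ignore"):
    energy = -kT * np.log(p_obs / p_ref)   # low energy where pair is enriched

centers = 0.5 * (bins[:-1] + bins[1:])
print(f"energy minimum near r = {centers[np.argmin(energy)]:.2f} A")
```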