Testing Procedure

Distribution by Scientific Domains

Kinds of Testing Procedure

  • hypothesis testing procedure
  • multiple testing procedure


  • Selected Abstracts


    Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    BIOMETRICAL JOURNAL, Issue 5 2008
    Sandrine Dudoit
Abstract This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
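For context, the classical Benjamini and Hochberg (1995) linear step-up procedure that serves as the baseline comparator can be sketched in a few lines of Python; the p-values and level below are illustrative, not taken from the study.

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg (1995) linear step-up procedure.

    Rejects the hypotheses whose p-values fall at or below the largest
    rank k with p_(k) <= (k/m) * q, controlling FDR = E[Vn/(Vn + Sn)]
    at level q under independence (or positive dependence).
    """
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k whose p-value clears its step-up threshold.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k_max = rank
    # Reject the k_max smallest p-values.
    return sorted(order[:k_max])

# Illustrative example: ten hypotheses, the two smallest p-values survive.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, q=0.05)
```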


    Multiple Testing Procedures with Applications to Genomics by DUDOIT, S. and van der LAAN, M. J.

    BIOMETRICS, Issue 1 2009
Ruth Heller. Article first published online: 17 MAR 200
    No abstract is available for this article. [source]


    Testing procedure to obtain reliable potentiodynamic polarization curves on type 310S stainless steel in alkali carbonate melts

    MATERIALS AND CORROSION/WERKSTOFFE UND KORROSION, Issue 4 2006
    S. Frangini
Abstract Potentiodynamic polarization measurements have been employed to evaluate the anodic behavior of a type 310S stainless steel in the eutectic Li/K molten carbonate. In general, the electrochemical tests yield useful information to predict the stability of the oxide films formed on the surface at the initial period of corrosion, although some precaution is required in the testing procedure as the reproducibility of results is seen to be adversely affected by the passage of large currents. Especially when the steel is in a passive state, erratic results are easily observed if the corrosion layer is being damaged by uncontrolled large currents. This is because the acid-base properties of the melt are susceptible to deep changes by applied currents in the milli-ampere range, resulting in hysteresis phenomena in the polarization plot. Hysteresis is caused, on one hand, by acidic dissolution of the passive layer at high anodic currents and, on the other hand, by increased melt basicity due to oxide ion build-up at high cathodic currents. An optimized testing procedure is therefore suggested that minimizes these effects by imposing a 2 mA/cm2 threshold current during polarization measurements. Moreover, the conditions for the applicability of the linear polarization technique to estimate kinetic parameters have been discussed in relation to the corrosion mechanisms analysed by impedance spectra. It is concluded that the presence of diffusional impedance terms and the formation of surface resistive films in molten carbonates may result in unreliable polarization resistance values obtained with the linear polarization technique. [source]
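The linear polarization technique discussed at the end of the abstract typically converts a measured polarization resistance into a corrosion rate through the Stern-Geary relation; a minimal sketch, with illustrative Tafel slopes and resistance rather than values from the paper:

```python
def stern_geary_icorr(r_p, beta_a, beta_c):
    """Corrosion current density (A/cm^2) from polarization resistance.

    r_p    : polarization resistance from the linear fit, ohm*cm^2
    beta_a : anodic Tafel slope, V/decade
    beta_c : cathodic Tafel slope, V/decade
    Uses i_corr = B / r_p with B = beta_a*beta_c / (2.303*(beta_a + beta_c)).
    """
    b = beta_a * beta_c / (2.303 * (beta_a + beta_c))
    return b / r_p

# Illustrative values: symmetric 120 mV/decade slopes, 5 kohm*cm^2.
i_corr = stern_geary_icorr(5000.0, 0.12, 0.12)
```

As the abstract notes, the fitted r_p (and hence this estimate) becomes unreliable when diffusional impedance terms or surface resistive films dominate the response.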


    Selection of Antiepileptic Drug Polytherapy Based on Mechanisms of Action: The Evidence Reviewed

    EPILEPSIA, Issue 11 2000
    Charles L. P. Deckers
    Summary: Purpose: When monotherapy with antiepileptic drugs (AEDs) fails, combination therapy is tried in an attempt to improve effectiveness by improving efficacy, tolerability, or both. We reviewed the available studies (both animal and human) on AED polytherapy to determine whether AEDs can be selected for combination therapy based on their mechanisms of action, and if so, which combinations are associated with increased effectiveness. Because various designs and methods of analysis were used in these studies, it was also necessary to evaluate the appropriateness of these approaches. Methods: Published papers reporting on AED polytherapy in animals or humans were identified by Medline search and by checking references cited in these papers. Results: Thirty-nine papers were identified reporting on two-drug AED combinations. Several combinations were reported to offer improved effectiveness, but no uniform approach was used in either animal or human studies for the evaluation of pharmacodynamic drug interactions; efficacy was often the only end point. Conclusions: There is evidence that AED polytherapy based on mechanisms of action may enhance effectiveness. In particular, combining a sodium channel blocker with a drug enhancing GABAergic inhibition appears to be advantageous. Combining two GABA mimetic drugs or combining an AMPA antagonist with an NMDA antagonist may enhance efficacy, but tolerability is sometimes reduced. Combining two sodium channel blockers seems less promising. However, given the incomplete knowledge of the pathophysiology of seizures and indeed of the exact mechanisms of action of AEDs, an empirical but rational approach for evaluating AED combinations is of fundamental importance. This would involve appropriate testing of all possible combinations in animal models and subsequent evaluation of advantageous combinations in clinical trials. 
Testing procedures in animals should include the isobologram method, and the concept of drug load should be the basis of studies in patients with epilepsy. [source]
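The isobologram method recommended above for animal studies reduces, in its simplest index form, to comparing the doses in a combination against the doses of each drug alone that produce the same effect; a sketch with hypothetical doses (not data from the review):

```python
def interaction_index(d1, d2, D1, D2):
    """Isobolographic interaction index for a two-drug combination.

    d1, d2 : doses of each drug in the combination that together produce
             the target effect (e.g. an ED50 level of seizure protection)
    D1, D2 : doses of each drug alone producing the same effect
    An index < 1 suggests synergy, about 1 additivity, > 1 antagonism.
    """
    return d1 / D1 + d2 / D2

# Half-doses of two drugs achieving the joint effect: exactly additive.
additive = interaction_index(10, 20, 20, 40)
# Quarter-doses sufficing would indicate synergy (index 0.5).
synergistic = interaction_index(5, 10, 20, 40)
```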


Seismic performance of a 3D full-scale high-ductility steel–concrete composite moment-resisting structure, Part I: Design and testing procedure

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 14 2008
    A. Braconi
Abstract A multi-level pseudo-dynamic (PSD) seismic test programme was performed on a full-scale three-bay two-storey steel–concrete composite moment-resisting frame built with partially encased composite columns and partial-strength connections. The system was designed to provide strength and ductility for earthquake resistance, with energy dissipation located in ductile components of beam-to-column joints, including flexural yielding of beam end-plates and shear yielding of the column web panel zone. In addition, the response of the frame depending on the column base yielding was analysed. Firstly, the design of the test structure is presented in the paper, with particular emphasis on the ductile detailing of beam-to-column joints. Details of the construction of the test structure and the test set-up are also given. The paper then provides a description of the non-linear static and dynamic analytical studies that were carried out to preliminarily assess the seismic performance of the test structure and establish a comprehensive multi-level PSD seismic test programme. The resulting test protocol included the application of a spectrum-compatible earthquake ground motion scaled to four different peak ground acceleration levels to reproduce an elastic response as well as serviceability, ultimate, and collapse limit state conditions, respectively. Severe damage to the building was finally induced by a cyclic test with stepwise increasing displacement amplitudes. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Forced vibration testing of buildings using the linear shaker seismic simulation (LSSS) testing method

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 7 2005
    Eunjong Yu
Abstract This paper describes the development and numerical verification of a test method to realistically simulate the seismic structural response of full-scale buildings. The result is a new field testing procedure referred to as the linear shaker seismic simulation (LSSS) testing method. This test method uses a linear shaker system in which a mass mounted on the structure is driven through a specified acceleration time history, which in turn induces inertial forces in the structure. The inertia force of the moving mass is transferred as dynamic force excitation to the structure. The key issues associated with the LSSS method are (1) determining, for a given ground motion displacement xg, a linear shaker motion which induces a structural response that matches as closely as possible the response of the building if it had been excited at its base by xg (i.e. the motion transformation problem) and (2) correcting the linear shaker motion from Step (1) to compensate for control–structure interaction effects associated with the fact that linear shaker systems cannot impart perfectly to the structure the specified forcing functions (i.e. the CSI problem). The motion transformation problem is solved using filters that modify xg both in the frequency domain using building transfer functions and in the time domain using a least squares approximation. The CSI problem, which is most important near the modal frequencies of the structural system, is solved for the example of a linear shaker system that is part of the NEES@UCLA equipment site. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Toxicity assessment of reference and natural freshwater sediments with the LuminoTox assay

    ENVIRONMENTAL TOXICOLOGY, Issue 4 2006
    P. M. Dellamatrice
    Abstract We examined the possibility of adapting the LuminoTox, a recently-commercialized bioanalytical testing procedure initially developed for aqueous samples, to assess the toxic potential of sediments. This portable fluorescent biosensor uses photosynthetic enzyme complexes (PECs) to rapidly measure photosynthetic efficiency. LuminoTox testing of 14 CRM (Certified Reference Material) sediments was first undertaken with (1) a "solid phase assay" (Lum-SPA) in which PECs are in intimate contact with sediment slurries for a 15 min exposure period and (2) an elutriate assay (Lum-ELU) in which PECs are exposed for 15 min to sediment water elutriates. CRM sediment toxicity data were then compared with those generated with the Microtox Solid Phase Assay (Mic-SPA). A significant correlation (P < 0.05) was shown to exist between Lum-SPA and Mic-SPA, indicating that both tests display a similar toxicity response pattern for CRM sediments having differing contaminant profiles. The sediment elutriate Lum-ELU assay displayed toxicity responses (i.e. measurable IC20s) for eight of the 14 CRM sediments, suggesting that it is capable of determining the presence of sediment contaminants that are readily soluble in an aqueous elutriate. Lum-SPA and Mic-SPA bioassays were further conducted on 12 natural freshwater sediments and their toxicity responses were more weakly, yet significantly, correlated. Finally, Lum-SPA testing undertaken with increasing mixtures of kaolin clay confirmed that its toxicity responses, in a manner similar to those reported for the Mic-SPA assay, are also subject to the influence of grain size. 
While further studies will be required to more fully understand the relationship between Lum-SPA assay responses and the physicochemical makeup of sediments (e.g., grain size, combined presence of natural and anthropogenic contaminants), these preliminary results suggest that LuminoTox testing could be a useful screen to assess the toxic potential of solid media. © 2006 Wiley Periodicals, Inc. Environ Toxicol 21: 395–402, 2006. [source]


    Methodology for Thermomechanical Simulation and Validation of Mechanical Weld-Seam Properties,

    ADVANCED ENGINEERING MATERIALS, Issue 3 2010
Wolfgang Bleck
A simulation and validation of the mechanical properties in submerged-arc-weld seams is presented, which combines numerical simulation of the thermal cycle in the weld using the SimWeld software with an annealing and testing procedure. The weld-seam geometry and thermal profile near the weld seam can be computed based on the simulation of an equivalent heat source describing the energy input and distribution in the weld seam. Defined temperature–time cycles are imposed on tensile specimens allowing for annealing experiments with fast cooling rates. The direct evaluation of welded structures and the simple generation of input data for mechanical simulations in FE software packages are possible. [source]


    Adapting the logical basis of tests for Hardy-Weinberg Equilibrium to the real needs of association studies in human and medical genetics

    GENETIC EPIDEMIOLOGY, Issue 7 2009
    Katrina A. B. Goddard
Abstract The standard procedure to assess genetic equilibrium is a χ² test of goodness-of-fit. As is the case with any statistical procedure of that type, the null hypothesis is that the distribution underlying the data is in agreement with the model. Thus, a significant result indicates incompatibility of the observed data with the model, which is clearly at variance with the aim in the majority of applications: to exclude the existence of gross violations of the equilibrium condition. In current practice, we try to avoid this basic logical difficulty by increasing the significance bound for the P-value (e.g. from 5 to 10%) and inferring compatibility of the data with Hardy-Weinberg Equilibrium (HWE) from an insignificant result. Unfortunately, such direct inversion of a statistical testing procedure fails to produce a valid test of the hypothesis of interest, namely, that the data are in sufficiently good agreement with the model under which the P-value is calculated. We present a logically unflawed solution to the problem of establishing (approximate) compatibility of an observed genotype distribution with HWE. The test is available in one- and two-sided versions. For both versions, we provide tools for exact power calculation. We demonstrate the merits of the new approach through comparison with the traditional χ² goodness-of-fit test in 2×60 genotype distributions from 43 published genetic studies of complex diseases where departure from HWE was noted in either the case or control sample. In addition, we show that the new test is useful for the analysis of genome-wide association studies. Genet. Epidemiol. 33:569–580, 2009. © 2009 Wiley-Liss, Inc. [source]
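The χ² goodness-of-fit test that the abstract critiques is simple to compute for a biallelic marker; a sketch with illustrative genotype counts (and note the authors' point: an insignificant value of this statistic does not by itself establish compatibility with HWE):

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Pearson chi-square goodness-of-fit statistic for HWE (1 df).

    n_aa, n_ab, n_bb: observed counts of the AA, AB and BB genotypes.
    Expected counts come from the estimated allele frequency under
    Hardy-Weinberg proportions (p^2, 2pq, q^2).
    """
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # estimated frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A sample in perfect HWE proportions gives a statistic of zero;
# a heterozygote deficit inflates it.
exact = hwe_chi_square(50, 100, 50)
deficit = hwe_chi_square(60, 80, 60)
```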


    Impact of Grain Size on the Cerchar Abrasiveness Test

    GEOMECHANICS AND TUNNELLING, Issue 1 2008
    Klaus Lassnig Mag.
The Cerchar abrasiveness test is a common testing procedure for the prediction of tool wear, but consistent and detailed recommendations for the testing procedure have been lacking until now. One point of disagreement is the number of scratch tests per sample required to obtain reliable results, depending on the grain size of the samples. The focus of this work was to verify the influence of grain size on the number of required single examinations per sample. Grain size analyses were performed to obtain cumulative grain-size curves for each tested rock sample. From the grain size data, the median and the interquartile range of the grain sizes were calculated. CAI values after 5 and after 10 scratch tests were compared with the median and the interquartile range of the grain size. No grain size dependency of the CAI deviation between 5 and 10 tests was observed in the analysed range. Einfluss der Korngröße auf den Cerchar Abrasivitätstest (German abstract, translated): The Cerchar abrasiveness test is a frequently used index test for determining the abrasiveness of rocks towards drilling tools. Up to now, no uniform and detailed recommendations for performing the test exist, in particular regarding the number of tests to be carried out as a function of the grain size of the rock. There is merely a recommendation that for coarse-grained rocks ten tests should be performed instead of the otherwise usual five. In this work, the influence of grain size on the test result is investigated as a function of the number of tests. For this purpose, the grain sizes of the tested samples were determined, and the statistical parameters median and interquartile range were calculated from the grain-size data. The CAI results after five scratch tests and after ten scratch tests were then compared with the median and the interquartile range of the grain sizes. In the grain-size range investigated, no influence of grain size on the difference between the five-test and ten-test values was observed. 
It can therefore be concluded that, in the grain-size range investigated and contrary to previous assumptions, grain size has no measurable influence on the result of the CAI test. [source]
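The summary statistics the study relies on, the median and interquartile range of the grain sizes and the CAI deviation between 5 and 10 scratch tests, can be sketched as follows (the grain sizes and CAI values below are illustrative, not measurements from the paper):

```python
from statistics import quantiles

def grain_size_summary(sizes_mm):
    """Median and interquartile range (IQR) of measured grain sizes."""
    q1, q2, q3 = quantiles(sizes_mm, n=4, method="inclusive")
    return q2, q3 - q1

def cai_deviation(cai_after_5, cai_after_10):
    """Absolute CAI difference between 5 and 10 scratch tests,
    the quantity the study compares against median and IQR."""
    return abs(cai_after_10 - cai_after_5)

med, iqr = grain_size_summary([1.0, 2.0, 3.0, 4.0, 5.0])
dev = cai_deviation(2.8, 3.1)
```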


    Testing Option Pricing Models with Stochastic Volatility, Random Jumps and Stochastic Interest Rates

    INTERNATIONAL REVIEW OF FINANCE, Issue 3-4 2002
    George J. Jiang
    In this paper, we propose a parsimonious GMM estimation and testing procedure for continuous-time option pricing models with stochastic volatility, random jump and stochastic interest rate. Statistical tests are performed on both the underlying asset return model and the risk-neutral option pricing model. Firstly, the underlying asset return models are estimated using GMM with valid statistical tests for model specification. Secondly, the preference related parameters in the risk-neutral distribution are estimated from observed option prices. Our findings confirm that the implied risk premiums for stochastic volatility, random jump and interest rate are overall positive and varying over time. However, the estimated risk-neutral processes are not unique, suggesting a segmented option market. In particular, the deep ITM call (or deep OTM put) options are clearly priced with higher risk premiums than the deep OTM call (or deep ITM put) options. Finally, while stochastic volatility tends to better price long-term options, random jump tends to price the short-term options better, and option pricing based on multiple risk-neutral distributions significantly outperforms that based on a single risk-neutral distribution. [source]


    Two-dimensional psychophysics in chickens and humans: Comparative aspects of perceptual relativity

    JAPANESE PSYCHOLOGICAL RESEARCH, Issue 4 2008
    PETRA HAUF
Abstract: Whereas the contextual basis of psychophysical responding is well founded, the compound influence of sensory and perceptual frames of reference constitutes a challenging issue in comparative one- and multidimensional psychophysics (e.g., Sarris, 2004, 2006). We refer to previous investigations, which tested the assumption that the chicken's relational choice in the one-dimensional case is systematically altered by context conditions, similar to the findings stemming from human participants. In this paper, the context-dependent stimulus coding was investigated mainly for the important, but largely neglected, two-dimensional case in humans and chickens. Three strategies were predicted for the generalization of size discriminations, which had been learned in a different color context. In two experiments, which varied in the testing procedure, both species demonstrated profound contextual effects in psychophysics; they differed, however, in the way the information from either dimension was used: chickens throughout used color as a cue to separate the respective size discriminations and generalizations, whereas humans predominantly generalized according to size information only or according to absolute stimulus properties; the chickens thus showed some important species-specific differences. Common and heterogeneous findings of this line of comparative research in multidimensional psychophysics are presented and discussed in various ways. [source]


    Identification of Acipenseriformes species in trade

    JOURNAL OF APPLIED ICHTHYOLOGY, Issue 2008
    A. Ludwig
Summary Sturgeons and paddlefishes (Acipenseridae) are highly endangered freshwater fishes. Their eggs (sold as caviar) are one of the most valuable wildlife products in international trade. Concerns of overharvesting and the conservation status of many of the 27 extant species of Acipenseriformes led to all species being included on the CITES Appendices in 1998. Since then international trade in all products and parts from sturgeon and paddlefish has been regulated. However, despite the controls on trade, unsustainable harvesting continues to threaten many populations. Illegal fishing and trade continues to be a threat to the management of these fish. To enforce the regulation of legal trade and prevention of illegal trade, the development of a uniform identification system for parts and derivatives of Acipenseriformes has been identified as an urgent requirement. Ideally this system should be suitable for (i) identification at the species-level of caviar and other products from Acipenseriformes; (ii) population identification; (iii) source identification (wild vs aquaculture); and (iv) determining the age of caviar because strict timeframes govern its international trade. This paper reviews the techniques currently available and their potential to be used in an identification system for Acipenseriformes species and their products in trade. A review of all available identification techniques has shown that there is not a single method that can meet all requirements (see i–iv), and it does not appear to be feasible to develop such a method in the near future; therefore, the most appropriate methods need to be developed for each requirement. Considering the advantages and disadvantages of all techniques reviewed in this document, the following conclusions can be drawn: (i) for the identification of species, approaches are recommended that target mitochondrial cytochrome b sequences (RFLP, nested PCR or direct sequencing). 
However, they show limitations for the detection of hybrids (although natural hybrids are rare, the number of artificially produced hybrids in aquaculture is increasing) and for the differentiation of the following closely related species complexes: Acipenser gueldenstaedti–Acipenser baerii–Acipenser persicus–Acipenser naccarii; Acipenser medirostris–Acipenser mikadoi; and Scaphirhynchus albus–Scaphirhynchus plathorhynchus–Scaphirhynchus suttkusi; (ii) the identification of different populations of the same species is currently not feasible because genetic data are incomplete for most populations, and stocking and release programmes, which have become more and more common, often result in a mixture of phenotypes and genotypes, thereby impeding the creation and application of such a population identification system; (iii) source identification based on genetic approaches can be excluded at present because there are no genetic differences between wild and hatchery-raised fish. This is the result of the continuing restocking of natural populations with captive fish and vice versa. However, because rearing (i.e. environmental) conditions are different, methods focusing on differences in water quality or food seem to be more appropriate (for example, differences in fatty acid composition). So far, very few studies have been conducted and therefore, source identification methods merit further exploration; and (iv) the age of a product in trade cannot be detected by DNA-based methods, and protein profiling is undoubtedly impractical due to hard-to-perform, labour- and cost-intensive methods, which are highly susceptible to protein degradation. Arising from the limits discussed above, the next steps in the development of a uniform sturgeon identification system are proposed to be the following: (i) designation of qualified reference laboratories at national levels in (re-) exporting and importing countries. 
These should be approved through a standardized testing procedure, for instance a ring test on blind samples. Registered laboratories should be published and disseminated and their accreditations should be subject to certain guarantees regarding quality, economic independence and scientific rigour. Operational procedures have to be determined and standardized among reference laboratories; (ii) establishment of reference collections that are accessible to the reference laboratories containing DNA analyses results and information on the location and availability of tissue samples. This is highly recommended as an important step towards a population identification system and indispensable for a general species identification system; (iii) creation of a website access to the reference collections containing the reference database information about genetic samples, comparable to NCBI, which provides background data: sample location; population information; citation; available genetic data; location of archival storage; currently treated and distributed caviar and status of analysis. This website should also be a forum for the exchange of knowledge on and experiences with identification systems, species and population status information, relevant scientific research, etc.; and (iv) the outcome of the trade identification tests should be made available to the reference laboratories for future reference. The universal caviar labelling system could incorporate an indication of the verification of the consignment. In view of the lack of knowledge and the great need to develop a uniform identification system for Acipenseriformes with regard to the importance of the international caviar trade, further scientific guidance and appropriate research is strongly recommended. Progress should be assessed and exchanged on a regular basis. [source]


    Structural Changes in Expected Stock Returns Relationships: Evidence from ASE

    JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 9-10 2006
    Evangelos Karanikas
Abstract: This paper suggests a recursive application of Fama and MacBeth's (1973) testing procedure to assess the significance of macroeconomic factors and firm-specific effects priced in explaining the cross-sectional variation of expected stock returns over time. The paper applies the suggested testing procedure to investigate the sources of risk of the Athens Stock Exchange (ASE). Among the variables examined, it finds that changes in short-term interest rates and firm size can explain a significant proportion of the variation of ASE individual returns. The paper argues that the significance of interest rate changes can be associated with monetary policy changes introduced by the Greek authorities after the mid-nineties. These changes focused on targeting interest rates instead of monetary aggregates. [source]
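The underlying Fama-MacBeth two-pass logic, a cross-sectional regression in each period followed by a t-test on the time-series mean premium, can be sketched with made-up single-factor data (the actual study applies this recursively to macroeconomic factors and firm-specific effects):

```python
from statistics import mean, stdev

def cross_sectional_slope(x, y):
    """OLS slope of returns y on characteristic x for one period."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def fama_macbeth(factor_by_period, returns_by_period):
    """Fama-MacBeth (1973) test: estimate the premium in each period's
    cross-section, then t-test the time-series mean of the premia."""
    gammas = [cross_sectional_slope(x, y)
              for x, y in zip(factor_by_period, returns_by_period)]
    g_bar = mean(gammas)
    t_stat = g_bar / (stdev(gammas) / len(gammas) ** 0.5)
    return g_bar, t_stat

# Two toy periods, three assets each (hypothetical numbers).
premium, t = fama_macbeth([[1, 2, 3], [1, 2, 3]], [[2, 4, 6], [1, 2, 3]])
```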


    Evaluating effectiveness of preoperative testing procedure: some notes on modelling strategies in multi-centre surveys

    JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2008
    Dario Gregori PhD
Abstract Rationale: In technology assessment in health-related fields, the construction of a model for interpreting the economic implications of the introduction of a technology is only a part of the problem. The most important part is often the formulation of a model that can be used for selecting patients to submit to the new cost-saving procedure or medical strategy. The model is usually complicated by the fact that data are often non-homogeneous with respect to some uncontrolled variables and are correlated. The most typical example is the so-called hospital effect in multi-centre studies. Aims and objectives: We show the implications of different choices in modelling strategies when evaluating the usefulness of preoperative chest radiography, an exam performed before surgery, usually with the aim to detect unsuspected abnormalities that could influence the anaesthetic management and/or surgical plan. Method: We analyse the data from a multi-centre study including more than 7000 patients. We use about 6000 patients to fit regression models using both a population-averaged and a subject-specific approach. We explore the limitations of these models when used for predictive purposes using a validation set of more than 1000 patients. Results: We show the importance of taking into account the heterogeneity among observations and the correlation structure of the data and propose an approach for integrating a population-averaged and a subject-specific approach into a single modelling strategy. We find that the hospital represents an important variable causing heterogeneity that influences the probability of a useful POCR. Conclusions: We find that starting with a marginal model, evaluating the shrinkage effect and eventually moving to a more detailed model for the heterogeneity is preferable. This kind of flexible approach seems to be more informative at various phases of the model-building strategy. [source]


    Effect of Implant Angulation upon Retention of Overdenture Attachments

    JOURNAL OF PROSTHODONTICS, Issue 1 2005
Michael P. Gulizio, DMD, MSEd, MSciDent
    Introduction: Overdentures supported and retained by endosteal implants depend upon mechanical components to provide retention. Ball attachments are frequently described because of simplicity and low cost, but retentive capacity of these components may be altered by a lack of implant parallelism. Purpose: The aim of this in vitro study was to investigate the retention of gold and titanium overdenture attachments when placed on ball abutments positioned off-axis. Methods and Materials: Four ball abutments were hand-tightened onto ITI dental implants and placed in an aluminum fixture that allowed positioning of the implants at 0°, 10°, 20°, and 30° from a vertical reference axis. Gold and titanium matrices were then coupled to the ball abutments at various angles and then subjected to pull tests at a rate of 2 mm/second; the peak loads of release (maximum dislodging forces) were recorded and subjected to statistical analyses. A balanced and randomized factorial experimental design testing procedure was implemented. Results: Statistically significant differences in retention of gold matrices were noted when ball abutments were positioned at 20° and 30°, but not at 0° and 10°. Statistically significant differences were noted among the titanium matrices employed for the testing procedure, as well as for the 4 ball abutments tested. Angle was not a factor affecting retention for titanium matrices. Conclusions: (1) The gold matrices employed for the testing procedures exhibited consistent values in retention compared to titanium matrices, which exhibited large variability in retention. (2) Angle had an effect upon the retention of gold matrices, but not for titanium matrices. [source]


    A THURSTONE-COOMBS MODEL OF CONCURRENT RATINGS WITH SENSORY AND LIKING DIMENSIONS

    JOURNAL OF SENSORY STUDIES, Issue 1 2002
    F. GREGORY ASHBY
    ABSTRACT A popular product testing procedure is to obtain sensory intensity and liking ratings from the same consumers. Consumers are instructed to attend to the sensory attribute, such as sweetness, when generating their liking response. We propose a new model of this concurrent ratings task that conjoins a unidimensional Thurstonian model of the ratings on the sensory dimension with a probabilistic version of Coombs' (1964) unfolding model for the liking dimension. The model assumes that the sensory characteristic of the product has a normal distribution over consumers. An individual consumer selects a sensory rating by comparing the perceived value on the sensory dimension to a set of criteria that partitions the axis into intervals. Each value on the rating scale is associated with a unique interval. To rate liking, the consumer imagines an ideal product, then computes the discrepancy or distance between the product as perceived by the consumer and this imagined ideal. A set of criteria are constructed on this discrepancy dimension that partition the axis into intervals. Each interval is associated with a unique liking rating. The ideal product is assumed to have a univariate normal distribution over consumers on the sensory attribute evaluated. The model is shown to account for 94.2% of the variance in a set of sample data and to fit this data significantly better than a bivariate normal model of the data (concurrent ratings, Thurstonian scaling, Coombs' unfolding model, sensory and liking ratings). [source]
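The rating mechanism described above, Thurstonian criteria on the sensory axis and Coombs-style distance-to-ideal criteria on the liking axis, can be sketched deterministically for a single consumer; the cut-points below are illustrative, not fitted values from the paper:

```python
def concurrent_ratings(percept, ideal, sensory_cuts, liking_cuts):
    """(sensory, liking) ratings for one consumer.

    percept      : perceived value on the sensory dimension (Thurstonian)
    ideal        : the consumer's imagined ideal product (Coombs)
    sensory_cuts : ascending criteria partitioning the sensory axis
    liking_cuts  : ascending criteria on the |percept - ideal| axis;
                   a smaller discrepancy yields a higher liking rating
    """
    # Sensory rating: count how many criteria the percept exceeds.
    sensory = 1 + sum(percept > c for c in sensory_cuts)
    # Liking rating: count how many discrepancy criteria are satisfied.
    discrepancy = abs(percept - ideal)
    liking = 1 + sum(discrepancy <= c for c in liking_cuts)
    return sensory, liking

# A product perceived close to the ideal earns the top liking rating;
# one far from it earns the bottom rating.
near = concurrent_ratings(2.0, 2.2, [0.0, 1.0, 3.0], [0.5, 1.0, 2.0])
far = concurrent_ratings(5.0, 0.0, [0.0, 1.0, 3.0], [0.5, 1.0, 2.0])
```

In the full model both percept and ideal would be drawn from normal distributions over consumers; this sketch isolates the criterion logic.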


    Tests for cycling in a signalling pathway

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2004
    T. G. Müller
    Summary., Cellular signalling pathways, mediating receptor activity to nuclear gene activation, are generally regarded as feed forward cascades. We analyse measured data of a partially observed signalling pathway and address the question of possible feed-back cycling of involved biochemical components between the nucleus and cytoplasm. First we address the question of cycling in general, starting from basic assumptions about the system. We reformulate the problem as a statistical test leading to likelihood ratio tests under non-standard conditions. We find that the modelling approach without cycling is rejected. Afterwards, to differentiate two different transport mechanisms within the nucleus, we derive the appropriate dynamical models which lead to two systems of ordinary differential equations. To compare both models we apply a statistical testing procedure that is based on bootstrap distributions. We find that one of both transport mechanisms leads to a dynamical model which is rejected whereas the other model is satisfactory. [source]
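
    The bootstrap-based comparison is, in skeleton form, a parametric bootstrap test: simulate data under the null model, recompute the test statistic each time, and locate the observed statistic in that null distribution. The sketch below uses a toy Gaussian null and an absolute-mean statistic rather than the paper's ODE models.

```python
import random
import statistics

def boot_pvalue(observed_stat, simulate_null, statistic, n_boot=999, seed=0):
    """Parametric bootstrap p-value: the fraction of null-simulated
    statistics at least as extreme as the observed one."""
    rng = random.Random(seed)
    exceed = sum(statistic(simulate_null(rng)) >= observed_stat
                 for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)

# toy null model: 50 mean-zero Gaussian observations;
# test statistic: absolute sample mean
def simulate(rng):
    return [rng.gauss(0.0, 1.0) for _ in range(50)]

def stat(xs):
    return abs(statistics.fmean(xs))

data = [x + 1.0 for x in simulate(random.Random(42))]  # data with a real shift
p_val = boot_pvalue(stat(data), simulate, stat)
```

    The `+ 1` terms guard against a zero p-value, a standard correction for permutation and bootstrap tests.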


    Clinical trials: healing of erosive oesophagitis with dexlansoprazole MR, a proton pump inhibitor with a novel dual delayed-release formulation – results from two randomized controlled studies

    ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 7 2009
    P. SHARMA
    Summary. Background: Dexlansoprazole MR employs a dual delayed-release delivery system that extends drug exposure and prolongs pH control compared with lansoprazole. Aim: To assess the efficacy and safety of dexlansoprazole MR in healing erosive oesophagitis (EO). Methods: Patients in two identical double-blind, randomized controlled trials (n = 4092) received dexlansoprazole MR 60 or 90 mg or lansoprazole 30 mg once daily. Week 8 healing was assessed using a closed testing procedure – first for non-inferiority, then superiority, vs. lansoprazole. Secondary endpoints included week 4 healing and week 8 healing in patients with moderate-to-severe disease (Los Angeles Classification grades C and D). Life-table and crude rate analyses were performed. Symptoms and tolerability were assessed. Results: Dexlansoprazole MR achieved non-inferiority to lansoprazole, allowing testing for superiority. Using life-table analysis, dexlansoprazole MR healed 92–95% of patients in individual studies vs. 86–92% for lansoprazole; the differences were not statistically significant (P > 0.025). Using crude rate analysis, dexlansoprazole MR 90 mg was superior to lansoprazole in both studies and 60 mg was superior in one study. Week 4 healing was >64% with all treatments in both studies. In an integrated analysis of 8-week healing in patients with moderate-to-severe EO, dexlansoprazole MR 90 mg was superior to lansoprazole. All treatments effectively relieved symptoms and were well tolerated. Conclusion: Dexlansoprazole MR is highly effective in healing EO and offers benefits over lansoprazole, particularly in moderate-to-severe disease. [source]
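
    The closed testing step – superiority is examined only after non-inferiority is established, which preserves the overall Type I error level – can be sketched with a Wald confidence interval for a difference in healing rates. The counts and the 10-percentage-point margin below are illustrative assumptions, not the trial's actual data or margin.

```python
import math

def two_prop_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% confidence interval for the difference in proportions p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

def closed_test(x_new, n_new, x_ref, n_ref, margin=-0.10):
    """Closed testing: declare superiority only if non-inferiority holds first."""
    lo, _hi = two_prop_ci(x_new, n_new, x_ref, n_ref)
    noninferior = lo > margin            # CI lower bound excludes the margin
    superior = noninferior and lo > 0.0  # tested only after non-inferiority
    return noninferior, superior

# illustrative counts: 940/1000 healed on the new drug vs. 880/1000 on the reference
result_better = closed_test(940, 1000, 880, 1000)
result_similar = closed_test(880, 1000, 940, 1000)
```

    In the first call the interval lies entirely above zero, so both non-inferiority and superiority are declared; in the second the interval crosses zero but stays above the margin, so only non-inferiority holds.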


    Laboratory fretting tests with thin wire specimens

    LUBRICATION SCIENCE, Issue 2 2007
    M.A. Urchegui
    Abstract Wire ropes, owing to their construction and application, are prone to fretting damage. To understand the wear behaviour of individual wires under fretting conditions, laboratory tests are required. The present work describes preliminary fretting tests carried out with thin steel wires to optimise the testing procedure. The tests were performed in a 'crossed-cylinders' configuration, varying the stroke and normal load. Afterwards, the fretted surfaces were characterised by means of optical and scanning electron microscopy and a diamond stylus. No significant influence of the selected parameters was detected, and a good correlation was found between on-line measured parameters and off-line obtained values. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Testing procedure to obtain reliable potentiodynamic polarization curves on type 310S stainless steel in alkali carbonate melts

    MATERIALS AND CORROSION/WERKSTOFFE UND KORROSION, Issue 4 2006
    S. Frangini
    Abstract Potentiodynamic polarization measurements have been employed to evaluate the anodic behavior of a type 310S stainless steel in the eutectic Li/K molten carbonate. In general, the electrochemical tests yield useful information for predicting the stability of the oxide films formed on the surface during the initial period of corrosion, although some precaution is required in the testing procedure, as the reproducibility of the results is adversely affected by the passage of large currents. Especially when the steel is in a passive state, erratic results are easily observed if the corrosion layer is damaged by uncontrolled large currents. This is because the acid-base properties of the melt are susceptible to profound changes by applied currents in the milliampere range, resulting in hysteresis phenomena in the polarization plot. Hysteresis is caused, on the one hand, by acidic dissolution of the passive layer at high anodic currents and, on the other hand, by increased melt basicity due to oxide ion build-up at high cathodic currents. An optimized testing procedure is therefore suggested that minimizes these effects by imposing a 2 mA/cm2 threshold current during polarization measurements. Moreover, the conditions for the applicability of the linear polarization technique to estimate kinetic parameters have been discussed in relation to the corrosion mechanisms analysed by impedance spectra. It is concluded that the presence of diffusional impedance terms and the formation of surface resistive films in molten carbonates may lead to unreliable polarization resistance values obtained with the linear polarization technique. [source]


    Procedure for the determination of the complete R-dependency of the crack growth behaviour with only one specimen

    MATERIALWISSENSCHAFT UND WERKSTOFFTECHNIK, Issue 9 2007
    A. Tesch Dr.
    fatigue crack growth; Kmax tests; threshold; Al 2524-T351 Abstract A new concept for fatigue crack propagation testing makes it possible to determine, with a single specimen, fatigue crack growth curves (da/dN–ΔK curves) for any stress ratio over the entire range from R = 0.9 to R = -1. In addition, the concept provides the thresholds of the stress intensity factor range, ΔKth, as a function of R and Kmax. In combination with a method for continuous crack length measurement, such as the direct current potential drop method, this testing procedure requires very little personnel effort and time. The testing concept consists of a sequence of Kmax-constant tests with decreasing crack growth rates. As the applied Kmax increases stepwise, there should be no load history effects. The data obtained agree very well with results of da/dN–ΔK tests carried out on multiple specimens according to ASTM Standard E 647, and the tests fulfil all requirements of that standard. [source]


    Phylogeographical structure, postglacial recolonization and barriers to gene flow in the distinctive Valais chromosome race of the common shrew (Sorex araneus)

    MOLECULAR ECOLOGY, Issue 4 2002
    N. Lugon-Moulin
    Abstract Using one male-inherited and eight biparentally inherited microsatellite markers, we investigate the population genetic structure of the Valais chromosome race of the common shrew (Sorex araneus) in the Central Alps of Europe. Unexpectedly, the Y-chromosome microsatellite suggests a nearly complete absence of male gene flow among populations from the St-Bernard and Simplon regions (Switzerland). Autosomal markers also show significant genetic structuring between these two geographical areas. Isolation by distance is significant, and possible barriers to gene flow exist in the study area. Two different approaches are used to better understand the geographical patterns and the causes of this structuring. Using a principal component analysis for which a testing procedure exists, and partial Mantel tests, we show that the St-Bernard pass does not represent a significant barrier to gene flow although it culminates at 2469 m, close to the highest altitudinal record for this species. Similar results are found for the Simplon pass, indicating that both passes represented potential postglacial recolonization routes into Switzerland from Italian refugia after the last Pleistocene glaciations. In contrast with the weak effect of these mountain passes, the Rhône valley lowlands significantly reduce gene flow in this species. Natural obstacles (the large Rhône river) and unsuitable habitats (dry slopes) are both present in the valley. Moreover, anthropogenic changes to landscape structures are likely to have strongly reduced the available habitats for this shrew in the lowlands, thereby promoting genetic differentiation of populations found on opposite sides of the Rhône valley. [source]
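
    A plain (non-partial) Mantel test of the kind used to relate distance matrices can be sketched as a permutation test: shuffle the population labels of one matrix and recompute the matrix correlation to build a null distribution. This is a generic illustration with invented toy matrices, not the study's partial-Mantel analysis.

```python
import random

def upper(m):
    """Upper-triangle entries of a symmetric distance matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def mantel(dist_a, dist_b, n_perm=999, seed=0):
    """Permute the row/column labels of dist_a to build the null distribution."""
    rng = random.Random(seed)
    r_obs = pearson(upper(dist_a), upper(dist_b))
    n = len(dist_a)
    count = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        shuffled = [[dist_a[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        if pearson(upper(shuffled), upper(dist_b)) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# toy data: six populations on a line; the two distance matrices agree perfectly
n = 6
dist_geo = [[abs(i - j) for j in range(n)] for i in range(n)]
dist_gen = [[2 * abs(i - j) for j in range(n)] for i in range(n)]
r_obs, p_val = mantel(dist_geo, dist_gen)
```

    Whole rows and columns are permuted together because the individual cells of a distance matrix are not independent observations.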


    The Likelihood Ratio Test for the Rank of a Cointegration Submatrix,

    OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2006
    Paolo Paruolo
    Abstract This paper proposes a likelihood ratio test for rank deficiency of a submatrix of the cointegrating matrix. Special cases of the test include the test of invalid normalization in systems of cointegrating equations, the feasibility of permanent–transitory decompositions and of subhypotheses related to neutrality and long-run Granger noncausality. The proposed test has a chi-squared limit distribution and indicates the validity of the normalization with probability one in the limit, for valid normalizations. The asymptotic properties of several derived estimators of the rank are also discussed. It is found that a testing procedure that starts from the hypothesis of minimal rank is preferable. [source]


    Feasibility of Spinal Cord Stimulation in Angina Pectoris in Patients with Chronic Pacemaker Treatment for Cardiac Arrhythmias

    PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 11 2003
    OLOF EKRE
    Spinal cord stimulation (SCS) has been used since 1985 as additional symptom-relieving treatment for patients with severe angina pectoris despite optimal conventional medical and invasive treatment. SCS has antiischemic effects and is safe and effective in long-term use. Several patients with coronary artery disease also suffer from disorders that necessitate the use of a cardiac permanent pacemaker (PPM). The combination of SCS and PPM has previously been considered hazardous because of possible false inhibition of the PPM. To assess if thoracic SCS and PPM can be safely combined in patients with refractory angina pectoris, 18 patients treated with both SCS and PPM were tested. The PPM settings were temporarily modified to increase the probability of interference, while the SCS intensity (used in bipolar mode) was increased to the maximum level tolerated by the patient. Any sign of inhibition of the ventricular pacing was recorded by continuous ECG monitoring. With the aid of a questionnaire, symptoms of interference during long-term treatment were evaluated. No patient had signs of inhibition during the tests. Reprogramming of the pacemaker because of the test results was not needed in any of the patients. The long-term follow-up data revealed no serious events. This study indicates that bipolar SCS and PPM can be safely combined in patients with refractory angina pectoris. However, individual testing is mandatory to ascertain safety in each patient. A testing procedure for patients in need of SCS and PPM is suggested in this article. (PACE 2003; 26:2134–2141) [source]


    Novel method for testing the grease resistance of pet food packaging

    PACKAGING TECHNOLOGY AND SCIENCE, Issue 2 2002
    J. Lange
    Abstract For paper-based dry pet food packaging, one of the main requirements is a high resistance against staining from the fat in the product. For both development and quality control, rapid and reliable standardized test procedures assessing this property are needed. Although a number of tests are available, they either apply only to certain types of packaging materials and show limited correlation with field behaviour, or employ non-standard testing substances, long testing times and complicated equipment. In response to this situation, a new testing procedure that reflects field behaviour but without the drawbacks of the existing tests has been developed. The new test shows high reproducibility and good correlation with field performance for a wide range of multiwall bag and folding box materials with different types of grease resistance treatment. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    In vitro quantification of barley reaction to leaf stripe

    PLANT BREEDING, Issue 5 2003
    M. I. E. Arabi
    Abstract An in vitro technique was used to quantify the infection level of leaf stripe in barley caused by Pyrenophora graminea. This pathogen penetrates rapidly through the subcrown internodes during seed germination of susceptible cultivars. Quantification was based on the percentage of subcrown internode pieces that produced fungal hyphae when cultured on potato dextrose agar media. Disease severity was evaluated among five cultivars with different infection levels, and numerical values for each cultivar were obtained. A significant correlation coefficient (r = 0.91, P < 0.02) was found between the in vitro and field assessments. In addition, the results were highly correlated (r = 0.94, P < 0.01) among the different in vitro experiments, indicating that this testing procedure is reliable. The method presented facilitates rapid preselection under uniform conditions, which is of importance from a breeder's point of view. Significant differences (P < 0.001) were found in the length of subcrown internodes between plants inoculated with leaf stripe and non-inoculated plants. Isolate SY3 was the most effective in reducing subcrown internode length for all genotypes. [source]


    In vitro quantification of the reaction of barley to common root rot

    PLANT BREEDING, Issue 5 2001
    M. I. Arabi
    Abstract An in vitro technique was used to quantify the infection level of common root rot. This disease produces a brown to black discoloration of the subcrown internodes of barley. Quantification was based on the percentage of germinated infected pieces (1.5 mm) of subcrown internodes cultured on potato dextrose agar media. The disease severity was apparent among four different visually classified categories and numerical values for each category were applied. The results were highly correlated (r = 0.97, P < 0.01) among the different in vitro experiments, indicating that this testing procedure is repeatable. Highly significant differences (P < 0.001) were found for the length of first leaf and fresh weight between plants inoculated and uninoculated with common root rot. However, the effect of inoculation on fresh weight only differed significantly (P < 0.02) among the genotypes. [source]


    Punctuated Equilibrium, Bureaucratization, and Budgetary Changes in Schools

    POLICY STUDIES JOURNAL, Issue 1 2004
    Scott E. Robinson
    For half a century, models of nonrational behavior have grown in popularity for explaining the behavior of administrative organizations. However, models of nonrational behavior are notoriously difficult to test because nonrational behavior is often difficult to separate from fully rational behavior. Recent research has suggested that particular types of nonrational processes should produce "punctuated" equilibria rather than "instantaneous" equilibria. In these nonrational processes, a decision maker underresponds to changes for a long period of time. Once pressure for change becomes overwhelming, the decision maker adopts a radical change. This is called "punctuation." The key to identifying this type of nonrationality is the comparative success of fitting the observed behavior to "punctuated" rather than "instantaneous" equilibria. True, Jones, and Baumgartner (1999) developed a method for comparing the distribution of decision outputs as a strategy for assessing the relative degree of "punctuation" in decision processes. By assessing the kurtosis (or "peakedness") of the distribution of decision outputs, one can get a sense of the excess (compared with a standard normal distribution) of low and high rates of change, a sign of punctuated equilibrium. This article extends these recent developments by adapting the method to a comparative kurtosis framework. The results suggest that bureaucracy in K–12 schools serves to reduce (rather than amplify) the punctuations in budgeting processes. The article concludes with a discussion of potential extensions of the empirical results and modifications to the testing procedure. [source]
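
    The kurtosis diagnostic can be illustrated directly: a punctuated output series (mostly tiny changes plus rare large jumps) shows positive excess kurtosis, while a smoothly incremental series of comparable spread is platykurtic. The two toy series below are invented for illustration and are not budget data from the study.

```python
def excess_kurtosis(xs):
    """Sample excess kurtosis: 4th standardized moment minus 3 (0 for a normal)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

# punctuated: many near-zero budget changes with two large jumps
punctuated = [0.1, -0.1, 0.05, -0.05, 0.0, 0.1, -0.1, 0.05, -0.05, 0.0, 8.0, -8.0]
# incremental: evenly spread changes of comparable overall range
incremental = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]

k_punctuated = excess_kurtosis(punctuated)
k_incremental = excess_kurtosis(incremental)
```

    Comparing such kurtosis values across distributions of decision outputs is the core of the comparative framework the abstract describes.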


    Testing dispersion effects from general unreplicated fractional factorial designs

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2001
    P. C. Wang
    Abstract Continuous improvement of the quality of industrial products is an essential factor in modern-day manufacturing. The investigation of those factors that affect process mean and process dispersion (standard deviation) is an important step in such improvements. Most often, experiments are executed for such investigations. To detect mean factors, I use the usual analysis of variance on the experimental data. However, there is no unified method to identify dispersion factors. In recent years several methods have been proposed for identifying such factors with two levels. Multilevel factors, especially three-level factors, are common in industrial experiments, but we lack methods for identifying dispersion effects in multilevel factors. In this paper, I develop a method for identifying dispersion effects from general fractional factorial experiments. This method consists of two stages. The first stage involves the identification of mean factors using the performance characteristic as the response. The second stage involves the computation of a dispersion measure and the identification of dispersion factors using the dispersion measure as the response. The sequence for identifying dispersion factors is first to test the significance of the total dispersion effect of a factor, then to test the dispersion contrasts of interest, which is a method similar to the typical post hoc testing procedure in the ANOVA analysis. This familiar approach should be appealing to practitioners. Copyright © 2001 John Wiley & Sons, Ltd. [source]
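
    The two-stage idea – analyse location first, then analyse a dispersion measure as its own response – can be sketched for a toy two-level factor. The data and the log-sample-variance dispersion measure below are illustrative stand-ins for the paper's general multilevel method.

```python
import math
import statistics

def log_variance(values):
    """Dispersion measure for one factor-level group: log of the sample variance."""
    return math.log(statistics.variance(values))

# toy data collapsed into two levels of a factor A:
# both levels share the same mean, but level 2 is far more variable
level1 = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]
level2 = [12.0, 8.0, 10.5, 9.5, 11.5, 8.5]

# stage 1: compare means (location effect of factor A)
mean_effect = statistics.fmean(level2) - statistics.fmean(level1)

# stage 2: compare the dispersion measure across levels (dispersion effect)
dispersion_effect = log_variance(level2) - log_variance(level1)
```

    Here factor A has essentially no mean effect but a large dispersion effect, the situation the two-stage procedure is designed to detect; in a real analysis each stage would be a full ANOVA rather than a single contrast.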