Existing Methods
Selected Abstracts

Isotropic Remeshing with Fast and Exact Computation of Restricted Voronoi Diagram
COMPUTER GRAPHICS FORUM, Issue 5 2009. Dong-Ming Yan.
We propose a new isotropic remeshing method based on Centroidal Voronoi Tessellation (CVT). Constructing a CVT requires repeatedly computing the Restricted Voronoi Diagram (RVD), defined as the intersection between a 3D Voronoi diagram and an input mesh surface. Existing methods use approximations of the RVD. In this paper, we introduce an efficient algorithm that computes the RVD exactly and robustly. As a consequence, we achieve better remeshing quality than approximation-based approaches, without sacrificing efficiency. Our method for RVD computation uses a simple procedure and a kd-tree to quickly identify and compute the intersection of each triangle face with its incident Voronoi cells. Its time complexity is O(m log n), where n is the number of seed points and m is the number of triangles of the input mesh. Fast convergence of the CVT is achieved using a quasi-Newton method, which proved much faster than Lloyd's iteration. Examples are presented to demonstrate the better quality of remeshing results with our method than with state-of-the-art approaches. [source]
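The O(m log n) bound above comes from answering a nearest-seed query for each triangle with a kd-tree. Below is a minimal sketch of that ingredient only (not the authors' code; the seed and triangle data are invented), using SciPy's cKDTree:

```python
# A minimal sketch of the kd-tree step of RVD computation: for each mesh
# triangle, find the nearest Voronoi seed, which identifies the Voronoi cell
# incident to that triangle and seeds the exact intersection computation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
seeds = rng.random((1000, 3))            # n Voronoi seed points (invented)
triangles = rng.random((5000, 3, 3))     # m triangles, 3 vertices each

tree = cKDTree(seeds)                    # build once: O(n log n)
centroids = triangles.mean(axis=1)       # one query point per triangle
_, nearest_seed = tree.query(centroids)  # m queries: O(m log n) total
# nearest_seed[i] indexes the Voronoi cell incident to triangle i, the
# starting cell for computing the exact triangle/cell intersection.
```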
Independence and nonindependence: A simple method for comparing groups using multiple measures and the binomial test
EUROPEAN JOURNAL OF SOCIAL PSYCHOLOGY, Issue 2 2005. Craig McGarty.
Existing methods for conducting analyses of small group data are either highly complicated or yield low power. Both of these limitations provide disincentives for the progress of research in this field. An alternative method, modelled on the sign (binomial) test, which involves comparing the differences of distributions based on multiple observations of each of the groups, is presented. The calculations involved in the procedure are extremely simple. It is suggested that, because the method enhances researchers' ability to make sound statistical inferences easily, it should stimulate research on group-level processes and on social interaction more generally. Copyright © 2004 John Wiley & Sons, Ltd. [source]
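The sign-test logic of the McGarty abstract is simple enough to show directly. A hedged sketch with invented data, using SciPy's binomtest: for each measure, record which group scores higher, then test the count of "wins" against chance:

```python
# Sign (binomial) test across multiple measures on two groups.
# The data here are invented for illustration.
from scipy.stats import binomtest

group_a = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]
group_b = [3.9, 4.7, 4.0, 5.1, 4.1, 4.8, 4.0, 4.6]

wins_a = sum(a > b for a, b in zip(group_a, group_b))
ties = sum(a == b for a, b in zip(group_a, group_b))
n = len(group_a) - ties                 # ties are conventionally dropped

result = binomtest(wins_a, n=n, p=0.5)  # H0: each group equally likely to win
print(f"{wins_a}/{n} measures favour group A, p = {result.pvalue:.3f}")
```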
Time Controlled Protein Release from Layer-by-Layer Assembled Multilayer Functionalized Agarose Hydrogels
ADVANCED FUNCTIONAL MATERIALS, Issue 2 2010. Sumit Mehrotra.
Axons of the adult central nervous system exhibit an extremely limited ability to regenerate after spinal cord injury. Experimentally generated patterns of axon growth are typically disorganized and randomly oriented. Support of linear axonal growth into spinal cord lesion sites has been demonstrated using arrays of uniaxial channels, templated with agarose hydrogel and containing genetically engineered cells that secrete brain-derived neurotrophic factor (BDNF). However, immobilizing neurotrophic-factor-secreting cells within a scaffold is relatively cumbersome, and alternative strategies are needed to provide sustained release of BDNF from templated agarose scaffolds. Existing methods of loading the drug or protein into hydrogels cannot provide sustained release from templated agarose hydrogels. Alternatively, it is shown here that pH-responsive, H-bonded poly(ethylene glycol) (PEG)/poly(acrylic acid) (PAA)/protein hybrid layer-by-layer (LbL) thin films, when prepared over agarose, provide sustained release of protein under physiological conditions for more than four weeks. Lysozyme, a protein similar in size and isoelectric point to BDNF, is released from the multilayers on the agarose and is biologically active at earlier time points, with decreasing activity at later time points. This is the first demonstration of month-long sustained protein release from an agarose hydrogel in which the drug/protein is loaded separately from the agarose hydrogel fabrication process. [source]

Haplotype association analysis for late onset diseases using nuclear family data
GENETIC EPIDEMIOLOGY, Issue 3 2006. Chun Li.
In haplotype-based association studies for late onset diseases, one attractive design is to use available unaffected spouses as controls (Valle et al. [1998] Diab. Care 21:949–958). Given cases and spouses only, the standard expectation-maximization (EM) algorithm (Dempster et al. [1977] J. R. Stat. Soc. B 39:1–38) for case-control data can be used to estimate haplotype frequencies. But often we will have offspring for at least some of the spouse pairs, and offspring genotypes provide additional information about the haplotypes of the parents. Existing methods may either ignore the offspring information, or reconstruct haplotypes for the subjects using offspring information and discard data from those whose haplotypes cannot be reconstructed with high confidence. Neither of these approaches is efficient, and the latter approach may also be biased. For case-control data with some subjects forming spouse pairs and offspring genotypes available for some spouse pairs or individuals, we propose a unified, likelihood-based method of haplotype inference. The method makes use of available offspring genotype information to apportion ambiguous haplotypes for the subjects. For subjects without offspring genotype information, haplotypes are apportioned as in the standard EM algorithm for case-control data. Our method enables efficient haplotype frequency estimation using an EM algorithm and supports probabilistic haplotype reconstruction with the probability calculated based on the whole sample. We describe likelihood ratio and permutation tests to test for disease-haplotype association, and describe three test statistics that are potentially useful for detecting such an association. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
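The standard case-control EM algorithm that the Li abstract builds on can be illustrated in a toy two-locus setting. The sketch below is not the paper's method (which additionally apportions haplotypes using offspring genotypes); it is the plain EM step for unphased genotypes, where only double heterozygotes have ambiguous phase:

```python
# Toy EM for haplotype frequencies at two biallelic loci. Each subject is a
# pair of genotypes (0/1/2 = count of allele 'A', resp. 'B'); only the double
# heterozygote (1, 1) is ambiguous between phases AB/ab and Ab/aB.
from collections import Counter

def em_haplotypes(genotypes, n_iter=100):
    freqs = {h: 0.25 for h in ("AB", "Ab", "aB", "ab")}
    for _ in range(n_iter):
        counts = Counter()
        for g1, g2 in genotypes:
            if (g1, g2) == (1, 1):
                # E-step: split the double heterozygote between the two
                # possible phases in proportion to current frequencies.
                w = freqs["AB"] * freqs["ab"]
                v = freqs["Ab"] * freqs["aB"]
                counts["AB"] += w / (w + v)
                counts["ab"] += w / (w + v)
                counts["Ab"] += v / (w + v)
                counts["aB"] += v / (w + v)
            else:
                # Phase is unambiguous; read both haplotypes directly.
                h1 = ("A" if g1 > 0 else "a") + ("B" if g2 > 0 else "b")
                h2 = ("A" if g1 == 2 else "a") + ("B" if g2 == 2 else "b")
                counts[h1] += 1
                counts[h2] += 1
        total = 2 * len(genotypes)      # M-step: renormalise
        freqs = {h: counts[h] / total for h in freqs}
    return freqs

# Invented data: mostly unambiguous genotypes plus two double heterozygotes.
data = [(2, 2), (2, 1), (1, 2), (0, 0), (1, 1), (1, 1), (2, 0), (0, 1)]
print(em_haplotypes(data))
```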
A Stable and Efficient Numerical Algorithm for Unconfined Aquifer Analysis
GROUND WATER, Issue 4 2009. Elizabeth Keating.
The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well. [source]

Building channel networks for flat regions in digital elevation models
HYDROLOGICAL PROCESSES, Issue 20 2009. Hua Zhang.
Digital elevation models (DEMs) are data sources for distributed rainfall–runoff modelling in terms of providing the channel network for a watershed of interest. Assigning flow directions over flat regions is an important issue in the field of DEM processing and extraction of drainage features. Existing methods cannot fully incorporate the information of known drainage features and the terrain surrounding the flat region. This study presents a hydrological correction method that integrates topographic information from different sources to interpolate a convergent surface. It employs radial basis function interpolation to determine the elevation increment at every position, utilizes data from the digital channel network, incorporates elevation in the surrounding terrain, and ensures a convergent channel network while minimizing the impact of correction on the original DEM. The method can be easily implemented in a geographic information system (GIS) environment. It was applied to the DEM of the Heshui Watershed, China. The extracted channel network was visually inspected and quantitatively assessed by analysing the flow direction raster. Results showed that the channel network generated by the hydrological correction was consistent with the known drainage features and contained fewer parallel channels compared with the results from two existing methods. Copyright © 2009 John Wiley & Sons, Ltd. [source]
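The core interpolation step of the Zhang method can be sketched with SciPy's RBFInterpolator. Everything below (control points, increments, grid) is invented for illustration; the idea is that radial basis functions spread a small elevation increment across the flat region so that the corrected surface drains toward the channel cells:

```python
# Radial-basis-function correction of a flat DEM region (illustrative only).
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points: zero increment along the channel cells, positive increments
# at the flat region's edge where the surrounding terrain is higher.
control_xy = np.array([[0.0, 5.0], [10.0, 5.0],    # channel cells: no lift
                       [5.0, 0.0], [5.0, 10.0]])   # edge cells: lifted
control_dz = np.array([0.0, 0.0, 0.5, 0.5])

rbf = RBFInterpolator(control_xy, control_dz, kernel="thin_plate_spline")

# Evaluate the interpolated increment at every cell of the flat region, then
# add it to the original elevations to obtain a convergent surface.
gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
flat_cells = np.column_stack([gx.ravel(), gy.ravel()])
dz = rbf(flat_cells).reshape(gx.shape)   # correction to add to the DEM
```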
On generalization of constitutive models from two dimensions to three dimensions
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 17 2008. N. Khalili.
In this paper, a study is made of the generalization of constitutive models for geomaterials from two-dimensional stress and strain states to three-dimensional stress and strain states. Existing methods of model generalization are reviewed and their deficiencies are highlighted. A new method is proposed based on the geometries of the model imprints on two normal planes. Using the proposed method, various three-dimensional failure criteria suitable for geomaterials are implemented directly into a two-dimensional model, and the generalized model is identical to its original form for the axially symmetric condition. To demonstrate the application of the proposed method, the Modified Cam Clay model is extended using the Matsuoka–Nakai failure criterion. Simulations of soil behaviour for loading in the principal stress space are presented and analysed. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Theoretical analysis for achieving high-order spatial accuracy in Lagrangian/Eulerian source terms
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2006. David P. Schmidt.
In a fully coupled Lagrangian/Eulerian two-phase calculation, the source terms from computational particles must be agglomerated to nearby gas-phase nodes. Existing methods are capable of accomplishing this particle-to-gas coupling with second-order accuracy. However, higher-order methods would be useful for applications such as two-phase direct numerical simulation and large eddy simulation. A theoretical basis is provided for producing high spatial accuracy in particle-to-gas source terms with low computational cost. The present work derives fourth- and sixth-order accurate methods, and the procedure for achieving even higher accuracy is discussed. The theory is also expanded to include two- and three-dimensional calculations. One- and two-dimensional tests are used to demonstrate the convergence of this method and to highlight problems with statistical noise. Finally, the potential for application in computational fluid dynamics codes is discussed. It is concluded that high-order kernels have practical benefits only under limited ranges of statistical and spatial resolution. Additionally, convergence demonstrations with full CFD codes will be extremely difficult due to the worsening of statistical errors with increasing mesh resolution. Copyright © 2006 John Wiley & Sons, Ltd. [source]
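For the Schmidt abstract, the second-order baseline it improves upon is easy to show in one dimension: each particle's source is split between its two bracketing nodes with linear "hat" weights. The paper's fourth- and sixth-order kernels widen this stencil; the sketch below (with invented particle data) shows only the second-order baseline:

```python
# 1D particle-to-node source agglomeration with linear (second-order) weights.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)            # uniform gas-phase grid
dx = nodes[1] - nodes[0]
source = np.zeros_like(nodes)

particles_x = np.array([0.13, 0.42, 0.77])   # particle positions (invented)
particles_s = np.array([1.0, 2.0, 0.5])      # source strength per particle

for x, s in zip(particles_x, particles_s):
    i = int(x / dx)                 # left bracketing node
    w = (x - nodes[i]) / dx         # fractional distance past node i
    source[i] += s * (1.0 - w)      # linear "hat" weights: second order
    source[i + 1] += s * w

print(source)
```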
A Quantitative Polymerase Chain Reaction Assay for the Detection of Polyscytalum pustulans, the Cause of Skin Spot Disease of Potato
JOURNAL OF PHYTOPATHOLOGY, Issue 3 2009. A. K. Lees.
Skin spot disease of potato, caused by the pathogen Polyscytalum pustulans, is likely to become more important with the withdrawal of 2-aminobutane as a fungicide, and new methods of control will need to be found. As part of a disease control strategy, it will be necessary to study the disease in more detail, to utilize host resistance and to identify stocks where problems are likely to arise. Existing methods for the detection and quantification of P. pustulans are time-consuming and require specific expertise. Real-time PCR assays have been developed for many pathogens of potato and have subsequently been used as tools for the study of the epidemiology and control of disease. The development of a real-time PCR assay for the detection and quantification of P. pustulans is described. The specificity of the assay was demonstrated, and detection was shown to be reliable at levels as low as 20–250 fg/µl DNA (equivalent to 60–680 pg DNA/g) in soil, and on symptomless tubers at attogram (ag) levels. These values are in line with previously developed tests. [source]

The perceived expressed emotion in staff scale
JOURNAL OF PSYCHIATRIC & MENTAL HEALTH NURSING, Issue 1 2003. J. Forster.
Recent research has highlighted the role of expressed emotion by ward staff in determining the well-being of psychiatric inpatients. Existing methods of assessing staff expressed emotion involve standardized interviews and are expensive and time-consuming. We report the development of a questionnaire measure of expressed emotion in staff as perceived by patients. In study 1, factor analysis of items administered to patients in a variety of settings led to the development of a questionnaire with three subscales: supportiveness, criticism and intrusiveness. In study 2, the test–retest reliability of the questionnaire was found to be adequate, and some evidence of concurrent validity for the scale was obtained against expressed emotion rated from staff speech samples. In study 3, the scale was shown to have good concurrent validity against the Ward Atmosphere Scale, and scores were found to be independent of insight or experience of admission to hospital. The perceived expressed emotion in staff scale is a convenient measure, which may have utility for both research and clinical purposes. [source]

Threshold optimization for weighted voting classifiers
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2003. G. Levitin.
Weighted voting classifiers considered in this paper consist of N units, each providing an individual classification decision. The entire system output is based on tallying the weighted votes for each decision and choosing the one whose total support weight exceeds a certain threshold. Each individual unit may abstain from voting. The entire system may also abstain from voting if no decision support weight exceeds the threshold. Existing methods of evaluating the reliability of weighted voting systems can be applied only to limited special cases of these systems and impose restrictions on their parameters. In this paper a universal generating function method is suggested which allows the reliability of weighted voting classifiers to be evaluated exactly without imposing constraints on unit weights. Based on this method, the classifier reliability is determined as a function of a threshold factor, and a procedure is suggested for finding the threshold that minimizes the cost of damage caused by classifier failures (misclassification and abstention may carry different costs). Dynamic and static threshold voting rules are considered and compared. A method for analysing the influence of units' availability on the entire classifier reliability is suggested, and illustrative examples are presented. © 2003 Wiley Periodicals, Inc. Naval Research Logistics 50: 322–344, 2003. [source]
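The voting rule analysed in the Levitin abstract can be sketched as follows. This is an illustration, not the paper's universal generating function machinery; in particular, reading the threshold as a fraction of the total weight cast (the "threshold factor") is an assumption, and all weights and votes are invented:

```python
# Weighted voting with unit abstention and a system-level threshold.
from collections import defaultdict

def weighted_vote(votes, weights, threshold):
    """votes[i] is a class label or None (abstention) for unit i."""
    tally = defaultdict(float)
    for v, w in zip(votes, weights):
        if v is not None:
            tally[v] += w
    if not tally:
        return None                     # every unit abstained
    best, support = max(tally.items(), key=lambda kv: kv[1])
    total = sum(tally.values())
    # The system abstains unless the winning weight clears the threshold,
    # here interpreted as a fraction of the total weight cast.
    return best if support > threshold * total else None

print(weighted_vote(["A", "B", "A", None], [0.4, 0.3, 0.2, 0.1], 0.6))
```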
Stop test or pressure-flow study? Measuring detrusor contractility in older females
NEUROUROLOGY AND URODYNAMICS, Issue 3 2004.
Aims: Impaired detrusor contractility is common in older adults. One aspect, detrusor contraction strength during voiding, can be measured by the isovolumetric detrusor pressure attained if flow is interrupted mechanically (a stop test). Because interruption is awkward in practice, however, simple indices or nomograms based on measurements made during uninterrupted voiding are an appealing alternative. We investigated whether such methods, originally developed for males, might be applicable in female subjects, and attempted to identify a single best method. Methods: We compared stop-test isovolumetric pressures with estimates based on pressure-flow studies in a group of elderly women suffering from urge incontinence. Measurements were made pre- and post-treatment with placebo or oxybutynin, allowing investigation of test–retest reliability and responsiveness to small changes of contractility. Results: Existing methods of estimating detrusor contraction strength from pressure-flow studies, including the Schäfer contractility nomogram and the projected isovolumetric pressure PIP, greatly overestimate the isovolumetric pressure in these female patients. A simple modification provides a more reliable estimate, PIP1, equal to pdet.Qmax + Qmax (with pressure in cmH2O and Qmax in ml/sec). Typically PIP1 ranges from 30 to 75 cmH2O in this population of elderly urge-incontinent women. PIP1, however, is less responsive to a small change in contraction strength than the isovolumetric pressure measured by mechanical interruption. Conclusions: The parameter PIP1 is simple to calculate from a standard pressure-flow study and may be useful for clinical assessment of detrusor contraction strength in older females. For research, however, a mechanical stop test still remains the most reliable and responsive method. The Schäfer contractility nomogram and related parameters such as DECO and BCI are not suitable for use in older women. Neurourol. Urodynam. 23:184–189, 2004. © 2004 Wiley-Liss, Inc. [source]
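Since the abstract states PIP1 explicitly, a worked example is a one-liner; the input values below are invented but fall in the reported range:

```python
# PIP1 = pdet.Qmax + Qmax (pressure in cmH2O, flow in ml/sec), as given above.
pdet_qmax = 45.0         # detrusor pressure at maximum flow, cmH2O (invented)
qmax = 12.0              # maximum flow rate, ml/sec (invented)

pip1 = pdet_qmax + qmax  # projected isovolumetric pressure, cmH2O
print(f"PIP1 = {pip1:.0f} cmH2O")  # 30-75 cmH2O is typical in this cohort
```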
Capturing Government Policy on the Left–Right Scale: Evidence from the United Kingdom, 1956–2006
POLITICAL STUDIES, Issue 4 2009. Armèn Hakhverdian.
The left–right scheme is the most widely used and parsimonious representation of political competition. Yet long time series of the left–right position of governments are sparse. Existing methods are of limited use in dynamic settings due to insufficient time points, which hinders the proper specification of time-series regressions. This article analyses legislative speeches in order to construct an annual left–right policy variable for Britain from 1956 to 2006. Using a recently developed content analysis tool, known as Wordscores, it is shown that speeches yield valid and reliable estimates for the left–right position of British government policy. Long time series such as the one proposed in this article are vital to building dynamic macro-level models of politics. This measure is cross-validated with four independent sources: (1) it compares well to expert surveys; (2) a rightward trend is found in post-war British government policy; (3) Conservative governments are found to be more right wing in their policy outputs than Labour governments; (4) conventional accounts of British post-war politics support the pattern of government policy movement on the left–right scale. [source]
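The Wordscores technique the article relies on (Laver, Benoit and Garry's content analysis method) can be sketched compactly: reference texts with known left–right positions induce a score per word, and a new text is scored by the mean score of its words. The reference texts and positions below are invented:

```python
# Compact Wordscores-style sketch (illustrative, not the article's pipeline).
from collections import Counter

references = {-1.0: "tax the rich fund public services",
              +1.0: "cut tax free the market shrink the state"}

# P(word | reference) per reference text.
ref_freqs = {pos: Counter(text.split()) for pos, text in references.items()}
ref_probs = {pos: {w: c / sum(f.values()) for w, c in f.items()}
             for pos, f in ref_freqs.items()}

# Word score = mean of reference positions, weighted by P(reference | word).
vocab = set().union(*(f for f in ref_freqs.values()))
word_scores = {}
for w in vocab:
    p = {pos: ref_probs[pos].get(w, 0.0) for pos in references}
    total = sum(p.values())
    word_scores[w] = sum(pos * pw / total for pos, pw in p.items())

def score_text(text):
    """Position of a new ('virgin') text: mean score of its known words."""
    words = [w for w in text.split() if w in word_scores]
    return sum(word_scores[w] for w in words) / len(words)

print(score_text("cut tax and shrink the state"))   # leans right (> 0)
```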
A strict solution for the optimal superimposition of protein structures
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 3 2004. Chuanbo Chen.
Existing methods for the optimal superimposition of one vector set on another, used in the comparison of parts or the whole of related protein molecules, are based on the precondition that the centroids of the two sets are coincident. As a result, the translation components of the transformation are artificially removed from the superimposition process. This is obviously not strict in the mathematical sense. The theorem presented in this paper is a strict solution for the optimal superimposition of two vector sets, which is in fact the problem of the weighted optimal rigid superimposition of two vector sets. Examples show its advantages compared with the method of simply coinciding the centroids of the two vector sets for the translation transformation. [source]
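For context, the conventional SVD-based (Kabsch-type) solution of the weighted superimposition problem, including the translation via weighted centroids, looks as follows. This is a sketch of the standard approach the paper scrutinizes, not the paper's own theorem:

```python
# Weighted rigid superimposition by SVD: find R, t minimising
# sum_i w_i |R @ X_i + t - Y_i|^2.
import numpy as np

def weighted_superimpose(X, Y, w):
    w = w / w.sum()
    xc = (w[:, None] * X).sum(axis=0)          # weighted centroids
    yc = (w[:, None] * Y).sum(axis=0)
    H = (w[:, None] * (X - xc)).T @ (Y - yc)   # weighted covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, yc - R @ xc

# Check on invented data: rotate and translate X, then recover the transform.
X = np.random.default_rng(1).random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = weighted_superimpose(X, Y, np.ones(len(X)))
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))
```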
How to reach the 'hard-to-reach': the development of Participatory Geographic Information Systems (P-GIS) for inclusive urban design in UK cities
AREA, Issue 2 2010. Steve Cinderby.
Sustainable development and successful urban regeneration ideally require engagement with the affected communities. Existing methods employed by policymakers and planners often fail to reach significant segments of communities, the so-called 'hard-to-reach'. This paper describes the development of an innovative participatory GIS methodology specifically aimed at overcoming the barriers to engagement experienced by these groups. The application of the method is illustrated with reference to three recent case studies carried out in UK cities. The paper then discusses the novelty of this approach in comparison with other participatory engagement techniques. The ethical implications of the technique are also discussed. [source]

Robust Joint Modeling of Longitudinal Measurements and Competing Risks Failure Time Data
BIOMETRICAL JOURNAL, Issue 1 2009. Ning Li.
Existing methods for joint modeling of longitudinal measurements and survival data can be highly influenced by outliers in the longitudinal outcome. We propose a joint model for the analysis of longitudinal measurements and competing risks failure time data which is robust in the presence of outlying longitudinal observations during follow-up. Our model consists of a linear mixed effects sub-model for the longitudinal outcome and a proportional cause-specific hazards frailty sub-model for the competing risks data, linked together by latent random effects. Instead of the usual normality assumption for measurement errors in the linear mixed effects sub-model, we adopt a t-distribution, which has longer tails and thus is more robust to outliers. We derive an EM algorithm for the maximum likelihood estimates of the parameters and estimate their standard errors using a profile likelihood method. The proposed method is evaluated by simulation studies and is applied to a scleroderma lung study. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

A Multilevel Model for Continuous Time Population Estimation
BIOMETRICS, Issue 3 2009. Jason M. Sutherland.
Statistical methods have been developed and applied to estimating populations that are difficult or too costly to enumerate. Known as multilist methods in epidemiological settings, individuals are matched across lists, and estimation of population size proceeds by modeling counts in incomplete multidimensional contingency tables (based on patterns of presence/absence on lists). As multilist methods typically assume that lists are compiled instantaneously, there are few options available for estimating the unknown size of a closed population based on continuously (longitudinally) compiled lists. However, in epidemiological settings, continuous time lists are a routine byproduct of administrative functions. Existing methods are based on time-to-event analyses with a second step of estimating population size. We propose an alternative approach to address the twofold epidemiological problem of estimating population size and identifying patient factors related to the duration (in days) between visits to a health care facility. A Bayesian framework is proposed to model interval lengths because, for many patients, the data are sparse; many patients were observed only once or twice. The proposed method is applied to the motivating data to illustrate its applicability. Then, a small simulation study explores the performance of the estimator under a variety of conditions. Finally, a short discussion suggests opportunities for continued methodological development for continuous time population estimation. [source]

Likelihood Analysis for the Ratio of Means of Two Independent Log-Normal Distributions
BIOMETRICS, Issue 2 2002. Jianrong Wu.
Existing methods for comparing the means of two independent skewed log-normal distributions do not perform well in a range of small-sample settings, such as a small-sample bioavailability study. In this article, we propose two likelihood-based approaches, the signed log-likelihood ratio statistic and the modified signed log-likelihood ratio statistic, for inference about the ratio of means of two independent log-normal distributions. More specifically, we focus on obtaining p-values for testing the equality of means and also constructing confidence intervals for the ratio of means. The performance of the proposed methods is assessed through simulation studies, which show that the modified signed log-likelihood ratio statistic is nearly an exact approach even for very small samples. The methods are also applied to two real-life examples. [source]

Directable animation of elastic bodies with point-constraints
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008. Ryo Kondo.
We propose a simple framework for making elastic body animation with point constraints. In general, a physics-based approach to constraint animation offers a variety of animations with physically correct realism, achieved by solving the equations of motion. However, in the digital animation industry, solving the equations of motion is an indirect path to creating more art-directed animations that maintain a plausible realism. Our algorithms provide animators a practical way to make elastic body animations with plausible realism, while effectively using point constraints to offer directorial control. The animation examples illustrate that our framework creates a wide variety of point-constraint animations of elastic objects with greater directability than existing methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]

A framework for quad/triangle subdivision surface fitting: Application to mechanical objects
COMPUTER GRAPHICS FORUM, Issue 1 2007. Guillaume Lavoué.
In this paper we present a new framework for subdivision surface approximation of three-dimensional models represented by polygonal meshes. Our approach, particularly suited for mechanical or Computer Aided Design (CAD) parts, produces a mixed quadrangle-triangle control mesh, optimized in terms of face and vertex numbers while remaining independent of the connectivity of the input mesh. Our algorithm begins with a decomposition of the object into surface patches. The main idea is to approximate the region boundaries first and then the interior data. Thus, for each patch, a first step approximates the boundaries with subdivision curves (associated with control polygons) and creates an initial subdivision surface by linking the boundary control points with respect to the lines of curvature of the target surface. A second step then optimizes the initial subdivision surface by iteratively moving control points and enriching regions according to the error distribution. The final control mesh defining the whole model is then created by assembling the local subdivision control meshes. This control polyhedron is much more compact than the original mesh and visually represents the same shape after several subdivision steps; hence it is particularly suitable for compression and visualization tasks. Experiments conducted on several mechanical models demonstrate the coherency and efficiency of our algorithm compared with existing methods. [source]

Speed Estimation from Single Loop Data Using an Unscented Particle Filter
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2010. Zhirui Ye.
The Kalman filters used in past speed estimation studies rely on a Gaussian assumption that is rarely satisfied in practice. A hybrid method that combines a parametric filter (the Unscented Kalman Filter) and a nonparametric filter (the Particle Filter) is thus proposed to overcome the limitations of the existing methods. To illustrate the advantage of the proposed approach, two data sets collected from field detectors, along with a simulated data set, are utilized for performance evaluation and comparison with the Extended Kalman Filter and the Unscented Kalman Filter. It is found that the proposed method outperforms the evaluated Kalman filter methods. The UPF method produces accurate speed estimation even for congested flow conditions, in which many other methods have significant accuracy problems. [source]

Active target particle swarm optimization
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2008. Ying-Nan Zhang.
We propose active target particle swarm optimization (APSO). APSO uses a new three-target velocity updating formula based on the best previous position, the global best position, and a new target position (called the active target). In this study, we distinguish APSO from EPSO (extended PSO) and PSOPC (PSO with passive congregation) by the different methods of obtaining the active target. EPSO and PSOPC are the two existing methods that use a three-target velocity updating formula, and both derive the third (active) target from positions already obtained by the swarm. The proposed APSO instead obtains the active target using the complex method, so the active target does not belong to the existing positions. We find that APSO has the advantages of jumping out of local optima and keeping diversity; however, it also has the disadvantage of some extra computational expense. The experimental results show the competitive performance of APSO when compared with PSO, EPSO, and PSOPC. Copyright © 2007 John Wiley & Sons, Ltd. [source]
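The three-target velocity update shared by APSO, EPSO and PSOPC has the shape sketched below; what distinguishes APSO is how the active target is produced (via the complex method, not reproduced here). The coefficients and the stand-in active target are invented:

```python
# Schematic three-target PSO velocity update (illustration, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.6      # inertia and acceleration weights

def update_velocity(v, x, pbest, gbest, active):
    r1, r2, r3 = rng.random(3)
    return (w * v
            + c1 * r1 * (pbest - x)     # cognitive pull (best previous position)
            + c2 * r2 * (gbest - x)     # social pull (global best position)
            + c3 * r3 * (active - x))   # third, "active" target

x = np.array([0.2, 0.8]); v = np.zeros(2)
pbest, gbest = np.array([0.3, 0.6]), np.array([0.5, 0.5])
active = np.array([0.4, 0.4])           # stand-in for the complex-method point
v = update_velocity(v, x, pbest, gbest, active)
x = x + v
```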
Evaluation of the skin sensitizing potency of chemicals by using the existing methods and considerations of relevance for elicitation
CONTACT DERMATITIS, Issue 1 2005. David A. Basketter.
The Technical Committee of Classification and Labelling, dealing with harmonized classification of substances and classification criteria under Directive 67/548/EEC on behalf of the European Commission, nominated an expert group on skin sensitization in order to investigate further the possibility of potency consideration of skin sensitizers for future development of the classification criteria. All substances and preparations should be classified on the basis of their intrinsic properties and should be labelled accordingly with the rules set up in the Directive 67/548/EEC. The classification should be the same throughout their full life cycle, and in the case that there is no harmonized classification, the substance or preparation should be self-classified by the manufacturer in accordance with the same criteria. The Directive does not apply to certain preparations in the finished state, such as medical products, cosmetics, food and feeding stuffs, which are subject to specific Community legislation. The main questions answered in this report are whether it would be possible to give detailed guidance on how to grade allergen potency based on the existing methods, whether such grading could be translated into practical thresholds, and whether these could be set for both induction and elicitation. Examples are given of substances falling into various potency groups for skin sensitization, relating to results from the local lymph node assay, the guinea pig maximization test, the Buehler method and human experience. [source]

How correct is the EOS of weakly nonideal hydrogen plasmas?
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 5-6 2003. A. N. Starostin.
Helioseismology opens a new possibility to check the EOS of weakly nonideal hydrogen plasmas with high precision, using local sound velocities reconstructed to within 10^-4 accuracy. A comparison of different theoretical models with experiment permits verification of the existing methods of calculating the bound-state and continuum contributions to the second virial coefficient within the framework of the physical picture. A regular way of deducing the expression for the EOS is presented, and a generalization of the EOS to broad atomic states and the two-temperature non-equilibrium case is proposed. (© 2003 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Determination of bankfull discharge magnitude and frequency: comparison of methods on 16 gravel-bed river reaches
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2006. O. Navratil.
Bankfull discharge is identified as an important parameter for studying river morphology, sediment motion, flood dynamics and their ecological impacts. In practice, the determination of this discharge and its hydrological characteristics is not easy, and a choice has to be made between several existing methods. To evaluate the impact of the choice of methods, five bankfull elevation definitions and four hydrological characterizations (determination of duration and frequency of exceedance applied to instantaneous or mean daily data) were compared on 16 gravel-bed river reaches located in France (catchment sizes vary from 10 km2 to 1700 km2). The consistency of bankfull discharge estimated at the reach scale and the hydraulic significance of the five elevation definitions were examined. The morphological definitions (Bank Inflection, Top of Bank) were found more relevant than the definitions based on a geometric criterion. The duration of exceedance was preferred to recurrence intervals (partial duration series approach) because it is not limited by the independence of flood events, especially for low discharges like those associated with the Bank Inflection definition. On average, the impact of the choice of methods was very important for the bankfull discharge magnitude (a factor of 1.6 between Bank Inflection and Top of Bank) and for the duration of exceedance or frequency (factors of 1.8 and 1.9, respectively, between mean daily and instantaneous discharge data). The choice of one combination of methods rather than another can significantly modify the conclusions of a comparative analysis in terms of bankfull discharge magnitude and its hydrological characteristics, so one must be cautious when comparing results from different studies that use different methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Estimating Incumbency Effects in U.S. State Legislatures: A Quasi-Experimental Study
ECONOMICS & POLITICS, Issue 2 2010. Yogesh Uppal.
This paper estimates incumbency effects in elections to the House of Representatives in 45 U.S. states using a quasi-experimental research method, the regression discontinuity design (RDD). This design isolates the causal effect of incumbency from other contemporaneous factors, such as candidate quality, by comparing incumbents and non-incumbents in close contests. I find that incumbents in state legislative elections have a significant advantage, and this advantage serves as a strong barrier to the re-entry of challengers who had previously been defeated. However, the incumbency advantage estimated using the RDD is much smaller than the estimates from existing methods, implying a significant selection bias in the latter. [source]
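The regression discontinuity logic of the Uppal paper can be sketched with simulated data: the winning margin is the running variable, and the incumbency effect is the jump in the fitted outcome at the zero cutoff. The data, bandwidth and outcome variable below are invented for illustration:

```python
# Sharp regression discontinuity sketch: local linear fits either side of the cutoff.
import numpy as np

rng = np.random.default_rng(42)
margin = rng.uniform(-0.5, 0.5, 2000)   # winning margin at election t (running variable)
effect = 0.08                           # true jump at the cutoff (invented)
next_vote = (0.5 + 0.3 * margin + effect * (margin > 0)
             + rng.normal(0, 0.05, 2000))

h = 0.1                                 # bandwidth around the cutoff
left = (margin < 0) & (margin > -h)
right = (margin >= 0) & (margin < h)

b_left = np.polyfit(margin[left], next_vote[left], 1)
b_right = np.polyfit(margin[right], next_vote[right], 1)
jump = np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)
print(f"estimated incumbency effect at the cutoff: {jump:.3f}")
```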
Improved workup for glycosaminoglycan disaccharide analysis using CE with LIF detection
ELECTROPHORESIS, Issue 22 2008. Alicia M. Hitchcock.
This work describes improved workup and instrumental conditions to enable robust, sensitive glycosaminoglycan (GAG) disaccharide analysis from complex biological samples. In the process of applying CE with LIF to GAG disaccharide analysis in biological samples, we have made improvements to existing methods. These include (i) optimization of reductive amination conditions, (ii) improvement in sensitivity through the use of a cellulose cleanup procedure for the derivatization, and (iii) optimization of separation conditions for robustness and reproducibility. The improved method enables analysis of disaccharide quantities as low as 1 pmol prior to derivatization. Biological GAG samples were exhaustively digested using lyase enzymes, and the disaccharide products and standards were derivatized with the fluorophore 2-aminoacridone and subjected to reversed-polarity CE-LIF detection. These conditions resolved all known chondroitin sulfate (CS) disaccharides, or 11 of 12 standard heparin/heparan sulfate disaccharides, using 50 mM phosphate buffer, pH 3.5, and reversed polarity at 30 kV with 0.3 psi pressure. Relative standard deviations in migration times of CS ranged from 0.1 to 2.0% over 60 days, and the relative standard deviations of peak areas were less than 3.2%, suggesting that the method is reproducible and precise. The CS disaccharide compositions are similar to those obtained by our group using tandem MS. The reversed-polarity CE-LIF disaccharide analysis protocol yields baseline resolution and quantification of heparin/heparan sulfate and CS/dermatan sulfate disaccharides from both standard preparations and biologically relevant proteoglycan samples. The improved CE-LIF method enables disaccharide quantification of biologically relevant proteoglycans from small samples of intact tissue. [source]

A novel approach for analysis of oligonucleotide–cisplatin interactions by continuous elution gel electrophoresis coupled to isotope dilution inductively coupled plasma mass spectrometry and matrix-assisted laser desorption/ionization mass spectrometry
ELECTROPHORESIS, Issue 7 2008. Wolfram Brüchert.
In this work we present a novel approach for in vitro studies of cisplatin interactions with 8-mer oligonucleotides. The approach is based on the recently developed coupling of continuous elution gel electrophoresis (GE) to an inductively coupled plasma-sector field mass spectrometer (ICP-SFMS), with the aim of monitoring the interaction process between this cytostatic drug and the nucleotides. In contrast to existing methods, the electrophoretic separation conditions used here allow both the determination of the reaction kinetics in more detail and the observation of dominant intermediates. Two different nucleotide sequences have been investigated for comparison purposes, one containing two adjacent guanines (5′-TCCGGTCC-3′) and one with a combination of thymine and guanine (5′-TCCTGTCC-3′). In order to gain further structural information, MALDI-TOF MS measurements were performed after fraction collection. This allows identification of the intermediates and the final products and confirms the stepwise coordination of cisplatin via monoadduct to bisadduct formation. Furthermore, the ICP-MS results were quantitatively evaluated in order to calculate the kinetics of the entire process. [source]