Methods Lead (methods + lead)

Selected Abstracts


bcl-2-specific siRNAs restore Gemcitabine sensitivity in human pancreatic cancer cells

JOURNAL OF CELLULAR AND MOLECULAR MEDICINE, Issue 2 2007
Kinya Okamoto
Abstract Gemcitabine has been shown to ameliorate disease-related symptoms and to prolong overall survival in pancreatic cancer. Yet, resistance to Gemcitabine is commonly observed in this tumour entity and has been linked to increased expression of anti-apoptotic bcl-2. We therefore investigated whether, and to what extent, silencing of bcl-2 by specific siRNAs (siBCL2) might enhance Gemcitabine effects in human pancreatic carcinoma cells. siBCL2 was transfected into the pancreatic cancer cell line YAP C alone and 72 hrs before co-incubation with different concentrations of Gemcitabine. Total protein and RNA were extracted for Western blot analysis and quantitative polymerase chain reaction. Pancreatic cancer xenografts in male nude mice were treated intraperitoneally with siBCL2 alone, Gemcitabine and control siRNA, or Gemcitabine and siBCL2 for 21 days. The combination of both methods led to a synergistic induction of apoptosis at otherwise ineffective concentrations of Gemcitabine. Tumour growth suppression was also potentiated by the combined treatment with siBCL2 and Gemcitabine in vivo and led to increased TUNEL positivity. In contrast, non-transformed human foreskin fibroblasts showed only minor responses to this treatment. Our results demonstrate that siRNA-mediated silencing of anti-apoptotic bcl-2 enhances chemotherapy sensitivity in human pancreatic cancer cells in vitro and might lead to improved therapy responses in advanced stages of this disease. [source]


Extraction of mobile element fractions in forest soils using ammonium nitrate and ammonium chloride

JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 3 2008
Alexander Schöning
Abstract The extraction of alkaline earth and alkali metals (Ca, Mg, K, Na), heavy metals (Mn, Fe, Cu, Zn, Cd, Pb), and Al by 1 M NH4NO3 and 0.5 M NH4Cl was compared for soil samples (texture: silt loam, clay loam) with a wide range of pH(CaCl2) and organic carbon (OC) from a forest area in western Germany. For each of these elements, close and highly significant correlations were observed between the results of both methods in organic and mineral soil horizons. The contents of the base cations were almost convertible one-to-one. However, for all heavy metals NH4Cl extracted clearly larger amounts, mainly due to their tendency to form soluble chloro complexes with chloride ions from the NH4Cl solution. This tendency is most pronounced for Cd, Pb, and Fe, but also influences the results for Mn and Zn. For Cd and Mn, and to a lesser degree also for Pb, Fe, and Zn, the effect of the chloro complexes shows a significant pH dependency. Especially for Cd, but also for Pb, Fe, Mn, and Zn, the agreement between both methods increased when pH(CaCl2) values and/or contents of OC were taken into account. In comparison to NH4Cl, NH4NO3 proved to be chemically less reactive and, thus, more suitable for the extraction of comparable fractions of mobile heavy metals. Since both methods lead to similar and closely correlated results with regard to base cations and Al, the use of NH4NO3 is also recommended for the extraction of mobile/exchangeable alkali, alkaline earth, and Al ions in soils and for the estimation of their contribution to the effective cation-exchange capacity (CEC). Consequently, we suggest determining the mobile/exchangeable fraction of all elements using the NH4NO3 method. However, the applicability of the NH4NO3 method to other soils still needs to be investigated. [source]
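The near one-to-one convertibility of the base cations versus the larger NH4Cl-extractable amounts of heavy metals can be summarized by a correlation coefficient and a regression slope between paired results. A minimal Python sketch of that comparison, using invented Cd contents purely for illustration (not data from the study):

```python
# Hedged sketch: comparing paired extraction results from two extractants.
# The Cd contents (mg/kg) below are hypothetical illustration values.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def ols_slope(x, y):
    """Least-squares slope of y on x: a slope near 1 would indicate
    one-to-one convertibility; a slope well above 1, that the second
    extractant mobilizes more of the element."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical paired measurements: NH4NO3- vs NH4Cl-extractable Cd
nh4no3 = [0.10, 0.22, 0.35, 0.51, 0.80]
nh4cl  = [0.18, 0.40, 0.66, 0.95, 1.52]

r = pearson(nh4no3, nh4cl)
slope = ols_slope(nh4no3, nh4cl)
print(f"r = {r:.3f}, slope = {slope:.2f}")  # high r, slope well above 1
```

A high correlation with a slope far from 1 is exactly the pattern the abstract describes for the heavy metals: closely related, but not directly convertible.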


Construction and Optimality of a Special Class of Balanced Designs

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2006
Stefano Barone
Abstract The use of balanced designs is generally advisable in experimental practice. In technological experiments, balanced designs optimize the exploitation of experimental resources, whereas in marketing research experiments they avoid erroneous conclusions caused by the misinterpretation of interviewed customers' responses. In general, the balancing property assures the minimum variance of first-order effect estimates. In this work the authors consider situations in which all factors are categorical and a minimum run size is required. In the symmetrical case, it is often possible to find an economical balanced design by algebraic methods. Conversely, in the asymmetrical case algebraic methods lead to expensive designs, and it is therefore necessary to adopt heuristic methods. The existing methods implemented in widespread statistical packages do not guarantee the balancing property, as they are designed to pursue other optimality criteria. To deal with this problem, the authors recently proposed a new method to generate balanced asymmetrical designs aimed at estimating first- and second-order effects. Because the run size is reduced as much as possible, orthogonality cannot be guaranteed; however, the method yields designs that approach orthogonality as closely as possible (near-orthogonality). A collection of designs with two- and three-level factors and run sizes below 100 was prepared. In this work an empirical study was conducted to understand how much is lost, in terms of other optimality criteria, when balance is pursued. To show the potential applications of these designs, an illustrative example is provided. Copyright © 2006 John Wiley & Sons, Ltd. [source]
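Both the balancing property and near-orthogonality are easy to check mechanically for a given design table. A minimal Python sketch on an illustrative six-run fragment with three categorical factors (this design is invented for the example; it is not one of the authors' catalogued designs):

```python
# Hedged sketch: checking balance and (near-)orthogonality of a
# categorical design. The six-run design below is illustrative only.
from collections import Counter
from itertools import combinations

design = [
    (0, 0, 0), (0, 1, 1), (0, 0, 2),
    (1, 1, 0), (1, 0, 1), (1, 1, 2),
]

def is_balanced(design, factor):
    """A factor is balanced if every one of its levels occurs equally often."""
    counts = Counter(run[factor] for run in design)
    return len(set(counts.values())) == 1

def orthogonality_gap(design, f1, f2):
    """0 means the two factors are orthogonal (all level combinations
    equally frequent); larger values mean further from orthogonality.
    Absent level combinations count as frequency 0."""
    levels1 = {run[f1] for run in design}
    levels2 = {run[f2] for run in design}
    counts = Counter((run[f1], run[f2]) for run in design)
    freqs = [counts[(a, b)] for a in levels1 for b in levels2]
    return max(freqs) - min(freqs)

for f in range(3):
    print(f"factor {f} balanced: {is_balanced(design, f)}")
for f1, f2 in combinations(range(3), 2):
    print(f"factors {f1},{f2} gap: {orthogonality_gap(design, f1, f2)}")
```

In this fragment all three factors are balanced, two factor pairs are orthogonal, and one pair has a gap of 1, i.e. it is only near-orthogonal, which is the trade-off the abstract describes.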


A New Method for Choosing Sample Size for Confidence Interval-Based Inferences

BIOMETRICS, Issue 3 2003
Michael R. Jiroutek
Summary. Scientists often need to test hypotheses and construct corresponding confidence intervals. In designing a study to test a particular null hypothesis, traditional methods lead to a sample size large enough to provide sufficient statistical power. In contrast, traditional methods based on constructing a confidence interval lead to a sample size likely to control the width of the interval. With either approach, a sample size so large as to waste resources or introduce ethical concerns is undesirable. This work was motivated by the concern that existing sample size methods often make it difficult for scientists to achieve their actual goals. We focus on situations which involve a fixed, unknown scalar parameter representing the true state of nature. The width of the confidence interval is defined as the difference between the (random) upper and lower bounds. An event width is said to occur if the observed confidence interval width is less than a fixed constant chosen a priori. An event validity is said to occur if the parameter of interest is contained between the observed upper and lower confidence interval bounds. An event rejection is said to occur if the confidence interval excludes the null value of the parameter. In our opinion, scientists often implicitly seek to have all three occur: width, validity, and rejection. New results illustrate that neglecting rejection or width (and less so validity) often provides a sample size with a low probability of the simultaneous occurrence of all three events. We recommend considering all three events simultaneously when choosing a criterion for determining a sample size. We provide new theoretical results for any scalar (mean) parameter in a general linear model with Gaussian errors and fixed predictors. Convenient computational forms are included, as well as numerical examples to illustrate our methods. [source]
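The interplay of the three events can be made concrete with a small simulation. The Python sketch below estimates the probability that width, validity, and rejection occur simultaneously for a one-sample z-interval on a normal mean; all numbers (true mean, variance, sample size, width bound) are invented for illustration and are not taken from the paper:

```python
# Hedged sketch: Monte Carlo estimate of P(width & validity & rejection)
# for a one-sample z-interval with known sigma. Illustrative numbers only.
import random
import statistics

random.seed(1)
mu_true, sigma, mu_null = 0.5, 1.0, 0.0
n, delta, z = 30, 0.8, 1.96      # delta: a priori bound on interval width
trials, joint = 20_000, 0

for _ in range(trials):
    xs = [random.gauss(mu_true, sigma) for _ in range(n)]
    m = statistics.fmean(xs)
    half = z * sigma / n ** 0.5
    lo, hi = m - half, m + half
    width     = (hi - lo) < delta           # interval narrower than delta
    validity  = lo <= mu_true <= hi         # interval covers the true mean
    rejection = not (lo <= mu_null <= hi)   # interval excludes the null value
    joint += width and validity and rejection

print(f"P(width & validity & rejection) ~ {joint / trials:.3f}")
```

With these numbers the width event always occurs (the z-interval width is fixed), yet the joint probability sits well below the 95% coverage level because rejection fails in a nontrivial fraction of samples, illustrating why a sample size chosen for one event alone can shortchange the other two.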


Concentration of Wax-like Alcohol Ethoxylates with Supercritical Propane

CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 6 2007
C. E. Schwarz
Abstract The virtual insolubility of synthetic wax can be overcome by ethoxylating the wax with a polyethylene glycol segment to form an alcohol ethoxylate. Current methods lead to a wide ethylene oxide distribution. This work shows that supercritical propane can fractionate alcohol ethoxylates according to their ethylene oxide content. Solubility measurements in propane at 408 K show total solubility of an alcohol mixture (average 40 carbon atoms) below 140 bar and a region of immiscibility between 0.975 and 0.5 at pressures below 275 bar for the alcohol ethoxylate mixture (average 40 carbon atoms, 50% ethoxylation). This large solubility difference shows that fractionation of the alcohol ethoxylate with propane is possible. Countercurrent pilot plant runs proved that separation is possible and that, with energy integration and selection of the correct decompression process, the technology is both technically and economically viable. [source]


Comparison of near-infrared emission spectroscopy and the Rancimat method for the determination of oxidative stability

EUROPEAN JOURNAL OF LIPID SCIENCE AND TECHNOLOGY, Issue 1 2007
Fabiano B. Gonzaga
Abstract This work presents a comparison between a new method for the determination of the oxidative stability of edible oils at frying temperatures, based on near-infrared emission spectroscopy (NIRES), and the Rancimat method at 110 °C. In the NIRES-based method, the induction time (IT) is determined from the variation of the emission band at 2900 nm during heating at 160 °C. The comparison between the IT values obtained with the two methods for 12 samples of edible oils shows some correlation for samples of the same type, since the methods agree on the ordering from highest to lowest IT values, but a poor correlation across all samples (correlation coefficient of 0.78). This lack of correlation demonstrates that the results obtained with the Rancimat method cannot be used as an indication of the oxidative stability, or the resistance to degradation, of edible oils at frying temperatures. The difference in the heating temperatures used in the two methods leads to 20–36 times higher IT values for the Rancimat method relative to the NIRES-based method, but with similar repeatabilities (2.0 and 2.8%, respectively). [source]
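Induction times of this kind are read off a monitored curve once the signal departs from its baseline. A hedged Python sketch of one simple threshold-based variant, run on a synthetic curve (neither NIRES nor Rancimat data; the baseline length and threshold factor are arbitrary choices):

```python
# Hedged sketch: a simple threshold estimate of an induction time (IT).
# Real instruments often use a tangent-intersection construction instead;
# this is a minimal illustrative variant on synthetic data.
def induction_time(times, signal, n_base=5, k=5.0):
    """Return the first time the signal exceeds the baseline mean by
    k times the baseline spread; None if it never does."""
    base = signal[:n_base]
    mean = sum(base) / n_base
    spread = (max(base) - min(base)) or 1e-9   # guard against a flat baseline
    for t, s in zip(times, signal):
        if s > mean + k * spread:
            return t
    return None

# Synthetic curve: flat baseline, then an accelerating rise
times  = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
signal = [1.0, 1.01, 0.99, 1.0, 1.02, 1.01, 1.05, 1.4, 2.2, 3.8, 6.0]
print(induction_time(times, signal))  # the rise is detected at t = 7
```

Because both methods reduce a curve to a single crossing time, their ITs can rank samples consistently (as seen within oil types) even when the absolute values differ by the 20–36-fold factor the abstract reports.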


Relative accuracy and predictive ability of direct valuation methods, price to aggregate earnings method and a hybrid approach

ACCOUNTING & FINANCE, Issue 4 2006
Lucie Courteau
JEL classification: M41
Abstract In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year (1990–2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return-prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates, compared to either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other. [source]
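The hybrid idea can be illustrated with percentage pricing errors on invented numbers; note that the paper's hybrid need not be the simple equal-weight average used here:

```python
# Hedged sketch: percentage pricing errors for two value estimates and a
# naive equal-weight hybrid. Price and estimates are invented; when the
# two methods err on opposite sides of price, averaging cancels error.
def pct_error(estimate, price):
    """Absolute pricing error as a fraction of observed price."""
    return abs(estimate - price) / price

price = 100.0
direct_value = 90.0     # direct valuation estimate (errs low)
multiple_value = 116.0  # earnings-multiplier estimate (errs high)
hybrid_value = 0.5 * (direct_value + multiple_value)

for name, v in [("direct", direct_value),
                ("multiplier", multiple_value),
                ("hybrid", hybrid_value)]:
    print(f"{name}: {pct_error(v, price):.1%}")
```

The cancellation only helps when the two methods' errors are imperfectly correlated, which is consistent with the abstract's suggestion of using one approach as a check on the other.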


Sports forecasting: a comparison of the forecast accuracy of prediction markets, betting odds and tipsters

JOURNAL OF FORECASTING, Issue 1 2009
Martin Spann
Abstract This article compares the forecast accuracy of different methods, namely prediction markets, tipsters and betting odds, and assesses the ability of prediction markets and tipsters to generate profits systematically in a betting market. We present the results of an empirical study that uses data from 678–837 games of three seasons of the German premier soccer league. Prediction markets and betting odds perform equally well in terms of forecasting accuracy, but both methods strongly outperform tipsters. A weighting-based combination of the forecasts of these methods leads to a slightly higher forecast accuracy, whereas a rule-based combination improves forecast accuracy substantially. However, none of the forecasts leads to systematic monetary gains in betting markets because of the high fees (25%) charged by the state-owned bookmaker in Germany. Lower fees (e.g., approximately 12% or 0%) would provide systematic profits if punters exploited the information from prediction markets and bet only on a selected number of games. Copyright © 2008 John Wiley & Sons, Ltd. [source]
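The role of the bookmaker's take-out rate can be seen from expected returns per unit staked. A Python sketch with an invented forecasting edge (the probabilities below are illustrative, not estimates from the study):

```python
# Hedged sketch: how a bookmaker fee erases a forecasting edge. With a
# take-out rate t, fair decimal odds 1/p are reduced to (1 - t)/p, so a
# forecaster's edge must exceed the fee to yield a positive expectation.
def expected_return(p_true, p_implied, takeout):
    """Expected profit per unit staked on an outcome the forecaster
    believes has probability p_true, priced by the bookmaker from
    probability p_implied with take-out rate takeout."""
    payout = (1 - takeout) / p_implied   # decimal odds after fees
    return p_true * payout - 1

# Invented edge: forecaster says 55%, bookmaker prices the outcome at 45%
edge_bet = dict(p_true=0.55, p_implied=0.45)

for fee in (0.25, 0.12, 0.0):
    ret = expected_return(takeout=fee, **edge_bet)
    print(f"fee {fee:.0%}: expected return {ret:+.1%}")
```

Under this (hypothetical) edge, the bet is unprofitable at a 25% fee but profitable at 12% or 0%, mirroring the abstract's conclusion that lower fees plus selective betting would allow systematic profits.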


Performance of algebraic multigrid methods for non-symmetric matrices arising in particle methods

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2010
B. Seibold
Abstract Large linear systems with sparse, non-symmetric matrices are known to arise in the modeling of Markov chains or in the discretization of convection–diffusion problems. Due to their potential to solve sparse linear systems with an effort that is linear in the number of unknowns, algebraic multigrid (AMG) methods are of fundamental interest for such systems. For symmetric positive definite matrices, fundamental theoretical convergence results have been established, and efficient AMG solvers have been developed. In contrast, for non-symmetric matrices, theoretical convergence results have been provided only recently. A property that is sufficient for convergence is that the matrix be an M-matrix. In this paper, we present how the simulation of incompressible fluid flows with particle methods leads to large linear systems with sparse, non-symmetric matrices. In each time step, the Poisson equation is approximated by meshfree finite differences. While traditional least squares approaches do not guarantee an M-matrix structure, an approach based on linear optimization yields optimally sparse M-matrices. For both types of discretization approaches, we investigate the performance of a classical AMG method, as well as an algebraic multilevel iteration (AMLI) type method. While in the test problems considered the M-matrix structure turns out not to be necessary for the convergence of AMG, problems can occur when it is violated. In addition, the matrices obtained by the linear optimization approach result in fast solution times due to their optimal sparsity. Copyright © 2010 John Wiley & Sons, Ltd. [source]
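Whether a discretization matrix has the M-matrix sign pattern and dominance structure is straightforward to screen for. A Python sketch of a simple check (positive diagonal, non-positive off-diagonals, weak row diagonal dominance; irreducibility plus at least one strictly dominant row, as in the 1-D Poisson example below, would then certify a nonsingular M-matrix):

```python
# Hedged sketch: screening a dense matrix (list of lists) for the
# M-matrix sign pattern and weak diagonal dominance. This is a simple
# screen, not a full nonsingularity certificate.
def m_matrix_screen(A):
    n = len(A)
    for i in range(n):
        if A[i][i] <= 0:
            return False               # diagonal must be positive
        off = 0.0
        for j in range(n):
            if j != i:
                if A[i][j] > 0:
                    return False       # positive off-diagonal: not an M-matrix
                off += -A[i][j]
        if A[i][i] < off:
            return False               # row not even weakly dominant
    return True

# 1-D Poisson finite-difference matrix: a classic (irreducible) M-matrix
poisson = [
    [ 2, -1,  0],
    [-1,  2, -1],
    [ 0, -1,  2],
]
# A non-symmetric matrix with a positive off-diagonal entry fails the screen
bad = [
    [ 2,  0.5, -1],
    [-1,  2,   -1],
    [ 0, -1,    2],
]
print(m_matrix_screen(poisson), m_matrix_screen(bad))
```

A least-squares meshfree stencil can produce entries like those in `bad`, which is the situation the abstract flags; the linear-optimization construction is designed to keep the `poisson`-like sign structure.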