Proposed Procedure (proposed + procedure)
Selected Abstracts

Optimal seismic design of steel frame buildings based on life cycle cost considerations
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2003. Min Liu
Abstract: A multi-objective optimization procedure is presented for designing steel moment-resisting frame buildings within a performance-based seismic design framework. Life cycle costs are considered by treating the initial material costs and the lifetime seismic damage costs as two separate objectives. Practical design/construction complexity, important but difficult to include in an initial cost analysis, is taken into account through a proposed diversity index as a third objective. Structural members are selected from a database of commercially available wide-flange steel sections. Current seismic design criteria (AISC-LRFD seismic provisions and the 1997 NEHRP provisions) are used to check the validity of any design alternative. The seismic performance, in terms of the maximum inter-storey drift ratio, of a code-verified design is evaluated using an equivalent single-degree-of-freedom system obtained through a static pushover analysis of the original multi-degree-of-freedom frame building. A simple genetic algorithm is used to find a Pareto-optimal design set. A numerical example of designing a five-storey perimeter steel frame building with the proposed procedure is provided. It is found that a wide range of valid design alternatives exists, from which a decision maker selects the one that balances the different objectives in the most preferred way. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Electroanalytical Determination of Promethazine Hydrochloride in Pharmaceutical Formulations on Highly Boron-Doped Diamond Electrodes Using Square-Wave Adsorptive Voltammetry
ELECTROANALYSIS, Issue 18 2008. Francisco Wirley
Abstract: The electrochemical oxidation of promethazine hydrochloride was studied on highly boron-doped diamond electrodes. Cyclic voltammetry experiments showed that the oxidation mechanism involves the formation of an adsorbed product that is more readily oxidized, producing a new peak at lower potential values whose intensity can be increased by applying the accumulation potential for given times. The parameters were optimized, and the highest current intensities were obtained by applying +0.78 V for 30 seconds. The square-wave adsorptive voltammetry results obtained in BR buffer showed two well-defined peaks, dependent on the pH and on the voltammetric parameters. The best responses were obtained at pH 4.0, a frequency of 50 s⁻¹, a step of 2 mV, and an amplitude of 50 mV. Under these conditions, linear responses were obtained for concentrations from 5.96×10⁻⁷ to 4.76×10⁻⁶ mol L⁻¹, with calculated detection limits of 2.66×10⁻⁸ mol L⁻¹ (8.51 µg L⁻¹) for peak 1 and 4.61×10⁻⁸ mol L⁻¹ (14.77 µg L⁻¹) for peak 2. The precision and accuracy were evaluated by repeatability and reproducibility experiments, which yielded values of less than 5.00% for both voltammetric peaks. The applicability of this procedure was tested on commercial formulations of promethazine hydrochloride by observing the stability, specificity, recovery and precision of the procedure in complex samples. All results were compared with the procedure recommended by the British Pharmacopoeia. The voltammetric results indicate that the proposed procedure is stable and sensitive, with good reproducibility even when the accumulation steps involve short times. It is therefore very suitable for the development of electroanalytical procedures, providing adequate sensitivity and a reliable method. [source]
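Detection limits of the kind quoted above are conventionally computed as three times the blank standard deviation divided by the calibration slope. A minimal sketch of that calculation (the calibration and blank numbers below are hypothetical, not the authors' data):

```python
import numpy as np

# Hypothetical calibration data for a voltammetric peak:
# concentrations (mol/L) and baseline-corrected peak currents (µA).
conc = np.array([5.96e-7, 1.0e-6, 2.0e-6, 3.0e-6, 4.76e-6])
current = np.array([0.31, 0.52, 1.05, 1.57, 2.49])

# Ordinary least-squares fit of the linear calibration curve.
slope, intercept = np.polyfit(conc, current, 1)

# Standard deviation of replicate blank signals (hypothetical values).
blank = np.array([0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.011])
s_blank = blank.std(ddof=1)

# 3-sigma detection limit: smallest signal distinguishable from the blank.
lod = 3 * s_blank / slope
print(f"slope = {slope:.3e} µA·L/mol, LOD = {lod:.2e} mol/L")
```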
Silver Amalgam Film Electrode of Prolonged Application in Stripping Chronopotentiometry
ELECTROANALYSIS, Issue 18 2007. Kapturski
Abstract: The utility of a cylindrical silver-based mercury film electrode of prolonged analytical application in stripping chronopotentiometry (SCP) was examined. This electrode gave good reproducibility of results owing to its special design, which enables regeneration of the thin layer before each measurement cycle. The accessible potential window in KNO₃ (pH 2) and in acetate and ammonia buffers was defined, and the optimal conditions (i.e., stripping current, deposition potential and deposition time) for the determination of Cd and Pb traces were selected. The detection limits, obtained for an accumulation time of 60 s, were 0.023 µg/L for Cd and 0.075 µg/L for Pb. The response increases linearly with Cd, Pb and Zn concentration, up to at least 100 µg/L. It was also shown that the proposed procedure ensures excellent separation of the In and Tl, Pb and Tl, or In and Cd signals. The method was tested with dolomite and lake sediment samples, and good agreement with reference values was achieved. The results showed good reproducibility (RSD = 5–6%) and reliability. [source]

Disposable Gold Electrodes with Reproducible Area Using Recordable CDs and Toner Masks
ELECTROANALYSIS, Issue 1 2006. Denise Lowinsohn
Abstract: The fabrication and characterization of very cheap disposable gold disk electrodes with reproducible areas is reported. The innovation of the proposed procedure is the use of toner masks to define reproducible areas on uniform gold surfaces obtained from recordable compact disks (CD-R). The toner masks are drawn in a laser printer and heat-transferred to the gold surfaces, defining the electrode areas exactly. The electrochemical behavior of these disposable electrodes was investigated by cyclic voltammetry in Fe(CN)₆⁴⁻ solutions. The relative standard deviation of the signals obtained from 10 different gold electrodes was below 1%. The size of the disk electrodes can be easily controlled, as attested by voltammetric responses recorded using electrodes with radii varying from 0.5 to 3.0 mm. The advantages of using this kind of electrode for analytical measurements of substances that adsorb strongly on the electrode surface, such as cysteine, are also addressed. [source]

Non-parametric tests and confidence regions for intrinsic diversity profiles of ecological populations
ENVIRONMETRICS, Issue 8 2003. Tonio Di Battista
Abstract: Evaluation of diversity profiles is useful for ecologists to quantify the diversity of biological communities. Measures of the diversity profile can be expressed as functions of the unknown abundance vector, so the estimators and the related confidence regions and tests of hypotheses involve aspects of multivariate analysis. In this setting, using a suitable sampling design, inference has been developed assuming an asymptotic specific distribution of the profile estimator. However, in a biological framework ecologists work with small sample sizes, and relying on any asymptotic probability distribution is hazardous. Assuming that the sample belongs to the family of replicated sampling designs, we show that the diversity profile estimator can be expressed as a linear combination of the ranked abundance vector estimators. Hence we are able to develop a non-parametric approach based on the bootstrap in order to build balanced simultaneous confidence sets and tests of hypotheses for diversity profiles. Finally, the proposed procedure is applied to the avian populations of four parks in Milan, Italy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
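The bootstrap construction in the diversity-profile abstract above can be sketched in a toy form: resample individuals from a single hypothetical abundance vector and build a pointwise percentile band over a Hill-number diversity profile. The paper builds balanced simultaneous sets, which need an extra calibration step omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_profile(counts, qs):
    """Hill-number diversity profile N_q for a vector of abundance counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    out = []
    for q in qs:
        if np.isclose(q, 1.0):
            out.append(np.exp(-(p * np.log(p)).sum()))  # exp(Shannon entropy)
        else:
            out.append((p ** q).sum() ** (1.0 / (1.0 - q)))
    return np.array(out)

# Hypothetical abundance vector for one avian community.
counts = np.array([120, 80, 40, 25, 10, 5, 3, 2])
qs = np.linspace(0.0, 3.0, 13)

# Nonparametric bootstrap: resample individuals, recompute the profile.
n = counts.sum()
p_hat = counts / n
boot = np.array([hill_profile(rng.multinomial(n, p_hat), qs)
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # pointwise 95% band
```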
Unit commitment at frequency security condition
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2 2001. X. Lei
Abstract: In island grids and weakly interconnected power systems, the loss of a large proportion of generation causes the system frequency to fall dramatically. To ensure stable operation with the lowest impact on the system, the disturbed power balance must be restored within a short specified time by activating the second reserve of on-line units, by load shedding, or both. Unit commitment procedures must consider these factors to ensure a reliable power supply while minimizing fuel costs. This paper presents a unit commitment procedure that takes the frequency security condition of the system into account. The procedure commits and optimizes units, calculates the necessary second-reserve capability, and allocates it among the available on-line units. In parallel with the minimization of daily fuel costs, a specified frequency minimum following the loss of generation is employed as a criterion for maintaining system security. A case study on typical island systems with a large number of different units demonstrates the proposed procedure. Results from the study validated the robust performance of the proposed procedure, which minimizes fuel costs while maintaining the frequency security condition. This paper considers only the frequency security condition, but the procedure can also be extended with other criteria, such as transmission capability during transient conditions of interconnected systems. [source]

A Simple Correction for Slug Tests in Small-Diameter Wells
GROUND WATER, Issue 3 2002. James J. Butler Jr.
Abstract: A simple procedure is presented for correcting hydraulic conductivity (K) estimates obtained from slug tests performed in small-diameter installations screened in highly permeable aquifers. Previously reported discrepancies between results from slug tests in small-diameter installations and those from tests in nearby larger-diameter wells are primarily a product of frictional losses within the small-diameter pipe. These frictional losses are readily incorporated into existing models for slug tests in high-K aquifers, which then serve as the basis of a straightforward procedure for correcting previously obtained K estimates. A demonstration of the proposed procedure using data from a series of slug tests performed in a controlled field setting confirms the validity of the approach. The results of this demonstration also reveal the detailed view of spatial variations in K that can be obtained using slug tests in small-diameter installations. [source]

Assessment of flooding in urbanized ungauged basins: a case study in the Upper Tiber area, Italy
HYDROLOGICAL PROCESSES, Issue 10 2005. T. Moramarco
Abstract: The reliability of a procedure for investigating flooding in an ungauged river reach close to an urban area is examined. The approach applies a semi-distributed rainfall–runoff model to a gauged basin that includes the flood-prone area; the model furnishes the inlet flow conditions for a two-dimensional hydraulic model whose computational domain is the urban area. The flood event that occurred in October 1998 in the Upper Tiber river basin and caused significant damage in the town of Pieve S. Stefano was used to test the approach. The built-up area, often inundated, lies within the gauged basin of the Montedoglio dam (275 km²), for which the rainfall–runoff model was adapted and calibrated on three flood events without over-bank flow. With the selected set of parameters, the hydrological model was reasonably accurate in simulating the discharge hydrographs of the three events, whereas the flood event of October 1998 was simulated poorly, with errors in peak discharge and time to peak of −58% and 20%, respectively. This discrepancy was ascribed to the combined effect of the spatial variability of rainfall and a partial obstruction of the bridge located in Pieve S. Stefano. Indeed, allowing for the latter hypothesis, the hydraulic model reproduced the observed flooded urban area with fair accuracy. Moreover, when the flow resulting from a sudden clearing of the obstruction, simulated by a shock-capturing one-dimensional hydraulic model, was incorporated into the hydrological model, the discharge hydrograph at the basin outlet was well represented if the rainfall was assumed to have occurred in the region near the main channel. This was simulated by considerably reducing the dynamic parameter, the lag time, of the instantaneous unit hydrograph for each homogeneous element into which the basin is divided. The errors in peak discharge and time to peak decreased by a few percent. A sensitivity analysis of both the flooding volume involved in the shock wave and the lag time showed that the latter parameter requires careful evaluation. Moreover, an analysis of hydrograph peak prediction under errors in the rainfall input showed that the resulting error in peak discharge was smaller than the corresponding input error. The results therefore support the hypothesis about the causes that triggered the complex event of October 1998, and indicate that the proposed procedure can be conveniently adopted for flood risk evaluation in ungauged river basins containing built-up areas. The need for a more detailed analysis of the processes of runoff generation and flood routing is also highlighted. Copyright © 2005 John Wiley & Sons, Ltd. [source]
A counterfort versus a cantilever retaining wall: a seismic equivalence
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2005. Ashok K. Chugh
Abstract: A procedure is presented to develop geometric dimensions and material property values for a model cantilever wall from those of a prototype counterfort wall, such that the model wall simulates the response of the prototype wall under seismic loads. The equivalency criteria are given. A sample problem is included to illustrate the use of the proposed procedure, and the accuracy of the results is discussed. Published in 2005 by John Wiley & Sons, Ltd. [source]

An accelerated algorithm for parameter identification in a hierarchical plasticity model accounting for material constraints
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3 2001. L. Simoni
Abstract: The parameter identification procedure proposed in this paper is based on the solution of an inverse problem, which relies on the minimization of an error function of least-squares type. The solution of the ensuing optimization problem, which is constrained owing to the presence of physical links between the optimization parameters, is obtained by means of a particular technique of the feasible-direction type, modified and improved for the case in which the problem turns into an unconstrained one. The algorithm is particularly efficient in the presence of hierarchical material models. The numerical properties of the proposed procedure are discussed, and its behaviour is compared with that of standard optimization methods when applied to constrained and unconstrained problems. Copyright © 2001 John Wiley & Sons, Ltd. [source]
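The constrained least-squares identification described in the last abstract can be illustrated under simplifying assumptions: a hypothetical two-parameter hardening model with box constraints standing in for the physical links between parameters, and a generic scipy solver in place of the paper's feasible-direction algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical hardening-type model response; theta = (a, b).
def model(theta, x):
    a, b = theta
    return a * (1.0 - np.exp(-b * x))

def residuals(theta, x, y_obs):
    # Error function of least-squares type: model minus observations.
    return model(theta, x) - y_obs

x = np.linspace(0.0, 5.0, 40)
rng = np.random.default_rng(1)
y_obs = model((2.0, 0.8), x) + 0.05 * rng.standard_normal(x.size)

# Box constraints are a simplified stand-in for the physical links
# handled by the paper's feasible-direction technique.
fit = least_squares(residuals, x0=(1.0, 0.1),
                    bounds=([0.0, 0.0], [10.0, 5.0]), args=(x, y_obs))
print(fit.x)  # recovered (a, b)
```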
A 2-D time-domain boundary element method with damping
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2001. Feng Jin
Abstract: A new material damping model that is convenient for use in the time-domain boundary element method (TDBEM) is presented and implemented in a proposed procedure. Since only fundamental solutions for linear elastic material are employed, the procedure is highly efficient and easy to integrate into current TDBEM codes. Analytical and numerical results for benchmark problems demonstrate that the accuracy of the proposed method is high. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Identification of autoregressive models in the presence of additive noise
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 5 2008. Roberto Diversi
Abstract: A common approach to modeling signals in many engineering applications consists in adopting autoregressive (AR) models, i.e., filters with transfer functions having a unitary numerator, driven by white noise. Despite their wide application, these models do not take into account the possible presence of errors on the observations and can prove inaccurate when these errors are significant. AR plus noise models are an extension of AR models that also consider the presence of observation noise. This paper describes a new algorithm for the identification of AR plus noise models that offers a very good compromise between accuracy and efficiency. The algorithm, taking advantage of both low-order and high-order Yule–Walker equations, also guarantees the positive definiteness of the autocorrelation matrix of the estimated process and allows the equation-error and observation-noise variances to be estimated. It is also shown how the proposed procedure can be used to estimate the order of the AR model. The new algorithm is compared with some traditional algorithms by means of Monte Carlo simulations. Copyright © 2007 John Wiley & Sons, Ltd. [source]
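The noise robustness that high-order Yule–Walker equations contribute can be seen in a minimal sketch: for lags k ≥ p+1 the AR recursion involves only autocovariances at nonzero lags, which white observation noise leaves untouched. The code below is a plain high-order Yule–Walker estimator, not the paper's full algorithm (which also exploits the low-order equations and enforces positive definiteness):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(2) process x and a noisy observation y = x + e.
a_true = np.array([1.5, -0.7])          # x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + w_t
n = 20000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a_true @ x[t-2:t][::-1] + rng.standard_normal()
y = x + 0.8 * rng.standard_normal(n)    # additive white observation noise

def acov(z, k):
    z = z - z.mean()
    return (z[:len(z) - k] * z[k:]).mean()

# High-order Yule-Walker: for k >= p+1 every autocovariance in the AR
# recursion sits at a nonzero lag, so the observation noise cancels out.
p, extra = 2, 8
lags = np.arange(p + 1, p + 1 + extra)
R = np.array([[acov(y, k - i) for i in range(1, p + 1)] for k in lags])
r = np.array([acov(y, k) for k in lags])
a_hat, *_ = np.linalg.lstsq(R, r, rcond=None)
print(a_hat)   # close to a_true despite the noise
```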
Improved bolus arrival time and arterial input function estimation for tracer kinetic analysis in DCE-MRI
JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 1 2009. Anup Singh PhD
Abstract: Purpose: To develop a methodology for improved estimation of the bolus arrival time (BAT) and the arterial input function (AIF), which are prerequisites for tracer kinetic analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data, and to verify its applicability to intracranial lesions (brain tumor and tuberculoma). Materials and Methods: A continuous piecewise-linear (PL) model, with the BAT as one of its free parameters, is proposed for the concentration-time curve C(t) in T1-weighted DCE-MRI. The resulting improved procedure for automatic extraction of the AIF is compared with earlier methods. The accuracy of the BAT and the other estimated parameters is tested on simulated as well as experimental data. Results: The proposed PL model provides a good approximation of the C(t) trends of interest, and the fitted parameters prove useful for better understanding and classification of different tissues. The BAT was correctly estimated. The automatic and robust estimation of the AIF obtained using the proposed methodology also corrects for partial volume effects. The accuracy of the tracer kinetic analysis is improved, and the proposed methodology also reduces the time complexity of the computations. Conclusion: The PL model parameters, together with the AIF measured by the proposed procedure, can be used for improved tracer kinetic analysis of DCE-MRI data. J. Magn. Reson. Imaging 2009;29:166–176. © 2008 Wiley-Liss, Inc. [source]
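A piecewise-linear concentration-time model with the bolus arrival time as a free parameter can be fitted by ordinary nonlinear least squares. The three-piece form and all the numbers below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np
from scipy.optimize import least_squares

def pl_curve(theta, t):
    """Continuous piecewise-linear C(t): zero until the bolus arrival time t0,
    an uptake slope s1 until a knee t1, then a washout slope s2."""
    t0, t1, s1, s2 = theta
    c = np.zeros_like(t)
    rise = (t > t0) & (t <= t1)
    tail = t > t1
    c[rise] = s1 * (t[rise] - t0)
    c[tail] = s1 * (t1 - t0) + s2 * (t[tail] - t1)
    return c

t = np.linspace(0.0, 120.0, 60)                 # seconds
rng = np.random.default_rng(3)
c_obs = pl_curve((20.0, 45.0, 0.08, -0.01), t) + 0.02 * rng.standard_normal(t.size)

# The BAT (t0) is estimated jointly with the other segment parameters.
fit = least_squares(lambda th: pl_curve(th, t) - c_obs,
                    x0=(10.0, 60.0, 0.05, 0.0),
                    bounds=([0, 0, 0, -1], [60, 120, 1, 1]))
print(fit.x)   # fitted (BAT, knee, uptake slope, washout slope)
```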
Determination of 17β-estradiol in bovine plasma: development of a highly sensitive technique by ion trap gas chromatography–tandem mass spectrometry using negative chemical ionization
JOURNAL OF MASS SPECTROMETRY (INCORP. BIOLOGICAL MASS SPECTROMETRY), Issue 12 2002. Giancarlo Biancotto
Abstract: A novel approach to the determination of 17β-estradiol in bovine plasma is presented. The enhanced sensitivity is gained by applying tandem mass spectrometry (MS) fragmentation to a stable, well-characterized negative ion produced by chemical ionization with methane as the reagent gas. A specific derivatizing reactant, pentafluorobenzyl bromide, is employed in combination with bis-trimethylsilyltrifluoroacetamide to favor the formation of a diagnostic precursor negative ion. Plasma samples are purified on a C18 solid-phase extraction column and derivatized before gas chromatography–MS analysis. The accuracy and precision of the method, tested over a set of spiked samples, were satisfactory. The limit of detection was found to be 5 pg ml⁻¹, and the limit of quantification was fixed at 20 pg ml⁻¹. The fragmentation pattern is fully explained, and the method is applicable to the official analysis of bovine plasma for the detection of 17β-estradiol according to European criteria 256/93 and draft SANCO/1805/2000 rev. 3. The quantification of incurred positive samples was performed according to the proposed procedure and compared with the results obtained by a standardized radioimmunoassay; the estimated concentrations were in close agreement. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Chromotropic acid-functionalized polyurethane foam: A new sorbent for on-line preconcentration and determination of cobalt and nickel in lettuce samples
JOURNAL OF SEPARATION SCIENCE, JSS, Issue 9 2006. Valfredo Azevedo Lemos
Abstract: A chromotropic acid-functionalized polyurethane foam has been developed for use in an on-line preconcentration system for cobalt and nickel determination. The packing material was prepared by covalent coupling of chromotropic acid to the polyurethane foam through an azo group. Co and Ni ions were sorbed in a minicolumn, from which they could be eluted directly into the nebulizer–burner system of a flame atomic absorption spectrometer. Elution of cobalt and nickel from the minicolumn can be accomplished with 0.50 and 0.75 M HCl solutions, respectively. The enrichment factors obtained were 22 (Co) and 27 (Ni) for a 60 s preconcentration time, and 57 (Co) and 59 (Ni) for a preconcentration time of 180 s. Under the optimum conditions, the proposed procedure allowed the determination of the metals with detection limits of 0.43 µg/L (cobalt) and 0.52 µg/L (nickel), using preconcentration periods of 180 s. The accuracy of the developed procedure was evaluated by analysis of the certified reference materials NIST 1515 Apple Leaves and NIST 1570a Spinach Leaves. The method was applied to the analysis of lettuce samples. The cobalt content of the samples analyzed varied from 0.75 to 0.98 µg/g; nickel was not detected in the lettuce samples. [source]

Penalized spline models for functional principal component analysis
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2006. Fang Yao
Summary: We propose an iterative estimation procedure for performing functional principal component analysis. The procedure is aimed at functional or longitudinal data where the repeated measurements from the same subject are correlated. An increasingly popular smoothing approach, penalized spline regression, is used to represent the mean function; this allows straightforward incorporation of covariates and simple implementation of approximate inference procedures for the coefficients. To handle the within-subject correlation, we develop an iterative procedure that reduces the dependence between the repeated measurements made on the same subject. The resulting data after iteration are shown theoretically to be asymptotically equivalent (in probability) to a set of independent data. This suggests that the general theory of penalized spline regression developed for independent data can also be applied to functional data. The effectiveness of the proposed procedure is demonstrated via a simulation study and an application to yeast cell cycle gene expression data. [source]

An adaptive empirical Bayesian thresholding procedure for analysing microarray experiments with replication
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2007. Rebecca E. Walls
Summary: A typical microarray experiment attempts to ascertain which genes display differential expression in different samples. We model the data using a two-component mixture model and develop an empirical Bayesian thresholding procedure, originally introduced for thresholding wavelet coefficients, as an alternative to the existing methods for determining differential expression across thousands of genes. The method is built on sound theoretical properties and has an easy computer implementation in the R statistical package. Furthermore, we consider improvements to the standard empirical Bayesian procedure when replication is present, to increase the robustness and reliability of the method. We provide an introduction to microarrays for those who are unfamiliar with the field, and the proposed procedure is demonstrated with applications to two-channel complementary DNA microarray experiments. [source]
Minimizing the cost of placing and sizing wavelength division multiplexing and optical crossconnect equipment in a telecommunications network
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2005. Belén Melián
Abstract: Cost reduction is a major concern when designing optical fiber networks. Multiwavelength optical devices are a new technology for increasing the capacity of fiber networks while reducing costs, compared with installing traditional (e.g., SONET) equipment and new fiber. In this article we discuss the development of a metaheuristic method that seeks to optimize the location of wavelength division multiplexing (WDM) and optical crossconnect (OXC) equipment in fiber networks. The procedure combines ideas from the scatter search, tabu search and multistart methodologies. Computational experiments with both real-world and artificial data show the effectiveness of the proposed procedure. The experiments include a comparison with a permutation-based approach and with lower bounds generated with CPLEX. © 2005 Wiley Periodicals, Inc. NETWORKS, Vol. 45(4), 199–209, 2005 [source]

Characterization of BOLD activation in multi-echo fMRI data using fuzzy cluster analysis and a comparison with quantitative modeling
NMR IN BIOMEDICINE, Issue 7-8 2001. Markus Barth
Abstract: A combination of multiple gradient-echo imaging and exploratory data analysis (EDA), i.e. fuzzy cluster analysis (FCA), is proposed for the separation and characterization of BOLD activation in single-shot spiral functional magnetic resonance imaging (fMRI) experiments at 3 T. Differentiation of functional activation using FCA is performed by clustering the pixel signal changes (ΔS) as a function of echo time (TE). Further vascular classification is supported by the localization of activation and by comparison with a single-exponential decay model. In some subjects, an additional indication of large vessels within a voxel was found in the form of an oscillation of the fMRI signal difference versus echo time (TE). Such large vessels may be separated from small-vessel activation, and our proposed procedure might therefore prove useful where a more specific functional localization is desired in fMRI. In addition to the signal change ΔS, ΔT2*/T2* differs significantly between activated regions. Averaged over all eight subjects, ΔT2* is 1.7 ± 0.2 ms in the ROIs with the highest signal change, characterized as containing large vessels, whereas in ROIs corresponding to a microvascular environment the average ΔT2* is 0.8 ± 0.1 ms. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Using Neural Networks to Detect and Classify Out-of-control Signals in Autocorrelated Processes
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2003. R. Noorossana
Abstract: This paper presents an artificial neural network model for detecting and classifying three types of non-random disturbances, referred to as level shifts, additive outliers and innovational outliers, which are common in autocorrelated processes. To the best of our knowledge, this is the first time that a neural network has been considered for the simultaneous detection and classification of such non-random disturbances. An AR(1) model is considered to characterize the quality characteristic of interest in a continuous process where autocorrelated observations are generated over time. The performance of the proposed procedure is evaluated through a numerical example. Preliminary results indicate that the procedure can be used effectively to detect and classify unusual shocks in autocorrelated processes. Copyright © 2003 John Wiley & Sons, Ltd. [source]
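Training data for a disturbance classifier of the kind just described can be simulated directly from the AR(1) model with each disturbance type injected at a known time. A minimal sketch (disturbance size, position, series length and class balance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_with_disturbance(n=100, phi=0.5, kind="none", size=3.0, at=50):
    """AR(1) observations with an optional level shift (LS), additive
    outlier (AO) or innovational outlier (IO) injected at time `at`."""
    e = rng.standard_normal(n)
    if kind == "IO":
        e[at] += size                 # shock enters through the innovations
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    if kind == "LS":
        x[at:] += size                # permanent shift in level
    elif kind == "AO":
        x[at] += size                 # one-off measurement spike
    return x

# Labeled training windows for a four-class detector/classifier.
kinds = ["none", "LS", "AO", "IO"]
X = np.stack([ar1_with_disturbance(kind=k) for k in kinds for _ in range(250)])
y = np.repeat(np.arange(4), 250)
```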
Determination of total urinary mercury by on-line sample microwave digestion followed by flow injection cold vapour inductively coupled plasma mass spectrometry or atomic absorption spectrometry
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 15 2002. M. Bettinelli
Abstract: The total mercury content of urine was determined by inductively coupled plasma mass spectrometry with the so-called cold vapour method after on-line oxidative treatment of the sample in a microwave oven (FI-MW-CV-ICP-MS). The use of a KBr/KBrO₃ mixture, microwave digestion and final oxidation with KMnO₄ assures complete recovery of the organic forms of Hg, which would otherwise be difficult to determine using the CV-ICP-MS apparatus alone. Quantitative recoveries were obtained for phenylmercury chloride (PMC), dimethylmercury (DMM), mercury acetate (MA) and methylmercury chloride (MMC). The use of automatic flow injection microwave (FI-MW) systems for sample treatment reduces environmental contamination and provides detection limits suitable for the determination of reference values. Since no certified reference materials were commercially available in the concentration ranges of interest, the accuracy of the proposed procedure was assessed by analysing a series of urine samples with two independent techniques, ICP-MS and AAS. With the FI-MW-CV-ICP-MS technique the detection limit was 0.03 µg/L Hg, while with FI-MW-CV-AAS it was 0.2 µg/L Hg. The precision of the method was better than 2–3% for FI-MW-CV-ICP-MS and about 3–5% for FI-MW-CV-AAS at concentrations below 1 µg/L Hg. These results show that ICP-MS can be considered a "reference technique" for the determination of total urinary Hg at the very low concentrations present in non-exposed subjects. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Distinguishing between trend-break models: method and empirical evidence
THE ECONOMETRICS JOURNAL, Issue 2 2001. Chih-Chiang Hsu
Abstract: We demonstrate that in time-trend models the likelihood-based tests of partial parameter stability have size distortions and cannot be applied to detect the changing parameter. A two-step procedure is then proposed to distinguish between different trend-break models. This procedure involves consistent estimation of the break dates and properly sized tests for the changing coefficient. In an empirical study of the Nelson–Plosser data set, we find that the estimated change points and trend-break specifications resulting from the proposed procedure are quite different from those of Perron (1989, 1997), Chu and White (1992), and Zivot and Andrews (1992). In another application, our procedure provides formal support for the conclusion of Ben-David and Papell (1995) that the real per capita GDPs of most OECD countries exhibit a slope change in trend. [source]

A Gaussian approach for continuous time models of the short-term interest rate
THE ECONOMETRICS JOURNAL, Issue 2 2001. Jun Yu
Abstract: This paper proposes a Gaussian estimator for nonlinear continuous-time models of the short-term interest rate. The approach is based on a stopping-time argument that produces a normalizing transformation facilitating the use of a Gaussian likelihood. A Monte Carlo study shows that the finite-sample performance of the proposed procedure improves on the discrete approximation method proposed by Nowman (1997). An empirical application to US and British interest rates is given. [source]
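For the linear (Vasicek) special case, Gaussian estimation reduces to ordinary least squares on the exactly discretized transition equation; the paper's contribution is the normalizing transformation that extends the idea to nonlinear drifts. A minimal sketch of the linear special case (all parameter values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Exact discretization of the Vasicek model dr = kappa*(theta - r)dt + sigma dW:
# r_{t+dt} = theta + (r_t - theta)*exp(-kappa*dt) + Gaussian noise.
kappa, theta, sigma, dt, n = 0.5, 0.05, 0.01, 1 / 12, 600
phi = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa))
r = np.empty(n); r[0] = 0.05
for t in range(n - 1):
    r[t + 1] = theta + (r[t] - theta) * phi + sd * rng.standard_normal()

# Gaussian (exact-likelihood) estimation here is OLS of r_{t+1} on r_t.
b, a = np.polyfit(r[:-1], r[1:], 1)           # slope, intercept
kappa_hat = -np.log(b) / dt
theta_hat = a / (1 - b)
resid = r[1:] - (a + b * r[:-1])
sigma_hat = resid.std(ddof=2) * np.sqrt(2 * kappa_hat / (1 - b**2))
print(kappa_hat, theta_hat, sigma_hat)
```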
Two-sample Comparison Based on Prediction Error, with Applications to Candidate Gene Association Studies
ANNALS OF HUMAN GENETICS, Issue 1 2007. K. Yu
Summary: To take advantage of the increasingly available high-density SNP maps across the genome, various tests that compare multilocus genotypes or estimated haplotypes between cases and controls have been developed for candidate gene association studies. Here we view this two-sample testing problem from the perspective of supervised machine learning and propose a new association test. The approach adopts the flexible and easy-to-understand classification tree model as the learning machine, and uses the estimated prediction error of the resulting prediction rule as the test statistic. This procedure not only provides an association test but also generates a prediction rule that can be useful in understanding the mechanisms underlying complex disease. Under the set-up of a haplotype-based transmission/disequilibrium test (TDT) type of analysis, we find through simulation studies that the proposed procedure has the correct type I error rates and is robust to population stratification. The power of the proposed procedure is sensitive to the chosen prediction error estimator; among the commonly used estimators, the .632+ estimator results in a test with the best overall performance. We also find that the test using the .632+ estimator is more powerful than the standard single-point TDT analysis, Pearson's goodness-of-fit test based on estimated haplotype frequencies, and two haplotype-based global tests implemented in the genetic analysis package FBAT. To illustrate the application of the proposed method in population-based association studies, we use the procedure to study the association between non-Hodgkin lymphoma and the IL10 gene. [source]

Optimal predictive densities and fractional moments
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2009. Emanuele Taufer
Abstract: The maximum entropy approach, used together with fractional moments, has proven to be a flexible and powerful tool for density approximation of a positive random variable. In this paper we consider an optimality criterion based on the Kullback–Leibler distance in order to select appropriate fractional moments. We discuss the properties of the proposed procedure when all the available information comes from a sample of observations. The method is applied to the size distribution of U.S. family income. Copyright © 2008 John Wiley & Sons, Ltd. [source]

DISCRETIZED SUB-OPTIMAL TRACKER FOR NONLINEAR CONTINUOUS TWO-DIMENSIONAL SYSTEMS
ASIAN JOURNAL OF CONTROL, Issue 3 2004. Chia-Wei Chen
ABSTRACT: A discretized quadratic sub-optimal tracker for nonlinear continuous two-dimensional (2-D) systems is newly proposed in this paper. The proposed method provides a novel methodology for indirect digital redesign of nonlinear continuous 2-D systems with a continuous performance index. It includes the following features: (1) a 2-D optimal-linearization approach for the nonlinear 2-D Roesser model (RM); (2) a dynamic-programming-based discretized quadratic optimal tracker for linear continuous 2-D systems; (3) a steady-state discretized quadratic sub-optimal tracker for linear continuous 2-D systems; and (4) a discretized quadratic sub-optimal tracker for nonlinear continuous 2-D systems. Illustrative examples are presented to demonstrate the effectiveness of the proposed procedure. [source]
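The core of any discretized quadratic tracker is a backward Riccati and feedforward recursion. The sketch below applies the standard one-dimensional linear version to an assumed discretized double integrator, leaving out the paper's 2-D Roesser structure and optimal linearization entirely:

```python
import numpy as np

# Discrete-time LQ tracker for a linear system (illustrative only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 0.1])                  # state-tracking weight
R = np.array([[0.01]])                    # control weight

N = 100
r = np.zeros((N + 1, 2)); r[:, 0] = 1.0   # track a unit position reference

# Backward Riccati recursion plus feedforward term v_k driven by the reference.
S = Q.copy()
v = np.zeros((N + 1, 2)); v[N] = Q @ r[N]
K = np.zeros((N, 1, 2)); Kv = np.zeros((N, 1, 2))
for k in range(N - 1, -1, -1):
    G = np.linalg.inv(R + B.T @ S @ B)
    K[k] = G @ B.T @ S @ A                # feedback gain
    Kv[k] = G @ B.T                       # feedforward gain
    v[k] = (A - B @ K[k]).T @ v[k + 1] + Q @ r[k]
    S = Q + A.T @ S @ (A - B @ K[k])

# Forward simulation with u_k = -K_k x_k + Kv_k v_{k+1}.
x = np.zeros(2)
for k in range(N):
    u = -K[k] @ x + Kv[k] @ v[k + 1]
    x = A @ x + B @ u
print(x)   # state approaches the reference (1, 0)
```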
A GENERALIZED TWO-STAGE RANDOMIZED RESPONSE PROCEDURE IN COMPLEX SAMPLE SURVEYS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2006. Amitava Saha
Summary: The randomized response (RR) technique pioneered by Warner, S.L. (1965) [Randomized response: a survey technique for eliminating evasive answer bias. J. Amer. Statist. Assoc. 60, 63–69] is a useful tool for estimating the proportion of persons in a community bearing a sensitive or socially disapproved characteristic. Mangat, N.S. & Singh, R. (1990) [An alternative randomized response procedure. Biometrika 77, 439–442] proposed a modification of Warner's procedure using two RR techniques. Presented here are a generalized two-stage RR procedure and a derivation of the condition under which the proposed procedure produces a more precise estimator of the population parameter. A comparative study of the performance of this two-stage procedure against conventional RR techniques, assuming that the respondents' jeopardy level under the proposed method remains the same as that offered by the traditional RR procedures, is also reported. In addition, a numerical example compares the efficiency of the proposed method with the traditional RR procedures. [source]
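Warner's original design, the building block that the two-stage procedure above generalizes, has a closed-form unbiased estimator: with λ the probability of a "yes" answer, π̂ = (λ̂ − (1−p)) / (2p − 1). A minimal simulation sketch (the design probability p, prevalence and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# Warner's randomized response design: with probability p the respondent
# answers the sensitive question, otherwise its complement.
p, pi_true, n = 0.7, 0.15, 2000

sensitive = rng.random(n) < pi_true       # true (unobserved) status
ask_direct = rng.random(n) < p            # outcome of the randomizing device
yes = np.where(ask_direct, sensitive, ~sensitive)

lam_hat = yes.mean()                       # observed "yes" proportion
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)            # Warner's estimator
var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p - 1) ** 2)
print(pi_hat, var_hat ** 0.5)              # estimate and its standard error
```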
Determination of rat hepatocellular glutathione by reversed-phase liquid chromatography with fluorescence detection and cytotoxicity evaluation of environmental pollutants based on the concentration change
BIOMEDICAL CHROMATOGRAPHY, Issue 4 2001. Toshimasa Toyo'oka
Abstract: Three methods for the determination of rat hepatocellular thiols by high-performance liquid chromatography (HPLC) with fluorescence (FL) detection have been developed. The thiols in the cells were tagged with three fluorogenic reagents: SBD-F, ABD-F and DBD-F. These reagents permeate into the cells and react effectively with thiols to produce highly fluorescent derivatives, which fluoresce in the long-wavelength region at around 530 nm (excitation at around 380 nm). The five tagged biological thiols were completely separated by reversed-phase liquid chromatography and were detected sensitively and selectively, without any interference from endogenous substances. The main thiol in the cells was reduced glutathione (GSH), present at the mM level. The proposed procedures were applied to the determination of hepatocellular GSH after treatment with environmental pollutants such as volatile organic compounds (VOC) and endocrine-disrupting chemicals (EDC). From a comparison of the intracellular GSH concentrations, the test compounds were classified into four groups: compounds causing strong depletion (e.g., triphenyltin chloride, hexachlorocyclohexene, nonylphenol, bromoacetic acid, 4-chlorobenzyl chloride and 1,3-dichloropropene), a slight decrease (e.g., bisphenol A, benzo[a]pyrene, carbon tetrachloride and benzene), a slight increase (e.g., bromoform and toluene), and no effect (e.g., 1,1,1-trichloroethane, 1,1,2-trichloroethane and 1,2-dichloroethane). Although the decrease in GSH concentration does not by itself reflect the cytotoxicity of a chemical, the proposed procedure using isolated rat hepatocytes appears useful for investigating the bioactivation of VOC, EDC and related compounds. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Bayesian Detection and Modeling of Spatial Disease Clustering
BIOMETRICS, Issue 3 2000. Ronald E. Gangnon
Summary: Many current statistical methods for disease clustering studies are based on a hypothesis-testing paradigm. These methods typically do not produce useful estimates of disease rates or cluster risks. In this paper, we develop a Bayesian procedure for drawing inferences about specific models for spatial clustering. The proposed methodology incorporates ideas from image analysis, Bayesian model averaging, and model selection. With our approach, we obtain estimates of disease rates and allow for greater flexibility in both the type and the number of clusters that may be considered. We illustrate the proposed procedure through simulation studies and an analysis of the well-known New York leukemia data. [source]

Compensation of actuator delay and dynamics for real-time hybrid structural simulation
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2008. M. Ahmadizadeh
Abstract: Compensation of the delay and dynamic response of servo-hydraulic actuators is critical for the stability and accuracy of hybrid experimental-numerical simulations of the seismic response of structures. In this study, current procedures for the compensation of actuator delay are examined, and improved procedures are proposed to minimize experimental errors. The new procedures require little or no a priori information about the behavior of the test specimen or the input excitation. First, a simple approach is introduced for rapid online estimation of the system delay and the actuator command gain, thus capturing the variability of the system response through a simulation. Second, an extrapolation procedure for delay compensation, based on the same kinematics equations used in numerical integration procedures, is examined. Simulations using the proposed procedures indicate a reduction in high-frequency noise in the force measurements, which can minimize the excitation of high-frequency modes. To further verify the effectiveness of the compensation procedures, the artificial energy added to a hybrid simulation as a result of actuator tracking errors is measured and used to demonstrate the improved accuracy of the simulations. Copyright © 2007 John Wiley & Sons, Ltd. [source]
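The kinematics-based extrapolation idea for delay compensation can be sketched with backward finite differences: predict the command a delay τ ahead from its last three samples. This illustrates the general idea only, not the authors' exact scheme:

```python
import numpy as np

def compensate(history, dt, tau):
    """Predict the command a delay tau ahead by quadratic extrapolation of
    the last three samples (second-order backward differences)."""
    x0, x1, x2 = history[-3:]                 # samples at t-2dt, t-dt, t
    v = (3 * x2 - 4 * x1 + x0) / (2 * dt)     # backward-difference velocity
    a = (x2 - 2 * x1 + x0) / dt ** 2          # backward-difference acceleration
    return x2 + v * tau + 0.5 * a * tau ** 2

# Hypothetical 50 Hz command signal and a 10 ms actuator delay.
dt, tau = 0.02, 0.01
t = np.arange(0.0, 1.0, dt)
cmd = np.sin(2 * np.pi * 2 * t)
pred = compensate(cmd[:10], dt, tau)
print(pred, np.sin(2 * np.pi * 2 * (t[9] + tau)))   # prediction vs. truth
```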