Weighting Factors


Selected Abstracts


Hybrid identification of fuzzy rule-based models

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2002
Sung-Kwun Oh
In this study, we propose a hybrid identification algorithm for a class of fuzzy rule-based systems. The rule-based fuzzy modeling addresses structure optimization and parameter identification, combining fuzzy inference methods with a hybrid structure that draws on two optimization techniques for nonlinear systems. Two types of fuzzy inference are considered: simplified and linear inference. The proposed hybrid identification algorithm combines genetic algorithms with an improved complex method. The genetic algorithms determine the initial parameters of the membership functions in the premise part of the fuzzy rules; the improved complex method (in essence a powerful auto-tuning algorithm) then fine-tunes the parameters of the respective membership functions. An aggregate performance index with a weighting factor is proposed to balance the performance of the fuzzy model on the training and testing data. Numerical examples are included to evaluate the performance of the proposed model, and the results are contrasted with those of fuzzy models in the literature. © 2002 John Wiley & Sons, Inc. [source]
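The abstract does not spell out the aggregate index, but its usual form is a convex combination of training and testing error controlled by a weighting factor. A minimal sketch under that assumption (the function name, θ, and the use of mean squared error are illustrative, not taken from the paper):

```python
import numpy as np

def aggregate_performance_index(y_train, yhat_train, y_test, yhat_test, theta=0.5):
    """Convex combination of training and testing error.

    theta is the weighting factor: theta = 1 scores only the training fit,
    theta = 0 only the testing fit. The exact index used by Oh (2002) is not
    given in the abstract; mean squared error is assumed here.
    """
    pi_train = np.mean((np.asarray(y_train) - np.asarray(yhat_train)) ** 2)
    pi_test = np.mean((np.asarray(y_test) - np.asarray(yhat_test)) ** 2)
    return theta * pi_train + (1.0 - theta) * pi_test
```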


A validated method for the determination of nicotine, cotinine, trans-3′-hydroxycotinine, and norcotinine in human plasma using solid-phase extraction and liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry

JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 6 2006
Insook Kim
Abstract A liquid chromatographic-mass spectrometric method for the simultaneous determination of nicotine, cotinine, trans-3′-hydroxycotinine, and norcotinine in human plasma was developed and validated. Analytes and deuterated internal standards were extracted from human plasma using solid-phase extraction and analyzed by liquid chromatography/atmospheric pressure chemical ionization-mass spectrometric detection with selected ion monitoring (SIM). Limits of detection and quantification were 1.0 and 2.5 ng/ml, respectively, for all analytes. Linearity ranged from 2.5 to 500 ng/ml of human plasma using a weighting factor of 1/x; correlation coefficients for the calibration curves were > 0.99. Intra- and inter-assay precision and accuracy were < 15.0%. Recoveries were 108.2–110.8% for nicotine, 95.8–108.7% for cotinine, 90.5–99.5% for trans-3′-hydroxycotinine, and 99.5–109.5% for norcotinine. The method was also partially validated in bovine serum, owing to the difficulty of obtaining nicotine-free human plasma for the preparation of calibrators and quality control (QC) samples. This method proved to be robust and accurate for the quantification of nicotine, cotinine, trans-3′-hydroxycotinine, and norcotinine in human plasma collected in clinical studies of acute nicotine effects on brain activity and on the development of neonates of maternal smokers. Copyright © 2006 John Wiley & Sons, Ltd. [source]
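The 1/x weighting gives low-concentration calibrators more influence on the fit, which is why it is common for bioanalytical calibration curves spanning two decades. A minimal numpy sketch of such a weighted linear calibration (the calibrator data are invented for illustration):

```python
import numpy as np

# Hypothetical calibrators across the validated range (ng/ml) and responses
conc = np.array([2.5, 10.0, 25.0, 100.0, 250.0, 500.0])
response = np.array([0.012, 0.051, 0.124, 0.498, 1.26, 2.49])

# np.polyfit minimizes sum((w_i * (y_i - f(x_i)))**2), so a 1/x weighting
# on the squared residuals corresponds to w_i = 1/sqrt(x_i).
slope, intercept = np.polyfit(conc, response, deg=1, w=1.0 / np.sqrt(conc))

# Correlation coefficient of the calibration data, for comparison with the
# > 0.99 acceptance criterion quoted in the abstract.
r = np.corrcoef(conc, response)[0, 1]
print(f"y = {slope:.5f}x + {intercept:.5f}, r = {r:.4f}")
```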


LC–ESI-MS/MS analysis for the quantification of morphine, codeine, morphine-3-β-D-glucuronide, morphine-6-β-D-glucuronide, and codeine-6-β-D-glucuronide in human urine

JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 11 2005
Constance M. Murphy
Abstract A liquid chromatographic-electrospray ionization-tandem mass spectrometric method for the quantification of the opiates morphine, codeine, and their metabolites morphine-3-β-D-glucuronide (M-3-G), morphine-6-β-D-glucuronide (M-6-G), and codeine-6-β-D-glucuronide (C-6-G) in human urine has been developed and validated. Identification and quantification were based on the following transitions: m/z 286 → 201 and 229 for morphine, 300 → 215 and 243 for codeine, 644 → 468 for M-3-G, 462 → 286 for M-6-G, and 476 → 300 for C-6-G. Calibration by linear regression analysis utilized deuterated internal standards and a weighting factor of 1/x. The method was accurate and precise across a linear dynamic range of 25.0 to 4000.0 ng/ml. Pretreatment of urine specimens using solid-phase extraction was sufficient to limit matrix suppression to less than 40% for all five analytes. The method proved to be suitable for the quantification of morphine, codeine, and their metabolites in urine specimens collected from opioid-dependent participants enrolled in a methadone maintenance program. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Bi-criteria optimal control of redundant robot manipulators using LVI-based primal-dual neural network

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 3 2010
Binghuang Cai
Abstract In this paper, a bi-criteria weighting scheme is proposed for the optimal motion control of redundant robot manipulators. To diminish the discontinuity phenomenon of the pure infinity-norm velocity minimization (INVM) scheme, the proposed bi-criteria redundancy-resolution scheme combines the minimum-kinetic-energy scheme and the INVM scheme via a weighting factor. Physical constraints such as joint-angle limits and joint-velocity limits can also be incorporated simultaneously into the scheme formulation. The optimal kinematic control scheme can finally be reformulated as a quadratic programming (QP) problem. As the real-time QP solver, a primal-dual neural network (PDNN) based on linear variational inequalities (LVI) is developed; it has a simple piecewise-linear structure and converges globally and exponentially to optimal solutions. Since the LVI-based PDNN is matrix-inversion-free, it has higher computational efficiency than dual neural networks. Computer simulations based on the PUMA560 manipulator illustrate the validity and advantages of the bi-criteria neural optimal motion-control scheme for redundant robots. Copyright © 2009 John Wiley & Sons, Ltd. [source]
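The paper's own solver is the LVI-based PDNN; purely to show what the bi-criteria weighting does, here is a sketch that poses the same kind of weighted QP with a generic solver (cvxpy). The Jacobian, inertia matrix, task velocity, and limits are invented placeholders, and the infinity-norm term uses the standard auxiliary-variable trick:

```python
import cvxpy as cp
import numpy as np

# Invented 3-DOF arm data; a real scheme would update these each control step.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 0.8, 0.4]])      # task Jacobian (2x3: one redundant DOF)
M = np.diag([2.0, 1.5, 0.5])         # joint-space inertia matrix
xdot = np.array([0.1, -0.05])        # desired end-effector velocity
qdot_max = 1.0                       # symmetric joint-velocity limit
alpha = 0.7                          # weighting factor between the two criteria

qdot = cp.Variable(3)                # joint velocities
s = cp.Variable()                    # auxiliary bound: s >= |qdot_i| for all i

# Bi-criteria cost: alpha * kinetic energy + (1 - alpha) * (infinity norm)^2.
cost = alpha * cp.quad_form(qdot, M) + (1 - alpha) * cp.square(s)
constraints = [J @ qdot == xdot,          # achieve the task velocity
               cp.abs(qdot) <= s,         # s upper-bounds the infinity norm
               cp.abs(qdot) <= qdot_max]  # physical joint-velocity limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print("qdot =", qdot.value)
```

Sweeping alpha from 1 toward 0 trades the smooth minimum-energy solution for the INVM solution, which is exactly the discontinuity-smoothing role the weighting factor plays in the scheme.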


A simplified surface area calculation and zoning model for energy performance assessment of buildings (Vereinfachtes Flächenerfassungsmodell für Mehrzonenbilanzen)

BAUPHYSIK, Issue 3 2009
Markus Lichtmeß Dipl.-Ing.
Abstract For energy performance assessment according to DIN V 18599, buildings must be divided into zones according to their use. The building envelope is partitioned by the same criteria and its segments are assigned to the individual zones for further calculation. In engineering practice, dividing up the internal zone-enclosing surfaces and the external building envelope involves considerable effort: roughly 50% of the working time is spent on zoning and on determining these surface-area and component properties. To reduce this effort, a method has been developed that captures the building envelope much as in a single-zone model: the envelope is allocated to the zones in a simplified way, so that the actual calculation can proceed as a multi-zone balance. This also brings substantial advantages for dimensioning and optimizing the downstream HVAC and lighting systems. In this simplified procedure, the thermal envelope areas are distributed in proportion to zone size and can be adjusted or corrected by a weighting scheme. Studies on several buildings have shown that the area distribution achieves good accuracy, provided an "intelligent" allocation via weighting factors is used. Applying the simplifications saves about 30% of the time, and the saving tends to be larger for complex buildings with many zones. The method allows all component areas to be re-edited in detail at zone level, so the building model can be refined as planning proceeds; the calculation can thus be made progressively more precise over the course of a project, improving both accuracy and the scope for optimization. These simplifications are intended for future use in the Luxembourg energy saving ordinance (EnEV) for the DIN V 18599 energy performance assessment of new non-residential buildings. [source]
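The abstract describes distributing envelope area in proportion to zone size, corrected by weighting factors. A minimal sketch of that allocation rule (the zone data and weights are invented; the actual DIN V 18599 procedure is more involved):

```python
def allocate_envelope(total_envelope_area, zone_floor_areas, weights=None):
    """Split the building envelope among zones in proportion to weighted
    zone size: share_i = w_i * A_i / sum_j(w_j * A_j).

    weights defaults to 1.0 per zone (pure size-proportional split); the
    weighting factors let the planner correct zones that have more or less
    envelope contact than their floor area alone suggests.
    """
    if weights is None:
        weights = [1.0] * len(zone_floor_areas)
    weighted = [w * a for w, a in zip(weights, zone_floor_areas)]
    total = sum(weighted)
    return [total_envelope_area * wa / total for wa in weighted]

# Invented example: 600 m2 of envelope over three zones of 100/200/50 m2,
# with the last zone up-weighted because it sits in a corner of the building.
print(allocate_envelope(600.0, [100.0, 200.0, 50.0], weights=[1.0, 1.0, 1.5]))
```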


Metric spaces in NMR crystallography

CONCEPTS IN MAGNETIC RESONANCE, Issue 4 2009
David M. Grant
Abstract The anisotropic character of the chemical shift can be measured by experiments that provide shift-tensor values; comparing these experimental components, obtained from microcrystalline powders, with 3D nuclear shielding tensor components calculated with quantum chemistry yields structural models of the analyzed molecules. The use of a metric tensor for evaluating the mean squared deviations, d², between two or more tensors provides a statistical way to verify the molecular structure governing the theoretical shielding components. The sensitivity of the method is comparable with diffraction methods for the heavier organic atoms (i.e., C, O, N, etc.) but considerably better for the positions of H atoms. Thus, the method is especially powerful for H-bond structure, the position of water molecules in biomolecular species, and other structural features in which protons are important. Unfortunately, the traditional Cartesian tensor components appear as reducible metric representations and lack the orthogonality of irreducible icosahedral and irreducible spherical tensors, both of which are also easy to normalize. Metrics give weighting factors that carry important statistical significance in a structure determination. Details of the mathematical analysis are presented, and examples are given to illustrate why nuclear magnetic resonance is rapidly assuming an important synergistic relationship with diffraction methods (X-ray, neutron scattering, and high-energy synchrotron irradiation). © 2009 Wiley Periodicals, Inc. Concepts Magn Reson Part A 34A: 217–237, 2009. [source]
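As a hedged illustration of the distance idea only (not the authors' icosahedral machinery), the mean squared deviation between two shift tensors reduces, for raw Cartesian components, to an average of squared component differences:

```python
import numpy as np

def mean_squared_tensor_deviation(t_exp, t_calc):
    """d^2 between an experimental and a calculated 3x3 shift tensor, taken
    here simply as the mean squared Cartesian component deviation. Grant's
    analysis works instead in irreducible (icosahedral/spherical)
    representations, whose orthogonality gives the components proper
    statistical weighting; this sketch only conveys the metric idea.
    """
    t_exp, t_calc = np.asarray(t_exp), np.asarray(t_calc)
    return float(np.mean((t_exp - t_calc) ** 2))

# Invented tensors (ppm): a measured tensor vs. a quantum-chemical prediction.
t_measured = np.array([[150.0, 10.0, 0.0], [10.0, 120.0, 5.0], [0.0, 5.0, 90.0]])
t_predicted = np.array([[148.0, 11.0, 0.5], [11.0, 122.0, 4.0], [0.5, 4.0, 91.0]])
print(mean_squared_tensor_deviation(t_measured, t_predicted))
```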


A computer program (WDTSRP) designed for computation of sand drift potential (DP) and plotting sand roses

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6 2007
W. A. Saqqa
Abstract Wind Data Tabulator and Sand Rose Plotter (WDTSRP) is an interactive computer program developed for estimating the sand transport potential of winds in barren sandy deserts. The Fryberger (1979) formula for determining sand drift potential (DP) was adopted to create the program. WDTSRP computes weighting factors (WFs), frequency of wind-speed occurrence (t), drift potential (DP), resultant drift potential (RDP), and directional variability of winds (DV), and plots sand roses. The program is built around a simple system of options and dialogue boxes that allows users to input and handle data easily and systematically. Copyright © 2006 John Wiley & Sons, Ltd. [source]
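Fryberger's (1979) approach weights each wind-speed class by a factor proportional to V²(V − Vt), with V the wind speed and Vt the impact threshold below which no sand moves. A sketch of the core computation (the threshold value, units, and sample wind record are illustrative assumptions):

```python
import math

def drift_potential(wind_records, v_threshold=12.0):
    """Fryberger-style sand-rose computation.

    wind_records: list of (direction_deg, speed_knots, frequency_percent).
    Returns (DP, RDP, RDP/DP). Each wind class contributes its weighting
    factor V^2 * (V - Vt) / 100 times its frequency t; speeds at or below
    the threshold move no sand. RDP/DP is often reported as an index of
    directional variability (near 1 means nearly unidirectional winds).
    """
    dp = 0.0
    east = north = 0.0
    for direction, v, t in wind_records:
        if v <= v_threshold:
            continue
        contribution = v * v * (v - v_threshold) / 100.0 * t
        dp += contribution
        # Resolve each contribution into vector components for the RDP.
        east += contribution * math.sin(math.radians(direction))
        north += contribution * math.cos(math.radians(direction))
    rdp = math.hypot(east, north)
    return dp, rdp, (rdp / dp if dp else 0.0)

# Invented wind summary: (direction, knots, % of observations).
print(drift_potential([(0, 15, 10.0), (45, 20, 5.0), (180, 14, 8.0)]))
```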


Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Peiliang Xu
SUMMARY The method of generalized cross-validation (GCV) has been widely used to determine the regularization parameter, because the criterion minimizes the average predicted residuals of measured data and depends solely on the data. This data-driven advantage is valid only if the variance–covariance matrix of the data can be represented as the product of a given positive definite matrix and a scalar unknown noise variance. In practice, important geophysical inverse ill-posed problems have often been solved by combining different types of data, in which case the stochastic model of the measurements contains a number of different unknown variance components. Although the weighting factors, or equivalently the variance components, have been shown to significantly affect joint inversion results of geophysical ill-posed problems, they have been either assumed known or chosen empirically. No solid statistical foundation has been available to correctly determine the weighting factors of different types of data in joint geophysical inversion. We extend the GCV method to accommodate both the regularization parameter and the variance components. The extended version of GCV essentially consists of two steps: one estimates the variance components with the regularization parameter fixed, and the other determines the regularization parameter by the GCV method with the variance components fixed. We simulate two examples: a purely mathematical integral equation of the first kind, modified from the first example of Phillips (1962), and a typical geophysical example of downward continuation to recover gravity anomalies on the surface of the Earth from satellite measurements. Based on these two simulated examples, we extensively compare the iterative GCV method with existing methods; the comparisons show that the method correctly recovers the unknown variance components and determines the regularization parameter. In other words, our method lets the data speak for themselves, decide the correct weighting factors of different types of geophysical data, and determine the regularization parameter. In addition, we derive an unbiased estimator of the noise variance by correcting the biases of the regularized residuals, and give a simplified formula to save computation time. The two new estimators of the noise variance are compared with six existing methods through numerical simulations, which show that the two new estimators perform as well as Wahba's estimator for highly ill-posed problems and outperform the existing methods for moderately ill-posed problems. [source]
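For the regularization-parameter step, the standard GCV criterion for Tikhonov regularization minimizes n‖(I − H(λ))y‖² / [tr(I − H(λ))]², where H(λ) = A(AᵀA + λI)⁻¹Aᵀ is the influence matrix. A minimal SVD-based sketch of that single fixed-variance-component step (the iterative variance-component extension of the paper is not reproduced here; the test system is invented):

```python
import numpy as np

def gcv_lambda(A, y, lambdas):
    """Pick the Tikhonov regularization parameter by generalized
    cross-validation. Via the SVD A = U diag(s) V^T, the influence matrix
    acts through filter factors f_i = s_i^2 / (s_i^2 + lam), so the GCV
    score can be evaluated cheaply for many candidate lambdas.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y                      # data in the left singular basis
    n = len(y)
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)         # filter factors
        # ||(I - H)y||^2 splits into the filtered part and the part of y
        # outside the range of U.
        resid2 = np.sum(((1 - f) * beta) ** 2) + (y @ y - beta @ beta)
        denom = (n - np.sum(f)) ** 2    # [trace(I - H)]^2
        score = n * resid2 / denom
        if best is None or score < best[0]:
            best = (score, lam)
    return best[1]

# Invented mildly ill-posed system for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21) ** 2)
y = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(50)
print("lambda =", gcv_lambda(A, y, np.logspace(-8, 1, 40)))
```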


Vector Hankel transform analysis of a tunable circular microstrip patch

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 5 2005
T. Fortaki
Abstract In this paper, a rigorous analysis of the tunable circular microstrip patch is performed using a dyadic Green's function formulation. To make the theoretical formulation more general, and hence valid for various antenna structures (not limited to the tunable microstrip patch), the dyadic Green's function is derived with the patch assumed to be embedded in a multilayered dielectric substrate. A very efficient technique for deriving the dyadic Green's function in the vector Hankel transform domain is proposed. Using the vector Hankel transform, the mixed boundary value problem is reduced to a set of vector dual integral equations. Galerkin's method is then applied to solve the integral equations, using two sets of disk-current expansions. One set is based on the complete set of orthogonal modes of the magnetic cavity; the other consists of combinations of Chebyshev polynomials with weighting factors that incorporate the edge condition. Convergent results for these two sets of disk-current expansions are obtained with a small number of basis functions. The calculated resonant frequencies and quality factors are compared with experimental data and shown to be in good agreement. Finally, numerical results for the effect of air-gap tuning on the resonant frequency and half-power bandwidth are also presented. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Pyrolysis of tetra pack in municipal solid waste

JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 8 2001
Chao-Hsiung Wu
Abstract The pyrolysis of tetra pack in nitrogen was investigated with a thermogravimetric analysis (TGA) reaction system. Pyrolysis kinetics experiments for tetra pack and its main components, kraft paper and low-density poly(ethene) (LDPE), were carried out at heating rates (β) of 5.2, 12.8, and 21.8 K min⁻¹. The results indicated that a one-reaction model and a two-reaction model describe the pyrolysis of LDPE and kraft paper, respectively. The total reaction rate of tetra pack can be expressed as the sum of the individual rates of LDPE and kraft paper, each multiplied by its weighting factor. Pyrolysis product experiments were carried out at a constant heating rate of 5.2 K min⁻¹. The gaseous products were collected at room temperature (298 K) and analyzed by gas chromatography (GC). The residues were collected at significant pyrolysis reaction temperatures and analyzed by an elemental analyzer (EA) and X-ray powder diffraction (XRPD). The accumulated masses and instantaneous concentrations of the gaseous products were obtained under the experimental conditions. The major gaseous products included non-hydrocarbons (CO2, CO, and H2O) and hydrocarbons (C1–C5). The XRPD analysis indicated that pure aluminum foil could be recovered from the final residues. The proposed model is supported by the pyrolysis mechanisms and the product distribution. © 2001 Society of Chemical Industry [source]
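The additivity assumption, where the composite's rate is a weighting-factor sum of its components' rates, can be sketched as follows (the Arrhenius parameters and mass-fraction weights are invented, not the paper's fitted values):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def first_order_rate(T, x_remaining, A, Ea):
    """Single first-order Arrhenius pyrolysis reaction: dX/dt = A exp(-Ea/RT) X."""
    return A * np.exp(-Ea / (R * T)) * x_remaining

def tetra_pack_rate(T, x_ldpe, x_paper, w_ldpe=0.25, w_paper=0.75):
    """Composite rate as the weighting-factor sum of component rates.

    Weights (mass fractions) and kinetic parameters are illustrative only;
    kraft paper actually required a two-reaction model in the paper, which
    this sketch compresses into one reaction for brevity.
    """
    r_ldpe = first_order_rate(T, x_ldpe, A=1.0e13, Ea=2.0e5)
    r_paper = first_order_rate(T, x_paper, A=1.0e10, Ea=1.5e5)
    return w_ldpe * r_ldpe + w_paper * r_paper

print(tetra_pack_rate(650.0, x_ldpe=0.9, x_paper=0.5))
```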


COMPROMISE PROGRAMMING METHODOLOGY FOR DETERMINING INSTREAM FLOW UNDER MULTIOBJECTIVE WATER ALLOCATION CRITERIA,

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 5 2006
Jenq-Tzong Shiau
ABSTRACT: This paper presents a quantitative assessment framework for determining the instream flow under multiobjective water allocation criteria. The Range of Variability Approach (RVA) is employed to evaluate the hydrologic alterations caused by flow diversions, and the resulting degrees of alteration for the 32 Indicators of Hydrologic Alteration (IHAs) are integrated into an overall degree of hydrologic alteration. By including this index in the objective function, it is possible to optimize the water allocation scheme using compromise programming to minimize the hydrologic alteration and water supply shortages. The proposed methodology is applied to a case study of the Kaoping diversion weir in Taiwan. The results indicate that the current release of 9.5 m³/s as a minimum instream flow does not effectively mitigate the highly altered hydrologic regime. Increasing the instream flow would reduce the overall degree of hydrologic alteration; however, this is achieved at the cost of increasing water supply shortages. The effects on the optimal instream flow of the weighting factors assigned to water supplies and natural flow variations are also investigated. With equal weighting assigned to the multiple objectives, the optimal instream flow of 26 m³/s leads to a less severely altered hydrologic regime, especially for low-flow characteristics, thereby providing better protection of the riverine environment. [source]
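Compromise programming selects the alternative closest, in a weighted Lp metric, to the ideal point of the competing objectives. A hedged sketch with invented shortage and alteration curves (not the Kaoping data; the curve shapes and weights are illustrative only):

```python
import numpy as np

def compromise_index(objectives, ideal, worst, weights, p=2.0):
    """Weighted Lp distance from the ideal point, with each objective
    normalized by its ideal-to-worst range (standard compromise programming).
    """
    d = (np.asarray(objectives) - np.asarray(ideal)) / (np.asarray(worst) - np.asarray(ideal))
    return float(np.sum(np.asarray(weights) * np.abs(d) ** p) ** (1.0 / p))

# Invented trade-off: higher instream flow lowers hydrologic alteration but
# raises water-supply shortage. Both objectives are scaled to [0, 1].
flows = np.linspace(5, 40, 36)                 # candidate instream flows, m3/s
alteration = 1.0 / (1.0 + 0.08 * flows)        # decreasing in flow
shortage = (flows / 40.0) ** 1.5               # increasing in flow

scores = [compromise_index((a, s), ideal=(0, 0), worst=(1, 1), weights=(0.5, 0.5))
          for a, s in zip(alteration, shortage)]
print("optimal instream flow ~", flows[int(np.argmin(scores))], "m3/s")
```

With equal weights, the minimizer sits where neither objective dominates; shifting weight toward water supply pushes the optimum toward lower instream flows, mirroring the sensitivity analysis described in the abstract.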


Relationship Between Aboveground Biomass and Multiple Measures of Biodiversity in Subtropical Forest of Puerto Rico

BIOTROPICA, Issue 3 2010
Heather D. Vance-Chalcraft
ABSTRACT Anthropogenic activities have accelerated the rate of global biodiversity loss, making it more important than ever to understand the structure of biodiversity hotspots. One current focus is the relationship between species richness and aboveground biomass (AGB) in a variety of ecosystems. Nonetheless, species diversity, evenness, rarity, and dominance represent other critical attributes of biodiversity and may have associations with AGB that are markedly different from that of species richness. Using data from large trees in four environmentally similar sites in the Luquillo Experimental Forest of Puerto Rico, we determined the shape and strength of the relationships between each of five measures of biodiversity (species richness, Simpson's diversity, Simpson's evenness, rarity, and dominance) and AGB. We quantified these measures of biodiversity using either proportional biomass or proportional abundance as weighting factors. Three of the four sites had a unimodal relationship between species richness and AGB, with only the most mature site evincing a positive, linear relationship. The differences between the mature site and the other sites, as well as the differences between our richness–AGB relationships and those found at other forest sites, highlight the crucial role that prior land use and severe storms have on this forest community. Although the shape and strength of the relationships differed greatly among measures of biodiversity and among sites, the strongest relationships within each site always involved richness or evenness. Abstract in Spanish is available at http://www.blackwell-synergy.com/loi/btp [source]
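Simpson's diversity and evenness depend only on the proportional weights pᵢ — here either proportional abundance or proportional biomass, as in the study. A minimal sketch under one common convention (the community data are invented, and other evenness formulations exist):

```python
import numpy as np

def simpson_diversity_and_evenness(weights):
    """Simpson's diversity 1 - sum(p_i^2) and Simpson's evenness
    (1 / sum(p_i^2)) / S, where p_i is a species' proportional abundance or
    proportional biomass and S is species richness. This is one common
    convention, not necessarily the exact form used in the paper.
    """
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                      # normalize to proportions
    d2 = np.sum(p ** 2)                  # Simpson's concentration
    richness = len(p)
    return 1.0 - d2, (1.0 / d2) / richness

# Invented plot: per-species aboveground biomass (Mg) for five species.
print(simpson_diversity_and_evenness([12.0, 3.5, 1.2, 0.8, 0.5]))
```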