Appropriate Parameter

Selected Abstracts


Behavior of Corophium volutator (Crustacea, Amphipoda) exposed to the water-accommodated fraction of oil in water and sediment

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 3 2008
Cornelia Kienle
Abstract We investigated the short-term effects of the water-accommodated fraction (WAF) of weathered Forties crude oil on the behavior of Corophium volutator in the Multispecies Freshwater Biomonitor® (MFB). Aqueous exposure to 25 and 50% WAF induced hyperactivity accompanied by increased ventilation, whereas exposure to 100% WAF led to hypoactivity (narcosis). In a sediment exposure with 100% WAF, there was an increased tendency toward hyperactivity. In a pulse experiment, hyperactivity appeared in a majority of cases at and after a 130-min exposure to 50% WAF. Our experiments suggest that the behavior of C. volutator as measured in the MFB may be an appropriate parameter for coastal monitoring. [source]


Estimating the spatial distribution of available biomass in grazing forests with a satellite image: A preliminary study

GRASSLAND SCIENCE, Issue 2 2005
Michio Tsutsumi
Abstract We tested whether available biomass in grazing forests could be estimated by analyzing a satellite image together with field data. Our study site, in north-eastern Japan, comprised coniferous forest stands afforested in different years, multilayered coniferous forest, and deciduous broadleaf forest. Available-biomass data collected in previous studies were used to analyze a Landsat Thematic Mapper (TM) image acquired in summer, with the aim of mapping the spatial distribution of available biomass in the forest. The results suggested that the analysis should be conducted separately for the multilayered coniferous, the other coniferous, and the broadleaf forests. Regression analysis of the relationship between available biomass and each of several candidate parameters showed that the first principal component computed from the reflectances of the six Landsat TM bands was the most appropriate parameter for estimating available biomass. The answer to the question 'Can the spatial distribution of available biomass in a forest be estimated with a satellite image?' is 'Yes, in coniferous forests'. We propose a procedure for depicting a precise map of the distribution of available biomass in a forest by analysis of a satellite image. [source]
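
The band-combination step described above can be sketched numerically. The following is a minimal illustration, with synthetic data rather than the study's measurements, of regressing biomass on the first principal component of six-band reflectances:

```python
import numpy as np

# Illustrative sketch with synthetic data (not the study's measurements):
# estimate available biomass from the first principal component (PC1) of
# six-band reflectances, as the abstract's regression analysis suggests.
rng = np.random.default_rng(0)
veg = rng.uniform(0.0, 1.0, 30)                 # hypothetical vegetation amount per plot
loadings = np.array([0.10, 0.15, 0.10, 0.40, 0.30, 0.20])
bands = 0.1 + np.outer(veg, loadings) + rng.normal(0.0, 0.01, (30, 6))
biomass = 100.0 + 200.0 * veg + rng.normal(0.0, 5.0, 30)   # hypothetical field data

# PC1 of the mean-centred band reflectances (eigenvector of the covariance
# matrix with the largest eigenvalue; eigh returns eigenvalues ascending).
centred = bands - bands.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
pc1 = centred @ eigvecs[:, -1]

# Ordinary least-squares regression of biomass on the PC1 scores.
slope, intercept = np.polyfit(pc1, biomass, 1)
fitted = slope * pc1 + intercept
r2 = 1.0 - np.sum((biomass - fitted) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

With field plots as rows and bands as columns, the same two steps (PC1 scores, then a per-pixel regression) could be applied to every pixel of the image to produce the biomass map.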


Use of image analysis techniques for objective quantification of the efficacy of different hair removal methods

INTERNATIONAL JOURNAL OF COSMETIC SCIENCE, Issue 2 2007
S. Bielfeldt
In the field of consumer cosmetics for hair removal and hair growth reduction, improved quantitative methods are needed for efficacy evaluation and claim support. Optimized study designs and endpoints for comparing standard methods, such as shaving or plucking, with new methods and products, such as depilating instruments or hair-growth-reducing cosmetics, are lacking. Non-invasive image analysis, using a high-performance microscope combined with an optimized image analysis tool, was investigated as a means of assessing hair growth. In a first step, high-resolution macrophotographs of the legs of female volunteers after shaving and after plucking with cold wax were compared to observe short-term hair regrowth. In a second step, images obtained after plucking with cold wax were taken over a long-term period to determine the time after which depilated hairs reappeared on the skin surface. Using image analysis, parameters such as hair length, hair width, and hair projection area were investigated. The projection area proved to be the parameter least affected by image artifacts such as skin irregularities or low contrast due to hair color, and was therefore the most appropriate parameter for determining the time of hair regrowth. This point in time is suitable for assessing the efficacy of different hair removal methods or hair-growth-reduction treatments, by comparing the endpoint after use of the method under investigation with the endpoint after simple shaving. The closeness of hair removal and visible signs of skin irritation can be assessed as additional quantitative parameters from the same images. Discomfort and pain ratings by the volunteers complete the set of parameters required to benchmark a new hair removal method or hair-growth-reduction treatment.
Image analysis combined with high-resolution imaging techniques is a powerful tool for objectively assessing parameters such as hair length, hair width, and projection area. To achieve reliable data and reduce well-known image-analysis artifacts, it was important to optimize the technical equipment for use on human skin and to improve the image analysis by adapting the image-processing procedure to the skin characteristics of individuals, such as skin color, hair color, and skin structure. [source]
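
A projection-area measurement of the kind described can be sketched as segmentation followed by pixel counting. The image, threshold, and pixel pitch below are toy assumptions, not the authors' pipeline:

```python
import numpy as np

# Toy grayscale macrophotograph: dark hairs (low values) on lighter skin.
rng = np.random.default_rng(1)
image = rng.uniform(0.6, 0.9, size=(200, 200))   # skin background
image[50:150, 98:102] = 0.10                     # one vertical "hair", 100 px x 4 px
image[80:84, 20:180] = 0.15                      # one horizontal "hair", 4 px x 160 px

# Segment hairs by simple global thresholding; real pipelines would need
# per-subject contrast normalisation for skin and hair colour, as the
# abstract notes.
threshold = 0.4
hair_mask = image < threshold

# Projection area = number of hair pixels times the area of one pixel.
pixel_area_mm2 = 0.01 ** 2           # assumed 10 µm pixel pitch (hypothetical)
projection_area_mm2 = hair_mask.sum() * pixel_area_mm2
```

Tracking this area over time after depilation gives the regrowth curve from which the reappearance time can be read off.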


Polynomial control: past, present, and future

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2007
Vladimír Kučera
Abstract Polynomial techniques have made important contributions to systems and control theory. Engineers in industry often find polynomial and frequency-domain methods easier to use than state-equation-based techniques. Control theorists have shown that results obtained in isolation using either approach are in fact closely related. Polynomial system descriptions provide input–output models for linear systems with rational transfer functions. These models display two important system properties, namely poles and zeros, in a transparent manner. A performance specification in terms of polynomials is natural in many situations; see pole allocation techniques. A specific control system design technique, called the polynomial equation approach, was developed in the 1960s and 1970s. Its distinguishing feature is the reduction of controller synthesis to the solution of linear polynomial equations of a specific (Diophantine or Bézout) type. In most cases, control systems are designed to be stable and to meet additional specifications, such as optimality and robustness. It is therefore natural to design the system step by step: stabilization first, then the additional specifications one at a time. For this it is necessary to have all solutions of the current step available before proceeding further, which motivates the need for a parametrization of all controllers that stabilize a given plant. This result has become a key tool for the sequential design paradigm. The additional specifications are met by selecting an appropriate parameter. This is simple, systematic, and transparent. However, the strategy suffers from an excessive growth of the controller order. This article is a guided tour through polynomial control system design. The origins of the parametrization of stabilizing controllers, called the Youla–Kučera parametrization, are explained.
Standard results on reference tracking, disturbance elimination, pole placement, deadbeat control, H2 control, l1 control and robust stabilization are summarized. New and exciting applications of the Youla–Kučera parametrization are then discussed: stabilization subject to input constraints, output overshoot reduction, and fixed-order stabilizing controller design. Copyright © 2006 John Wiley & Sons, Ltd. [source]
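
The reduction of controller synthesis to a linear polynomial (Diophantine) equation a(s)x(s) + b(s)y(s) = c(s) can be illustrated with a small numerical sketch. The example below is an illustration, not taken from the article: it solves the equation through its Sylvester-matrix form for the plant 1/(s² − s), placing all closed-loop poles at s = −1:

```python
import numpy as np

def solve_diophantine(a, b, c):
    """Solve the polynomial Diophantine equation a(s)x(s) + b(s)y(s) = c(s).

    Coefficients are highest power first; assumes deg b < deg a = n and
    deg c = 2n - 1, so the minimal solution has deg x = deg y = n - 1 and
    the equation reduces to a square Sylvester-type linear system.
    """
    n = len(a) - 1
    m = n - 1
    size = 2 * n                            # number of coefficients of c
    M = np.zeros((size, 2 * (m + 1)))
    for j in range(m + 1):                  # columns for x: shifted copies of a
        M[j:j + n + 1, j] = a
    pad = size - (len(b) + m)               # align b*y with the low-order end of c
    for j in range(m + 1):                  # columns for y: shifted copies of b
        M[pad + j:pad + j + len(b), m + 1 + j] = b
    sol = np.linalg.solve(M, np.asarray(c, dtype=float))
    return sol[:m + 1], sol[m + 1:]         # x coefficients, y coefficients

# Plant 1/(s^2 - s): a = s^2 - s, b = 1; desired closed-loop polynomial
# c = (s + 1)^3, i.e. all poles at s = -1.  The controller is y(s)/x(s).
x, y = solve_diophantine([1.0, -1.0, 0.0], [1.0], [1.0, 3.0, 3.0, 1.0])
```

With these inputs the solver returns x = s + 4 and y = 7s + 1, so the pole-placing controller is (7s + 1)/(s + 4); substituting back, (s² − s)(s + 4) + (7s + 1) = s³ + 3s² + 3s + 1 as required.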


Cotinine as a biomarker of tobacco exposure: Development of a HPLC method and comparison of matrices

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 4-5 2010
Guilherme Oliveira Petersen
Abstract Tobacco dependence affects one-third of the world population and is the second leading cause of death worldwide. Cotinine, a major metabolite of nicotine, is the most appropriate parameter to evaluate tobacco exposure and smoking status, owing to its greater stability and longer half-life compared to nicotine. The procedure involves liquid–liquid extraction, separation on a RP column (Zorbax® XDB C8) with an isocratic mobile phase (0.5 mL/min of water–methanol–sodium acetate (0.1 M)–ACN (50:15:25:10, v/v/v/v), with 1.0 mL of citric acid (0.034 M) and 5.0 mL of triethylamine per liter), and HPLC-UV detection (261 nm). The analytical procedure proved to be sensitive, selective, precise, accurate and linear (r > 0.99) in the range of 5–500.0 ng/mL for cotinine. 2-Phenylimidazole was used as the internal standard. The LOD was 0.18 ng/mL and the LOQ was 5.0 ng/mL. All samples from smoking volunteers were collected simultaneously to enable a comparison between serum, plasma, and urine. The urinary cotinine levels were normalized by creatinine and by urine density. A significant correlation was found (p < 0.01) between all matrices. The results indicate that urine normalization by creatinine or density is unnecessary. The method is considered reliable for determining cotinine in serum and plasma of smokers and in environmental tobacco smoke exposure. [source]
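
The calibration underlying such a method can be sketched as a least-squares fit of the peak-area ratio (analyte to internal standard) against concentration; the numbers below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical calibration standards for cotinine (ng/mL) and the
# corresponding peak-area ratios to the internal standard (2-phenylimidazole).
conc = np.array([5.0, 25.0, 50.0, 100.0, 250.0, 500.0])
ratio = np.array([0.021, 0.102, 0.199, 0.405, 1.010, 2.005])

# Least-squares calibration line: ratio = slope * conc + intercept.
slope, intercept = np.polyfit(conc, ratio, 1)
predicted = slope * conc + intercept
ss_res = np.sum((ratio - predicted) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r = np.sqrt(1.0 - ss_res / ss_tot)      # correlation coefficient of the fit

# Concentration of an unknown sample read back from the calibration line.
unknown_ratio = 0.62
unknown_conc = (unknown_ratio - intercept) / slope
```

A linearity criterion such as the abstract's r > 0.99 would be checked on this fitted line before quantifying unknowns.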


The way composition affects martensitic transformation temperatures of Ni–Mn–Ga Heusler alloys

PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 3 2007
X. Q. Chen
Abstract A systematic substitution of Ge, Si, C and Co for Ga in non-stoichiometric Ni–Mn–Ga alloys was performed. The relationship between the contents of the different elements (Ni, Mn, Ga, Ge, Si, C, Co, In) and the martensitic transformation temperature (Ms) was studied in detail for the present alloys, together with data collected from a variety of sources. Ms is found to be highly sensitive to composition. The size factor and the electron concentration are usually invoked to explain how composition influences Ms in Ni–Mn–Ga alloys. Here, linear regression analysis suggests that the electron density may be the most appropriate parameter for describing how composition influences Ms, compared with the size factor and the electron concentration. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
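
The regression comparison described can be sketched as fitting Ms against each candidate descriptor and ranking the fits by the coefficient of determination. The data below are synthetic stand-ins, not the collected alloy data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

# Synthetic alloy data: three candidate descriptors and measured Ms (K).
# Here the electron density is constructed as the best predictor by design,
# simply to illustrate the ranking procedure.
rng = np.random.default_rng(2)
electron_density = np.linspace(0.28, 0.34, 20)
ms = 50.0 + 800.0 * electron_density + rng.normal(0.0, 2.0, 20)
electron_conc = electron_density * 10 + rng.normal(0.0, 0.2, 20)  # noisier proxy
size_factor = rng.normal(1.0, 0.02, 20)                           # uncorrelated here

scores = {
    'electron density': r_squared(electron_density, ms),
    'electron concentration': r_squared(electron_conc, ms),
    'size factor': r_squared(size_factor, ms),
}
best = max(scores, key=scores.get)
```

The descriptor with the highest R² is taken as the most appropriate parameter, which is the form of argument the abstract makes for the electron density.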


Efficiency of base isolation systems in structural seismic protection and energetic assessment

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2003
Giuseppe Carlo Marano
Abstract This paper concerns the seismic response of structures isolated at the base by means of High Damping Rubber Bearings (HDRB). The analysis is performed using a stochastic approach, and a Gaussian zero-mean filtered non-stationary stochastic process is used to model the seismic acceleration acting at the base of the structure. More precisely, the generalized Kanai–Tajimi model is adopted to describe the non-stationary amplitude and frequency characteristics of the seismic motion. The hysteretic differential Bouc–Wen model (BWM) is adopted to take into account the non-linear constitutive behaviour of both the base isolation device and the structure. Moreover, the stochastic linearization method in the time domain is adopted to estimate the statistical moments of the non-linear system response in the state space. The non-linear differential equation of the response covariance matrix is then solved by an iterative procedure which updates the coefficients of the equivalent linear system at each step and solves the response covariance matrix equation. After the system response variance is estimated, a sensitivity analysis is carried out. The final aim of the research is to assess the real capacity of base isolation devices to protect structures from seismic actions by avoiding a non-linear response, with its associated large plastic displacements, and thereby to limit the related damage phenomena in structural and non-structural elements. To attain this objective, the stochastic response of a non-linear n-dof shear-type base-isolated building is analysed; the constitutive law of both the structure and the base devices is described, as noted above, by the BWM, using parameters able to suitably characterize an ordinary building and the base isolators considered in the study.
The protection level offered to the structure by the base isolators is then assessed by evaluating the reduction both of the displacement response and the hysteretic dissipated energy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
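
The Bouc–Wen hysteresis variable obeys ż = Aẋ − β|ẋ||z|^(n−1)z − γẋ|z|^n, and the restoring force mixes an elastic share with the hysteretic share z. A minimal forward-Euler sketch, with illustrative parameters not taken from the paper, of the force under an imposed displacement history:

```python
import numpy as np

# Bouc-Wen hysteresis variable z driven by an imposed displacement history.
# Illustrative model parameters (not the paper's identified values).
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0
dt = 1e-3
t = np.arange(0.0, 4.0, dt)
x = 0.05 * np.sin(2.0 * np.pi * t)      # imposed displacement (m)
xdot = np.gradient(x, dt)

z = np.zeros_like(t)
for i in range(len(t) - 1):
    zdot = (A * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n - 1) * z[i]
            - gamma * xdot[i] * abs(z[i]) ** n)
    z[i + 1] = z[i] + dt * zdot         # forward Euler step

# Restoring force of the isolator: elastic share alpha plus hysteretic share.
k, alpha = 1.0e6, 0.15                  # assumed stiffness (N/m), post-yield ratio
force = alpha * k * x + (1.0 - alpha) * k * z
```

Plotting force against x would trace the hysteresis loop whose enclosed area is the dissipated energy referred to in the abstract; the stochastic linearization step replaces this non-linear z-equation with an equivalent linear one whose coefficients are updated iteratively.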


Routine clinical brain MRI sequences for use at 3.0 Tesla

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 1 2005
Hanzhang Lu PhD
Abstract Purpose: To establish image parameters for some routine clinical brain MRI pulse sequences at 3.0 T, with the goal of maintaining, as far as possible, the well-characterized 1.5-T image contrast characteristics for daily clinical diagnosis, while benefiting from the increased signal to noise at higher field. Materials and Methods: A total of 10 healthy subjects were scanned on 1.5-T and 3.0-T systems for T1 and T2 relaxation time measurements of major gray and white matter structures. The relaxation times were then used to determine 3.0-T acquisition parameters for spin-echo (SE) T1-weighted, fast spin echo (FSE) or turbo spin echo (TSE) T2-weighted, and fluid-attenuated inversion recovery (FLAIR) pulse sequences that give image characteristics comparable to 1.5 T, to facilitate routine clinical diagnostics. The routine clinical sequences were applied in 10 subjects: five normal subjects and five patients with various pathologies. Results: T1 and T2 relaxation times were, respectively, 14% to 30% longer and 12% to 19% shorter at 3.0 T than at 1.5 T, depending on the region evaluated. With appropriate parameters, routine clinical images acquired at 3.0 T showed image characteristics similar to those obtained at 1.5 T, but with higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), which can be used to reduce the number of averages and scan times. Recommended imaging parameters for these sequences are provided. Conclusion: When parameters are adjusted for changes in relaxation rates, routine clinical scans at 3.0 T can provide similar image appearance to 1.5 T, but with superior image quality and/or increased speed. J. Magn. Reson. Imaging 2005;22:13–22. © 2005 Wiley-Liss, Inc. [source]
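
One concrete example of such a relaxation-driven parameter adjustment: for an inversion-recovery sequence with TR much longer than T1, the inversion time that nulls a tissue is TI = T1·ln 2, so longer T1 values at 3.0 T push the null point to longer TI. The T1 values below are illustrative assumptions, not the paper's measurements:

```python
import math

# Null-point inversion time for an inversion-recovery (FLAIR-type) sequence,
# assuming full relaxation between inversions (TR >> T1): TI = T1 * ln 2.
def null_ti(t1_ms):
    return t1_ms * math.log(2.0)

# Hypothetical CSF T1 values (ms) at the two field strengths (illustrative;
# actual values depend on the measurement method).
t1_csf_15T = 4000.0
t1_csf_30T = 4500.0

ti_15T = null_ti(t1_csf_15T)    # required TI at 1.5 T
ti_30T = null_ti(t1_csf_30T)    # longer TI needed at 3.0 T
```

The same logic, applied with the measured gray- and white-matter T1 and T2 values, is what lets the 3.0-T protocol reproduce 1.5-T-like contrast.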


Protein purification using chromatography: selection of type, modelling and optimization of operating conditions

JOURNAL OF MOLECULAR RECOGNITION, Issue 2 2009
J. A. Asenjo
Abstract To achieve a high level of purity in the purification of recombinant proteins for therapeutic or analytical application, it is necessary to use several chromatographic steps. A range of techniques is available, including anion and cation exchange (which can be carried out at different pHs), hydrophobic interaction chromatography, gel filtration and affinity chromatography. In the case of a complex mixture of partially unknown proteins or a clarified cell extract, there are many different routes one can take to choose the minimum and most efficient number of purification steps that achieve a desired level of purity (e.g. 98%, 99.5% or 99.9%). This review shows how an initial 'proteomic' characterization of the complex mixture of target protein and protein contaminants can be used to select the most efficient chromatographic separation steps to achieve a specific level of purity with a minimum number of steps. The chosen methodology was implemented in a computer-based Expert System. Two algorithms were developed: the first selects the most efficient purification method to separate a protein from its contaminants, based on the physicochemical properties of the protein product and the protein contaminants; the second predicts the number and concentration of contaminants after each separation, as well as the protein product purity. The Expert System approach was tested and validated experimentally with a mixture of four proteins, and the experimental validation was also carried out with a supernatant of Bacillus subtilis producing a recombinant β-1,3-glucanase. Once the type of chromatography is chosen, optimization of the operating conditions is essential.
Chromatographic elution curves for a three-protein mixture (α-lactalbumin, ovalbumin and β-lactoglobulin), obtained under different flow rates and ionic-strength conditions, were simulated using two different mathematical models: the Plate Model and the more fundamentally based Rate Model. Simulated elution curves were compared with experimental data not used for parameter identification. The deviation between experimental data and the simulated curves was less than 0.0189 absorbance units for the Plate Model; a slightly higher deviation (0.0252 absorbance units) was obtained with the Rate Model. To optimize operating conditions, a cost function was built that included the effect of the different production stages, namely fermentation, purification and concentration. This cost function was also successfully used to determine the fraction of product to be collected (peak cutting) in chromatography. It can be used for protein products with different characteristics and quality requirements, such as purity and yield, by choosing the appropriate parameters. Copyright © 2008 John Wiley & Sons, Ltd. [source]
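
In a plate-model description, an elution peak is commonly approximated as a Gaussian centred at the retention time t_R with standard deviation t_R/√N, where N is the number of theoretical plates. A sketch with hypothetical retention times and plate counts, not the fitted values from the study:

```python
import numpy as np

def plate_model_peak(t, t_r, n_plates, area=1.0):
    """Gaussian approximation to a plate-model elution peak, centred at the
    retention time t_r with standard deviation sigma = t_r / sqrt(N)."""
    sigma = t_r / np.sqrt(n_plates)
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((t - t_r) / sigma) ** 2)

# Hypothetical three-protein separation (retention times in minutes,
# peak areas in arbitrary absorbance units).
t = np.linspace(0.0, 30.0, 3001)
curve = (plate_model_peak(t, 8.0, 2000, area=0.8)      # alpha-lactalbumin
         + plate_model_peak(t, 14.0, 2000, area=1.0)   # ovalbumin
         + plate_model_peak(t, 21.0, 2000, area=0.6))  # beta-lactoglobulin
```

Superposing such peaks gives the simulated chromatogram that can be compared against experimental absorbance traces, and the fraction of a peak lying inside a collection window is the quantity the cost function's peak-cutting decision operates on.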


Structural design of composite nonlinear feedback control for linear systems with actuator constraint

ASIAN JOURNAL OF CONTROL, Issue 5 2010
Weiyao Lan
Abstract The performance of the composite nonlinear feedback (CNF) control law relies on the selection of the linear feedback gain and the nonlinear function. However, it is difficult to select an appropriate linear feedback gain and appropriate parameters of the nonlinear function, because the general CNF design procedure gives only simple guidelines for these selections. This paper proposes an operational design procedure based on the structural decomposition of linear systems with input saturation. The linear feedback gain is constructed from two linear gains, designed independently to stabilize the unstable zero-dynamics part and the pure-integration part of the system, respectively. By investigating the influence of these two gains on transient performance, a satisfactory linear feedback gain for the CNF control law can be designed flexibly and efficiently. Moreover, the parameters of the nonlinear function are tuned automatically by solving a minimization problem. The proposed design procedure is illustrated by applying it to the design of a tracking control law for the inverted pendulum on a cart. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
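
The structure of a CNF law, a linear gain plus a state-dependent nonlinear term that adds damping near the reference, can be sketched on a double integrator with input saturation. All gains, the matrix P and the nonlinear function below are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Minimal sketch of a CNF-type law on a double integrator with input
# saturation (gains, P and rho are illustrative, not the paper's design).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-4.0, -2.0]])             # linear gain: fast, lightly damped
P = np.array([[2.0, 0.5], [0.5, 0.5]])   # Lyapunov-type matrix (assumed)
u_max = 1.0                              # actuator limit
r = 1.0                                  # step reference for the position
x_e = np.array([[r], [0.0]])             # target equilibrium state

def rho(e, alpha=4.0, beta=3.0):
    # Nonlinear gain: nearly zero far from the target, larger near it,
    # which adds damping as the output approaches the reference.
    return -beta * np.exp(-alpha * abs(e))

x = np.zeros((2, 1))
dt = 1e-3
ys = []
for _ in range(10000):                   # simulate 10 s with forward Euler
    e = x[0, 0] - r
    u = (F @ (x - x_e) + rho(e) * (B.T @ P @ (x - x_e))).item()
    u = max(-u_max, min(u_max, u))       # saturation (actuator constraint)
    x = x + dt * (A @ x + B * u)
    ys.append(x[0, 0])
```

The parameters alpha and beta of rho are exactly the kind of quantities the paper proposes to tune automatically via a minimization problem instead of by trial and error.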