Analysis Procedure (analysis + procedure)


Kinds of Analysis Procedure

  • content analysis procedure
  • data analysis procedure
  • image analysis procedure


  • Selected Abstracts


    Structural MRI biomarkers for preclinical and mild Alzheimer's disease,

    HUMAN BRAIN MAPPING, Issue 10 2009
    Christine Fennema-Notestine
    Abstract Noninvasive MRI biomarkers for Alzheimer's disease (AD) may enable earlier clinical diagnosis and the monitoring of therapeutic effectiveness. To assess potential neuroimaging biomarkers, the Alzheimer's Disease Neuroimaging Initiative is following normal controls (NC) and individuals with mild cognitive impairment (MCI) or AD. We applied high-throughput image analysis procedures to these data to demonstrate the feasibility of detecting subtle structural changes in prodromal AD. Raw DICOM scans (139 NC, 175 MCI, and 84 AD) were downloaded for analysis. Volumetric segmentation and cortical surface reconstruction produced continuous cortical surface maps and region-of-interest (ROI) measures. The MCI cohort was subdivided into single- (SMCI) and multiple-domain MCI (MMCI) based on neuropsychological performance. Repeated measures analyses of covariance were used to examine group and hemispheric effects while controlling for age, sex, and, for volumetric measures, intracranial vault. ROI analyses showed group differences for ventricular, temporal, posterior and rostral anterior cingulate, posterior parietal, and frontal regions. SMCI and NC differed within temporal, rostral posterior cingulate, inferior parietal, precuneus, and caudal midfrontal regions. With MMCI and AD, greater differences were evident in these regions and additional frontal and retrosplenial cortices; evidence for non-AD pathology in MMCI also was suggested. Mesial temporal right-dominant asymmetries were evident and did not interact with diagnosis. Our findings demonstrate that high-throughput methods provide numerous measures to detect subtle effects of prodromal AD, suggesting early and later stages of the preclinical state in this cross-sectional sample. These methods will enable a more complete longitudinal characterization and allow us to identify changes that are predictive of conversion to AD. Hum Brain Mapp 2009. © 2009 Wiley-Liss, Inc. [source]
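
    The covariate-adjusted group comparison described above can be sketched in miniature: regress the covariate (here, intracranial vault) out of an ROI volume, then compare residual group means. All numbers and names below are hypothetical illustrations of the idea, not the ADNI pipeline.

```python
def regress_out(y, x):
    """Residuals of y after removing the linear (least-squares) effect of covariate x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return [b - my - slope * (a - mx) for a, b in zip(x, y)]

def adjusted_group_difference(volumes, vaults, labels, a, b):
    """Mean difference (group a minus group b) in vault-adjusted ROI volume."""
    resid = regress_out(volumes, vaults)
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([r for r, l in zip(resid, labels) if l == a])
            - mean([r for r, l in zip(resid, labels) if l == b]))
```

    The real analysis additionally models age, sex, and hemisphere; this sketch shows only the single-covariate adjustment step.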


    Analysis of co-articulation regions for performance-driven facial animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2004
    Douglas Fidaleo
    Abstract A facial gesture analysis procedure is presented for the control of animated faces. Facial images are partitioned into a set of local, independently actuated regions of appearance change termed co-articulation regions (CRs). Each CR is parameterized by the activation level of a set of face gestures that affect the region. The activation of a CR is analyzed using independent component analysis (ICA) on a set of training images acquired from an actor. Gesture intensity classification is performed in ICA space by correlation to training samples. Correlation in ICA space proves to be an efficient and stable method for gesture intensity classification with limited training data. A discrete sample-based synthesis method is also presented. An artist creates an actor-independent reconstruction sample database that is indexed with CR state information analyzed in real time from video. Copyright © 2004 John Wiley & Sons, Ltd. [source]
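
    Classification by correlation to training samples, as used above in ICA space, reduces to a nearest-neighbour rule under the Pearson correlation. A toy sketch (the feature vectors and intensity labels are invented, and the ICA projection step is omitted):

```python
import math

def pearson(u, v):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den

def classify_intensity(sample, training):
    """training: list of (intensity_label, feature_vector) pairs.
    Return the label of the training sample most correlated with `sample`."""
    return max(training, key=lambda t: pearson(sample, t[1]))[0]
```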


    Tabu Search Strategies for the Public Transportation Network Optimizations with Variable Transit Demand

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2008
    Wei Fan
    A multi-objective nonlinear mixed integer model is formulated. Solution methodologies are proposed, which consist of three main components: an initial candidate route set generation procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure (NAP) that decides transit demand matrix, assigns transit trips, determines service frequencies, and computes performance measures; and a Tabu search method (TSM) that combines these two parts, guides the candidate solution generation process, and selects an optimal set of routes from the huge solution space. Comprehensive tests are conducted and sensitivity analyses are performed. Characteristics analyses are undertaken and solution qualities from different algorithms are compared. Numerical results clearly indicate that the preferred TSM outperforms the genetic algorithm used as a benchmark for the optimal bus transit route network design problem without zone demand aggregation. [source]
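
    The TSM component described above can be sketched generically: starting from an initial route set, repeatedly move to the best non-tabu neighbouring set (swap one route in, one out), with an aspiration override. The scoring function and route values below are hypothetical stand-ins for the NAP performance measures.

```python
def tabu_search(candidate_routes, score, n_select, iterations=50, tabu_len=5):
    """Tabu search over subsets of candidate routes; maximizes score(frozenset)."""
    current = frozenset(candidate_routes[:n_select])
    best, best_score = current, score(current)
    tabu = []
    for _ in range(iterations):
        moves = []
        for r_out in current:
            for r_in in candidate_routes:
                if r_in in current:
                    continue
                nxt = (current - {r_out}) | {r_in}
                # aspiration: a tabu move is allowed if it beats the best so far
                if nxt not in tabu or score(nxt) > best_score:
                    moves.append(nxt)
        if not moves:
            break
        current = max(moves, key=score)
        tabu.append(current)
        tabu = tabu[-tabu_len:]            # bounded tabu list
        if score(current) > best_score:
            best, best_score = current, score(current)
    return best, best_score
```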


    Collapse of Reinforced Concrete Column by Vehicle Impact

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2008
    Hing-Ho Tsang
    The column slenderness ratio can be of the order of 6–9. Some of these buildings are right next to busy streets and hence continuously exposed to the potential hazard of a vehicle impacting on a column in an accident. In the early part of this study, the ultimate energy absorption capacity of a reinforced concrete column is compared to the kinetic energy embodied in the moving vehicle. The energy-absorption capacity is calculated from the force-displacement curve of the column as determined from a nonlinear static (push-over) analysis. The ultimate displacement of the column is defined at the point when the column fails to continue carrying the full gravitational loading. Results obtained from the nonlinear static analysis have been evaluated by computer simulations of the dynamic behavior of the column following the impact. Limitations in the static analysis procedure have been demonstrated. The effects of strain rate have been discussed and the sensitivity of the result to changes in the velocity function and stiffness of the impacting vehicle has also been studied. [source]
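
    The energy comparison above is simple to state numerically: the absorption capacity is the area under the push-over force-displacement curve, and the demand is the vehicle's kinetic energy. A minimal sketch with invented curve data (not the paper's column):

```python
def absorbed_energy(displacements, forces):
    """Area under the force-displacement (push-over) curve, trapezoidal rule."""
    return sum(0.5 * (f1 + f2) * (d2 - d1)
               for d1, d2, f1, f2 in zip(displacements, displacements[1:],
                                         forces, forces[1:]))

def vehicle_kinetic_energy(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

def column_survives(displacements, forces, mass_kg, speed_m_s):
    """Energy-based check: capacity must exceed the impact energy demand."""
    return absorbed_energy(displacements, forces) >= vehicle_kinetic_energy(mass_kg, speed_m_s)
```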


    The Scalasca performance toolset architecture

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2010
    Markus Geimer
    Abstract Scalasca is a performance toolset that has been specifically designed to analyze parallel application execution behavior on large-scale systems with many thousands of processors. It offers an incremental performance-analysis procedure that integrates runtime summaries with in-depth studies of concurrent behavior via event tracing, adopting a strategy of successively refined measurement configurations. Distinctive features are its ability to identify wait states in applications with very large numbers of processes and to combine these with efficiently summarized local measurements. In this article, we review the current toolset architecture, emphasizing its scalable design and the role of the different components in transforming raw measurement data into knowledge of application execution behavior. The scalability and effectiveness of Scalasca are then surveyed from experience measuring and analyzing real-world applications on a range of computer systems. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Four- and five-color flow cytometry analysis of leukocyte differentiation pathways in normal bone marrow: A reference document based on a systematic approach by the GTLLF and GEIL

    CYTOMETRY, Issue 1 2010
    Christine Arnoulet
    Abstract Background: The development of multiparameter flow cytometry (FCM) and increasingly sophisticated analysis software has considerably improved the exploration of hematological disorders. These tools have been widely applied in leukaemias, lymphomas, and myelodysplasias, yet with very heterogeneous approaches. Consequently, there is no extensive reference document reporting on the characteristics of normal human bone marrow (BM) in multiparameter FCM. Here, we report a reference analysis procedure using relevant antibody combinations in normal human BM. Methods: A first panel of 23 antibodies, constructed after literature review, was tested in four-color combinations (including CD45 in each) on 30 samples of BM. After evaluation of the data, a second set of 22 antibodies was further applied to another 35 BM samples. All list-modes from the 65 bone marrow samples were reviewed collectively. A systematised protocol for data analysis was established including biparametric representations and color codes for the three major lineages and undifferentiated cells. Results: This strategy has allowed us to obtain a reference atlas of relevant patterns of differentiation antigen expression in normal human BM that is available within the European LeukemiaNet. This manuscript describes how this atlas was constructed. Conclusions: Both the strategy and atlas could prove very useful as a reference of normality, for the determination of leukemia-associated immunophenotypic patterns, analysis of myelodysplasia and, ultimately, investigation of minimal residual disease in the BM. © 2009 Clinical Cytometry Society [source]


    Linear analysis of concrete arch dams including dam–water–foundation rock interaction considering spatially varying ground motions

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 7 2010
    Jin-Ting Wang
    Abstract The available substructure method and computer program for earthquake response analysis of arch dams, including the effects of dam–water–foundation rock interaction and recognizing the semi-unbounded size of the foundation rock and fluid domains, are extended to consider spatial variations in ground motions around the canyon. The response of Mauvoisin Dam in Switzerland to spatially varying ground motion recorded during a small earthquake is analyzed to illustrate the results from this analysis procedure. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Inelastic spectra for infilled reinforced concrete frames

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 15 2004
    Matja
    Abstract In two companion papers a simplified non-linear analysis procedure for infilled reinforced concrete frames is introduced. In this paper a simple relation between strength reduction factor, ductility and period (R–µ–T relation) is presented. It is intended to be used for the determination of inelastic displacement ratios and of inelastic spectra in conjunction with idealized elastic spectra. The R–µ–T relation was developed from results of an extensive parametric study employing a SDOF mathematical model composed of structural elements representing the frame and infill. The structural parameters, used in the proposed R–µ–T relation, in addition to the parameters used in a usual (e.g. elasto-plastic) system, are ductility at the beginning of strength degradation, and the reduction of strength after the failure of the infills. Formulae depend also on the corner periods of the elastic spectrum. The proposed equations were validated by comparing results in terms of the reduction factors, inelastic displacement ratios, and inelastic spectra in the acceleration–displacement format, with those obtained by non-linear dynamic analyses for three sets of recorded and semi-artificial ground motions. A new approach was used for generating semi-artificial ground motions compatible with the target spectrum. This approach preserves the basic characteristics of individual ground motions, whereas the mean spectrum of the whole ground motion set fits the target spectrum excellently. In the parametric study, the R–µ–T relation was determined by assuming a constant reduction factor, while the corresponding ductility was calculated for different ground motions. The mean values proved to be noticeably different from the mean values determined based on a constant ductility approach, while the median values determined by the different procedures were between the two means. The approach employed in the study yields an R–µ–T relation which is conservative both for design and performance assessment (compared with a relation based on median values). Copyright © 2004 John Wiley & Sons, Ltd. [source]
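
    For readers unfamiliar with R–µ–T relations: a classical piecewise example is the Newmark–Hall form sketched below (equal energy at intermediate periods, equal displacement at long periods). This is NOT the paper's degraded-infill relation, which additionally depends on infill-failure parameters; the corner periods here are illustrative values.

```python
import math

def reduction_factor(mu, T, Ta=0.1, Tc=0.5):
    """Classical Newmark-Hall style R-mu-T relation (illustrative only):
    mu = displacement ductility, T = period in seconds,
    Ta/Tc = assumed corner periods of the elastic spectrum."""
    if T < Ta:
        return 1.0                      # very stiff: no strength reduction
    if T < Tc:
        return math.sqrt(2.0 * mu - 1.0)  # equal-energy rule
    return float(mu)                    # equal-displacement rule
```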


    A modal pushover analysis procedure to estimate seismic demands for unsymmetric-plan buildings

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 8 2004
    Anil K. Chopra
    Abstract An Erratum has been published for this article in Earthquake Engng. Struct. Dyn. 2004; 33:1429. Based on structural dynamics theory, the modal pushover analysis (MPA) procedure retains the conceptual simplicity of current procedures with invariant force distribution, now common in structural engineering practice. The MPA procedure for estimating seismic demands is extended to unsymmetric-plan buildings. In the MPA procedure, the seismic demand due to individual terms in the modal expansion of the effective earthquake forces is determined by non-linear static analysis using the inertia force distribution for each mode, which for unsymmetric buildings includes two lateral forces and torque at each floor level. These 'modal' demands due to the first few terms of the modal expansion are then combined by the CQC rule to obtain an estimate of the total seismic demand for inelastic systems. When applied to elastic systems, the MPA procedure is equivalent to standard response spectrum analysis (RSA). The MPA estimates of seismic demand for torsionally-stiff and torsionally-flexible unsymmetric systems are shown to be as accurate as they are for the symmetric building; however, the results deteriorate for a torsionally-similarly-stiff unsymmetric-plan system and the ground motion considered because (a) elastic modes are strongly coupled, and (b) roof displacement is underestimated by the CQC modal combination rule (which would also limit accuracy of RSA for linearly elastic systems). Copyright © 2004 John Wiley & Sons, Ltd. [source]
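
    The CQC combination rule used above can be sketched directly: peak modal responses are combined through the Der Kiureghian correlation coefficient, which reduces to SRSS when modal frequencies are well separated. Frequencies, damping, and modal responses below are hypothetical.

```python
import math

def cqc_correlation(zi, zj, wi, wj):
    """Der Kiureghian correlation coefficient between modes i and j
    (zi, zj = damping ratios; wi, wj = natural frequencies)."""
    r = wj / wi
    num = 8.0 * math.sqrt(zi * zj) * (zi + r * zj) * r ** 1.5
    den = ((1.0 - r ** 2) ** 2
           + 4.0 * zi * zj * r * (1.0 + r ** 2)
           + 4.0 * (zi ** 2 + zj ** 2) * r ** 2)
    return num / den

def cqc(peak_modal_responses, frequencies, damping=0.05):
    """Complete quadratic combination of peak modal responses."""
    total = 0.0
    for ri, wi in zip(peak_modal_responses, frequencies):
        for rj, wj in zip(peak_modal_responses, frequencies):
            total += cqc_correlation(damping, damping, wi, wj) * ri * rj
    return math.sqrt(total)
```

    With frequencies far apart the cross terms vanish and the result matches the SRSS value, as in the well-separated-modes test below.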


    System identification of linear structures based on Hilbert–Huang spectral analysis.

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2003
    Part 2: Complex modes
    Abstract A method, based on the Hilbert–Huang spectral analysis, has been proposed by the authors to identify linear structures in which normal modes exist (i.e., real eigenvalues and eigenvectors). Frequently, all the eigenvalues and eigenvectors of linear structures are complex. In this paper, the method is extended further to identify general linear structures with complex modes using the free vibration response data polluted by noise. Measured response signals are first decomposed into modal responses using the method of Empirical Mode Decomposition with intermittency criteria. Each modal response contains the contribution of a complex conjugate pair of modes with a unique frequency and a damping ratio. Then, each modal response is decomposed in the frequency–time domain to yield instantaneous phase angle and amplitude using the Hilbert transform. Based on a single measurement of the impulse response time history at one appropriate location, the complex eigenvalues of the linear structure can be identified using a simple analysis procedure. When the response time histories are measured at all locations, the proposed methodology is capable of identifying the complex mode shapes as well as the mass, damping and stiffness matrices of the structure. The effectiveness and accuracy of the method presented are illustrated through numerical simulations. It is demonstrated that dynamic characteristics of linear structures with complex modes can be identified effectively using the proposed method. Copyright © 2003 John Wiley & Sons, Ltd. [source]
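
    The extraction of frequency and damping from a single modal response can be illustrated with a much simpler classical device: the logarithmic decrement of successive positive peaks of a free-decay record. This is a rough stand-in for the Hilbert instantaneous amplitude/phase analysis, not the paper's method.

```python
import math

def identify_free_decay(times, signal):
    """Estimate damped frequency (Hz) and damping ratio from a free-decay
    record using the first two positive peaks (logarithmic decrement)."""
    peaks = [(times[i], signal[i]) for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
             and signal[i] > 0]
    (t1, a1), (t2, a2) = peaks[0], peaks[1]
    freq = 1.0 / (t2 - t1)                      # damped natural frequency
    delta = math.log(a1 / a2)                   # logarithmic decrement
    zeta = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
    return freq, zeta
```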


    Response to three-component seismic motion of arbitrary direction

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2002
    Julio J. Hernández
    Abstract This paper presents a response spectrum analysis procedure for the calculation of the maximum structural response to three translational seismic components that may act at any inclination relative to the reference axes of the structure. The formula GCQC3, a generalization of the known CQC3-rule, incorporates the correlation between the seismic components along the axes of the structure and the intensity disparities between them. Contrary to the CQC3-rule where a principal seismic component must be vertical, in the GCQC3-rule all components can have any direction. Besides, the GCQC3-rule is applicable if we impose restrictions to the maximum inclination and/or intensity of a principal seismic component; in this case two components may be quasi-horizontal and the third may be quasi-vertical. This paper demonstrates that the critical responses of the structure, defined as the maximum and minimum responses considering all possible directions of incidence of one seismic component, are given by the square root of the maximum and minimum eigenvalues of the response matrix R, of order 3×3, defined in this paper; the elements of R are established on the basis of the modal responses used in the well-known CQC-rule. The critical responses to the three principal seismic components with arbitrary directions in space are easily calculated by combining the eigenvalues of R and the intensities of those components. The ratio rmax/rSRSS between the maximum response and the SRSS response, the latter being the most unfavourable response to the principal seismic components acting along the axes of the structure, is bounded between 1 and √(3γa²/(γa² + γb² + γc²)), where γa, γb, γc are the relative intensities of the three seismic components with identical spectral shape. Copyright © 2001 John Wiley & Sons, Ltd. [source]
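
    The "critical response = square root of the extreme eigenvalue of the 3×3 response matrix R" result can be sketched numerically: for a symmetric positive semi-definite R, the largest eigenvalue can be found by power iteration. The matrix below is a hypothetical diagonal example, not one from the paper.

```python
import math

def max_eigenvalue(R, iters=200):
    """Largest eigenvalue of a symmetric positive semi-definite 3x3 matrix,
    via power iteration followed by a Rayleigh quotient."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return sum(v[i] * sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def critical_response(R):
    """Maximum response over all incidence directions of one seismic
    component: sqrt of the largest eigenvalue of the response matrix R.
    (The minimum critical response is likewise sqrt of the smallest eigenvalue.)"""
    return math.sqrt(max_eigenvalue(R))
```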


    Significance testing of synergistic/antagonistic, dose level-dependent, or dose ratio-dependent effects in mixture dose-response analysis

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 10 2005
    Martijs J. Jonker
    Abstract In ecotoxicology, the state of the art for effect assessment of chemical mixtures is through multiple dose–response analysis of single compounds and their combinations. Investigating whether such data deviate from the reference models of concentration addition and/or independent action to identify overall synergism or antagonism is becoming routine. However, recent data show that more complex deviation patterns, such as dose ratio-dependent deviation and dose level-dependent deviation, need to be addressed. For concentration addition, methods to detect such deviation patterns exist, but they are stand-alone methods developed separately in literature, and conclusions derived from these analyses are therefore difficult to compare. For independent action, hardly any methods to detect such deviations from this reference model exist. This paper describes how these well-established mixture toxicity principles have been incorporated in a coherent data analysis procedure enabling detection and quantification of dose level- and dose ratio-specific synergism or antagonism from both the concentration addition and the independent action models. Significance testing of which deviation pattern describes the data best is carried out through maximum likelihood analysis. This analysis procedure is demonstrated through various data sets, and its applicability and limitations in mixture research are discussed. [source]
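
    The concentration addition reference model has a compact closed form, sketched below with hypothetical EC values: the mixture ECx is the harmonic combination of the single-compound ECx values weighted by mixture fractions.

```python
def mixture_ecx_concentration_addition(single_ecx, fractions):
    """Predicted mixture ECx under concentration addition:
    1 / EC_mix = sum_i (p_i / EC_i), where p_i is the fraction of
    compound i in the mixture and EC_i its single-compound ECx.
    An observed EC_mix below (above) this prediction indicates
    synergism (antagonism)."""
    return 1.0 / sum(p / ec for p, ec in zip(fractions, single_ecx))
```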


    A novel 2D-based approach to the discovery of candidate substrates for the metalloendopeptidase meprin

    FEBS JOURNAL, Issue 18 2008
    Daniel Ambort
    In the past, protease-substrate finding proved to be rather haphazard and was executed by in vitro cleavage assays using singly selected targets. In the present study, we report the first protease proteomic approach applied to meprin, an astacin-like metalloendopeptidase, to determine physiological substrates in a cell-based system of Madin–Darby canine kidney epithelial cells. A simple 2D IEF/SDS/PAGE-based image analysis procedure was designed to find candidate substrates in conditioned media of Madin–Darby canine kidney cells expressing meprin in zymogen or in active form. The method enabled the discovery of hitherto unknown meprin substrates with shortened (non-trypsin-generated) N- and C-terminally truncated cleavage products in peptide fragments upon LC-MS/MS analysis. Of 22 (17 nonredundant) candidate substrates identified, the proteolytic processing of vinculin, lysyl oxidase, collagen type V and annexin A1 was analysed by means of immunoblotting validation experiments. The classification of substrates into functional groups may suggest new functions for meprins in the regulation of cell homeostasis and the extracellular environment, and in innate immunity, respectively. [source]


    Effective elastic properties of the double-periodically cracked plates

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2005
    G. S. Wang
    Abstract In this paper, the interaction of double-periodical cracks is accurately solved based on the isolating analysis procedure, superposition principle, pseudo-traction method, Chebyshev polynomial expansion and crack-surface collocation technique. The jump displacement crossing crack faces, the average additional strain and therefore the effective compliance of the double-periodically cracked plate are directly determined. The numerical results for axial-symmetrically distributed double-periodical cracks, general double-periodical cracks with one collinear direction as well as two sets of double-periodical cracks with same size and square distribution are given in this paper. Some typical numerical results are compared with previous works. The analysis shows that the anisotropy induced by the general double-periodical cracks is generally not orthogonal anisotropy. Only when the double-periodical cracks are axial-symmetrically distributed is the anisotropy orthogonal. In this special case, the effective engineering constants (consisting of the effective elastic modulus, effective Poisson's ratio, and effective shear modulus) of the cracked plate versus crack spacing, in the plane stress and plane strain conditions, respectively, are analysed. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Topology optimization for stationary fluid,structure interaction problems using a new monolithic formulation

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2010
    Gil Ho Yoon
    Abstract This paper outlines a new procedure for topology optimization in the steady-state fluid–structure interaction (FSI) problem. A review of current topology optimization methods highlights the difficulties in alternating between the two distinct sets of governing equations for fluid and structure dynamics (hereafter, the fluid and structural equations, respectively) and in imposing coupling boundary conditions between the separated fluid and solid domains. To overcome these difficulties, we propose an alternative monolithic procedure employing a unified domain rather than the computationally inefficient alternation between separated domains. In the proposed analysis procedure, the spatial differential operator of the fluid and structural equations for a deformed configuration is transformed into that for an undeformed configuration with the help of the deformation gradient tensor. For the coupling boundary conditions, the divergence of the pressure and the Darcy damping force are inserted into the solid and fluid equations, respectively. The proposed method is validated in several benchmark analysis problems. Topology optimization in the FSI problem is then made possible by interpolating Young's modulus, the fluid pressure of the modified solid equation, and the inverse permeability from the damping force with respect to the design variables. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    An integrated procedure for three-dimensional structural analysis with the finite cover method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2005
    Kenjiro Terada
    Abstract In this paper an integrated procedure for three-dimensional (3D) structural analyses with the finite cover method (FCM) is introduced. In the pre-process of this procedure, the geometry of a structure is modelled by 3D-CAD, followed by digitization to have the corresponding voxel model, and then the structure is covered by a union of mathematical covers, namely a mathematical mesh independently generated for approximation purposes. Since the mesh topology in the FCM does not need to conform to the physical boundaries of the structure, the mesh can be regular and structured. Thus, the numerical analysis procedure is free from the difficulties mesh generation typically poses and, in this sense, enables us to realize the mesh-free analysis. After formulating the FCM with interface elements for the static equilibrium state of a structure, we detail the procedure of the finite cover modelling, including the geometry modelling with 3D-CAD and the identification of the geometry covered by a regular mesh for numerical integration. Prior to full 3D modelling and analysis, we present a simple numerical example to confirm the equivalence of the performance of the FCM and that of the standard finite element method (FEM). Finally, representative numerical examples are presented to demonstrate the capabilities of the proposed analysis procedure. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Anisotropic adaptive simulation of transient flows using discontinuous Galerkin methods

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2005
    Jean-François Remacle
    Abstract An anisotropic adaptive analysis procedure based on a discontinuous Galerkin finite element discretization and local mesh modification of simplex elements is presented. The procedure is applied to transient two- and three-dimensional problems governed by Euler's equation. A smoothness indicator is used to isolate jump features where an aligned mesh metric field is specified. The mesh metric field in smooth portions of the domain is controlled by a Hessian matrix constructed using a variational procedure to calculate the second derivatives. The transient examples included demonstrate the ability of the mesh modification procedures to effectively track evolving interacting features of general shape as they move through a domain. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    A confirmatory factor analysis of the Brief Psychiatric Rating Scale in a homeless sample

    INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 4 2003
    Gary K. Burger
    Abstract This study used a confirmatory factor analysis procedure, the Oblique Multiple Group method (OMG), with the Brief Psychiatric Rating Scale (BPRS) on a sample of homeless individuals who had both a severe mental illness and a substance use disorder. The hypothesized five-factor model of Guy (1976) accounted for 93% of the possible variance, and all the appropriate scales had their highest loading on their respective hypothesized factor. In addition, the Guy model accounted for more variance than did an alternative model. The five factors were labelled: thinking disorder, anergia, anxiety-depression, hostility-suspicion, and activity. Copyright © 2003 Whurr Publishers Ltd. [source]
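
    The Oblique Multiple Group method has a particularly transparent core: each item's loading on a factor is its correlation with that factor's composite (the sum of the items assigned to it). A minimal sketch with invented item scores (not BPRS data):

```python
import math

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den

def omg_loadings(items, groups):
    """items: dict item_name -> list of scores;
    groups: dict factor_name -> list of item names assigned to that factor.
    Loading = correlation of an item with each factor's sum-composite."""
    composites = {f: [sum(vals) for vals in zip(*(items[n] for n in names))]
                  for f, names in groups.items()}
    return {name: {f: pearson(items[name], comp)
                   for f, comp in composites.items()}
            for name in items}
```

    In the confirmatory check, each item should load highest on its own hypothesized factor, as the study reports for the Guy model.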


    Multi-criteria analysis procedure for sustainable mobility evaluation in urban areas

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 4 2009
    Vânia Barcellos Gouvêa Campos
    This paper proposes a procedure to evaluate sustainable mobility in urban areas. A set of indicators according to three dimensions of sustainability, i.e., environment, economics, and social aspects, are proposed to evaluate mobility in urban areas. The sustainable mobility evaluation is based on an Index calculated through a weighted multi-criteria combination procedure. A group of specialists in Brazil was involved in the development of the Index by defining the weights for the criteria. An application of the methodology in the city of Belo Horizonte, capital of the State of Minas Gerais, with 2.24 million inhabitants, is presented to validate the methodology. [source]
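
    The weighted multi-criteria combination behind such an index is easy to sketch: min-max normalize each indicator to [0, 1] and take the weighted sum. Indicator names, ranges, and weights below are hypothetical, not those elicited from the Brazilian specialists.

```python
def normalize(value, worst, best):
    """Min-max normalization to [0, 1]; worst > best handles
    'lower is better' indicators (e.g. emissions)."""
    return (value - worst) / (best - worst)

def mobility_index(indicators, weights, ranges):
    """indicators: dict name -> observed value;
    weights: dict name -> weight (should sum to 1);
    ranges: dict name -> (worst, best) bounds for normalization."""
    return sum(weights[k] * normalize(indicators[k], *ranges[k])
               for k in indicators)
```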


    Force Transmissibility Performance of Parallel Manipulators

    JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11 2003
    Wen-Tung Chang
    In this paper, a new force transmission index called the mean force transmission index (MFTI) is proposed, and the force transmissibility analysis procedure is established for parallel manipulators. The MFTI is an extended definition of the force transmission index (FTI) introduced by the authors previously. It is shown that the FTI is a function of the input velocity ratio (IVR) for a multi-DOF mechanism of the same configuration. To represent the force transmissibility by a definite value, the MFTI is defined as the mean value of the normalized FTI function over the whole range of the IVR. The force transmissibility analysis of two planar parallel manipulators is illustrated using the MFTI method. The result is compared with that of the Jacobian matrix method and the joint force index (JFI) method. It shows that, especially for symmetric parallel manipulators, an approximate inverse-proportionality relationship exists between the JFI and MFTI, and between the maximum input torque/force and MFTI. It is concluded that the MFTI can be used as a quantitative measure of the force transmissibility performance for parallel manipulators. In the end, a design optimization problem is studied by taking the global force transmission index as the objective function. © 2003 Wiley Periodicals, Inc. [source]


    Children's Life Transition Following Sexual Abuse

    JOURNAL OF FORENSIC NURSING, Issue 4 2006
    Jacqueline Hatlevig
    The life transition of children 6 to 13 years old was studied for 1 to 3 years following sexual abuse. Data included transcripts from in-depth interviews about the children's daily living experiences, together with drawings; both were analyzed using the ADOPT analysis procedure. Implications for nursing include the use of drawings as a research technique and the effectiveness of strategies used by participants to manage the aftermath of the trauma. [source]


    An improved independent component regression modeling and quantitative calibration procedure

    AICHE JOURNAL, Issue 6 2010
    Chunhui Zhao
    Abstract An improved independent component regression (M-ICR) algorithm is proposed by constructing joint latent variable (LV) based regressors, and a quantitative statistical analysis procedure is designed using a bootstrap technique for model validation and performance evaluation. First, the drawbacks of the conventional regression modeling algorithms are analyzed. Then the proposed M-ICR algorithm is formulated for regressor design. It constructs a dual-objective optimization criterion function, simultaneously incorporating quality-relevance and independence into the feature extraction procedure. This ties together the ideas of partial least squares (PLS) and independent component regression (ICR) under the same mathematical umbrella. By adjusting the controllable suboptimization objective weights, it adds insight into the different roles of quality-relevant and independent characteristics in calibration modeling, and, thus, provides possibilities to combine the advantages of PLS and ICR. Furthermore, a quantitative statistical analysis procedure based on a bootstrapping technique is designed to identify the effects of LVs, determine a better model rank and overcome ill-conditioning caused by model over-parameterization. A confidence interval on quality prediction is also approximated. The performance of the proposed method is demonstrated using both numerical and real world data. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
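
    The bootstrap validation idea used above can be sketched generically: resample the data with replacement, recompute the statistic of interest, and take percentiles of the resampled distribution as a confidence interval. The statistic and data below are placeholders, not the M-ICR quality prediction.

```python
import random

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for any statistic of the data."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    stats = sorted(statistic([rng.choice(data) for _ in data])
                   for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```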


    Validated assay for quantification of oxcarbazepine and its active dihydro metabolite 10-hydroxycarbazepine in plasma by atmospheric pressure chemical ionization liquid chromatography/mass spectrometry

    JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 7 2002
    Hans H. Maurer
    Abstract Oxcarbazepine (OX), a new antiepileptic, may lead to unwanted side-effects or even life-threatening intoxications after overdose. Therefore, a validated liquid chromatographic/mass spectrometric (LC/MS) assay was developed for the quantification of OX and its pharmacologically active dihydro metabolite (dihydrooxcarbazepine, DOX, often named 10-hydroxycarbazepine). OX and DOX were extracted from plasma by the authors' standard liquid/liquid extraction and were separated on a Merck LiChroCART column with Superspher 60 RP Select B as the stationary phase. Gradient elution was performed using aqueous ammonium formate and acetonitrile. The compounds were quantified in the selected-ion monitoring mode using atmospheric pressure chemical ionization LC/MS. The assay was fully validated and found to be selective. The calibration curves were linear from 0.1 to 50 mg l⁻¹ for OX and DOX. Limits of quantification were 0.1 mg l⁻¹ for both OX and DOX. The absolute recoveries were between 60 and 86%. The accuracy and precision data were within the required limits. The analytes in frozen plasma samples were stable for at least 1 month. The method was successfully applied to several authentic plasma samples from patients treated or intoxicated with OX. The measured therapeutic plasma levels ranged from 1 to 2 mg l⁻¹ for OX and from 10 to 40 mg l⁻¹ for DOX. The validated LC/MS assay proved to be appropriate for quantification of OX and DOX in plasma for clinical toxicology and therapeutic drug monitoring purposes. The assay is part of a general analysis procedure for the isolation, separation and quantification of various drugs and for their full-scan screening and identification. Copyright © 2002 John Wiley & Sons, Ltd. [source]
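    The linear-calibration step of such an assay can be sketched in a few lines. The concentrations and peak areas below are invented for illustration (they are not the paper's data); the sketch only shows the generic workflow of fitting a calibration line, checking linearity, and back-calculating a sample concentration within the assay's validated 0.1 to 50 mg l⁻¹ range.

    ```python
    import numpy as np

    # hypothetical calibration standards: concentration (mg/l) vs. detector response
    conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 25.0, 50.0])
    area = np.array([0.13, 0.61, 1.19, 6.05, 11.9, 30.1, 60.2])

    slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
    r = np.corrcoef(conc, area)[0, 1]              # linearity check

    def quantify(sample_area):
        """Back-calculate concentration; flag results outside the validated range."""
        c = (sample_area - intercept) / slope
        in_range = 0.1 <= c <= 50.0
        return c, in_range

    c, ok = quantify(2.4)
    print(f"r = {r:.4f}, concentration = {c:.2f} mg/l, within range: {ok}")
    ```

    A validated assay would additionally verify recovery, accuracy, and precision at each level, but the back-calculation itself reduces to this inversion of the calibration line.
    
    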


    Anelastic Behavior of Plasma-Sprayed Zirconia Coatings

    JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 12 2008
    Yajie Liu
    Low-temperature thermal cycling of plasma-sprayed zirconia coatings reveals unique mechanical responses in their curvature measurements, namely nonlinearity and cyclic hysteresis, collectively termed anelastic. These features arise from the inherently layered, porous, and cracked morphology of thermal-sprayed ceramic materials. In this paper, the mechanisms of anelasticity are characterized by crack closure and frictional sliding models, and the stress–strain relations of various thermal-sprayed zirconia coatings are determined via an inverse analysis procedure. The results demonstrate that process conditions such as powder morphology and spray parameters significantly influence the mechanical behavior of the coatings. The unique anelastic responses can serve as valuable parameters for assessing coating quality as well as process reliability in manufacturing. [source]


    Analysis of longitudinal data with drop-out: objectives, assumptions and a proposal

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 5 2007
    Peter Diggle
    Summary. The problem of analysing longitudinal data that are complicated by possibly informative drop-out has received considerable attention in the statistical literature. Most researchers have concentrated on either methodology or application, but we begin this paper by arguing that more attention could be given to study objectives and to the relevant targets for inference. Next we summarize a variety of approaches that have been suggested for dealing with drop-out. A long-standing concern in this subject area is that all methods require untestable assumptions. We discuss circumstances in which we are willing to make such assumptions and we propose a new and computationally efficient modelling and analysis procedure for these situations. We assume a dynamic linear model for the expected increments of a constructed variable, under which subject-specific random effects follow a martingale process in the absence of drop-out. Informal diagnostic procedures to assess the tenability of the assumption are proposed. The paper is completed by simulations and a comparison of our method and several alternatives in the analysis of data from a trial into the treatment of schizophrenia, in which approximately 50% of recruited subjects dropped out before the final scheduled measurement time. [source]


    Application of functional group modified substrate in room temperature phosphorescence, II: heavy atom-chelated filter paper for selective determination of α-naphthalene acetic acid

    LUMINESCENCE: THE JOURNAL OF BIOLOGICAL AND CHEMICAL LUMINESCENCE, Issue 4-5 2005
    Ruohua Zhu
    Abstract Heavy atom-chelated filter paper was synthesized and used as the substrate for room-temperature phosphorescence (RTP). The synthesis conditions for the chelated paper were studied. The Pb-chelated filter paper could selectively induce the RTP of α-naphthalene acetic acid (α-NAA). The excitation and emission wavelengths of the RTP of α-NAA were 300 nm and 521 nm, respectively. The concentration of α-NAA was linear with the RTP intensity in the range 2 × 10⁻⁶ to 6 × 10⁻⁴ mol/L (correlation coefficient, 0.9999). The concentration detection limit was 1.35 × 10⁻⁷ mol/L and the absolute detection limit was 0.25 ng/spot. The RSD (n = 10) was 1.7%. The method was applied to the analysis of water and vegetable samples with satisfactory results. Because the heavy atom was directly chelated onto the filter paper, the heavy-atom effect on the RTP of NAA was further increased and the analysis procedure was simple, fast and economical. Copyright © 2005 John Wiley & Sons, Ltd. [source]
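    Reported figures such as the RSD (n = 10) come from straightforward replicate statistics. The sketch below shows that calculation with invented replicate intensities (not the paper's measurements, which gave 1.7%).

    ```python
    import statistics

    # hypothetical replicate RTP intensities for one spot (n = 10)
    replicates = [503, 497, 512, 489, 505, 499, 508, 494, 501, 496]

    mean = statistics.mean(replicates)
    # relative standard deviation: sample standard deviation as a percentage of the mean
    rsd = 100 * statistics.stdev(replicates) / mean
    print(f"mean = {mean:.1f}, RSD = {rsd:.1f}%")
    ```
    
    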


    Photometric properties and scaling relations of early-type Brightest Cluster Galaxies

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008
    F. S. Liu
    ABSTRACT We investigate the photometric properties of early-type Brightest Cluster Galaxies (BCGs) using a carefully selected sample of 85 BCGs from the C4 cluster catalogue with redshifts of less than 0.1. We perform accurate background subtractions and surface photometry for these BCGs to 25 mag arcsec⁻² in the Sloan r band. By quantitatively analysing the gradient of the Petrosian profiles of the BCGs, we find that a large fraction of BCGs have extended stellar envelopes in their outskirts; more luminous BCGs tend to have more extended stellar haloes that are likely to be connected with mergers. A comparison sample of elliptical galaxies was chosen with similar apparent magnitude and redshift ranges, for which the same photometric analysis procedure was applied. We find that BCGs have steeper size–luminosity (R ∝ L^α) and Faber–Jackson (L ∝ σ^β) relations than the bulk of early-type galaxies. Furthermore, the power-law indices (α and β) in these relations increase as the isophotal limits become deeper. For isophotal limits from 22 to 25 mag arcsec⁻², BCGs are usually larger than the bulk of early-type galaxies, and a large fraction (~49 per cent) of BCGs have discy isophotal shapes. The differences in the scaling relations are consistent with a scenario where the dynamical structure and formation route of BCGs may differ from those of the bulk of early-type galaxies; in particular, dry (dissipationless) mergers may play a more important role in their formation. We highlight several possible dry merger candidates in our sample. [source]


    Obama on the Stump: Features and Determinants of a Rhetorical Approach

    PRESIDENTIAL STUDIES QUARTERLY, Issue 3 2010
    KEVIN COE
    From the moment Barack Obama entered the national political scene in 2004, his formidable rhetorical skills were a central component of his public persona and his political success. Not surprisingly, a growing body of research has examined Obama's rhetorical techniques. Thus far, however, these studies have consisted almost entirely of qualitative analyses of single speeches, making it difficult to generalize about the broader features of Obama's rhetorical approach and impossible to understand the determinants of his rhetorical choices. This study fills these gaps in the literature by systematically tracking Obama's rhetoric over the course of campaign 2008 and testing competing explanations for the variation that occurs during this period. Using a unique computer-assisted content analysis procedure that draws coding categories directly from the more than 11,500 distinct words that Obama used during his campaign, the authors analyze 183 speeches and debates from his announcement of candidacy in February 2007 to his victory speech in November 2008. Obama's campaign rhetoric varied by speaking context, geography, and poll position, indicating a twofold rhetorical approach of emphasizing policy and thematic appeals while downplaying more contentious issues. [source]
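    The basic mechanics of dictionary-style computer-assisted content analysis can be sketched as follows. The coding categories and term lists here are invented for illustration; they are not the authors' categories, which were derived from the more than 11,500 distinct words in Obama's own campaign vocabulary.

    ```python
    from collections import Counter
    import re

    # illustrative coding dictionary: each category maps to a set of indicator terms
    categories = {
        "policy":   {"economy", "health", "energy", "jobs"},
        "thematic": {"hope", "change", "together", "future"},
    }

    def score(text):
        """Count how often each category's terms appear in a speech."""
        words = Counter(re.findall(r"[a-z']+", text.lower()))
        return {cat: sum(words[w] for w in terms) for cat, terms in categories.items()}

    speech = "We choose hope over fear; we will rebuild the economy and create jobs."
    print(score(speech))  # {'policy': 2, 'thematic': 1}
    ```

    Scoring each of the 183 speeches this way yields per-speech category counts that can then be regressed on context variables such as venue, geography, and poll position.
    
    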


    Dynamische Steifigkeit und Dämpfung von Pfahlgruppen

    BAUTECHNIK, Issue 2 2009
    Hamid Sadegh-Azar Dr.-Ing.
    Geotechnics; soil mechanics. Abstract Dynamic stiffness and damping properties of pile groups. This paper investigates the dynamic stiffness and damping properties of pile groups and their substantial influence on the design and economy of the pile foundation itself and of the structures it supports. The analysis has been carried out using the so-called "Thin-Layer Method", a very efficient and powerful analysis procedure in the frequency domain. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    N-linked glycosylation is an important parameter for optimal selection of cell lines producing biopharmaceutical human IgG

    BIOTECHNOLOGY PROGRESS, Issue 1 2009
    Patrick H. C. van Berkel
    Abstract We studied the variations in N-linked glycosylation of human IgG molecules derived from 105 different stable cell lines each expressing one of the six different antibodies. Antibody expression was based on glutamine synthetase selection technology in suspension growing CHO-K1SV cells. The glycans detected on the Fc fragment were mainly of the core-fucosylated complex type containing zero or one galactose and little to no sialic acid. The glycosylation was highly consistent for the same cell line when grown multiple times, indicating the robustness of the production and glycan analysis procedure. However, a twofold to threefold difference was observed in the level of galactosylation and/or non-core-fucosylation between the 105 different cell lines, suggesting clone-to-clone variation. These differences may change the Fc-mediated effector functions by such antibodies. Large variation was also observed in the oligomannose-5 glycan content, which, when present, may lead to undesired rapid clearance of the antibody in vivo. Statistically significant differences were noticed between the various glycan parameters for the six different antibodies, indicating that the variable domains and/or light chain isotype influence Fc glycosylation. The glycosylation altered when batch production in shaker was changed to fed-batch production in bioreactor, but was consistent again when the process was scaled from 400 to 5,000 L. Taken together, the observed clone-to-clone glycosylation variation but batch-to-batch consistency provides a rationale for selection of optimal production cell lines for large-scale manufacturing of biopharmaceutical human IgG. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009 [source]