Iterative

Terms modified by Iterative

  • iterative algorithm
  • iterative algorithms
  • iterative approach
  • iterative calculation
  • iterative cycle
  • iterative fashion
  • iterative learning control
  • iterative manner
  • iterative method
  • iterative methods
  • iterative procedure
  • iterative process
  • iterative scheme
  • iterative solution
  • iterative solution methods
  • iterative solver
  • iterative strategy
  • iterative technique

  Selected Abstracts


    Water sorption kinetics in light-cured poly-HEMA and poly(HEMA-co-TEGDMA); determination of the self-diffusion coefficient by new iterative methods

    JOURNAL OF APPLIED POLYMER SCIENCE, Issue 4 2007
    Irini D. Sideridou
    Abstract The present investigation is concerned with the determination of the self-diffusion coefficient (D) of water in methacrylate-based biomaterials following Fickian sorption by two new methods: the Iterative and the Graphical methods. The D value is traditionally determined by means of the initial slope of the corresponding sorption curve and the so-called Stefan's approximation. The proposed methods, which use equations without approximations and data from the whole sorption range, reach accurate values of D even when the sorption curve does not present an initial linear portion. In addition to D, the Graphical method allows the extrapolation of the mass of the sorbed water at equilibrium (M∞), even when the specimen's mass at equilibrium fluctuates around its limiting value (m∞). Testing the proposed procedures on ideal and Monte Carlo simulated data revealed that these methods are indeed applicable. Comparison of the obtained D values with those determined by Stefan's method revealed that the proposed methods provide more accurate results. Finally, the proposed methods were successfully applied to the experimental determination of the diffusion coefficient of water (50°C) in the homopolymer of 2-hydroxyethyl methacrylate (HEMA) and in the copolymer of HEMA with triethylene glycol dimethacrylate (98/2 mol/mol). These polymers were prepared by light curing (λ = 470 nm) at room temperature in the presence of camphorquinone and N,N-dimethylaminoethyl methacrylate as initiator. © 2007 Wiley Periodicals, Inc. J Appl Polym Sci 2007 [source]
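
    The abstract does not reproduce the new equations themselves, but the underlying idea, fitting the full Fickian sorption series rather than only its initial slope, can be illustrated. Below is a minimal Python sketch assuming plane-sheet geometry and invented sample values; it is not Sideridou's exact Iterative or Graphical procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def fickian_uptake(t, D, M_inf, thickness=1.0e-3, n_terms=50):
    """Series solution of Fick's second law for a plane sheet:
    M_t = M_inf * (1 - (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / l^2) / (2n+1)^2)."""
    k = 2 * np.arange(n_terms) + 1
    terms = np.exp(-np.outer(t, (k * np.pi / thickness) ** 2 * D)) / k**2
    return M_inf * (1.0 - (8.0 / np.pi**2) * terms.sum(axis=1))

# synthetic sorption record (time in s, sorbed mass in mg) standing in for data
t = np.linspace(60.0, 3.6e5, 80)
rng = np.random.default_rng(0)
M_obs = fickian_uptake(t, 2.0e-12, 5.0) + rng.normal(0.0, 0.02, t.size)

# fit D and M_inf jointly over the whole sorption range, not the initial slope
(D_hat, M_hat), _ = curve_fit(fickian_uptake, t, M_obs, p0=[1e-12, M_obs.max()])
print(f"D = {D_hat:.3e} m^2/s, M_inf = {M_hat:.3f} mg")
```

    Because D and M∞ are estimated jointly from the whole curve, the fit does not depend on the sorption data having a clean initial linear portion.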


    Maximum likelihood fitting using ordinary least squares algorithms

    JOURNAL OF CHEMOMETRICS, Issue 8-10 2002
    Rasmus Bro
    Abstract In this paper a general algorithm is provided for maximum likelihood fitting of deterministic models subject to Gaussian-distributed residual variation (including any type of non-singular covariance). By deterministic models are meant models in which no distributional assumptions are made (or applied) on the parameters. More generally, the algorithm may also be used for weighted least squares (WLS) fitting in situations where distributional assumptions are not available, or where considerations other than statistical ones guide the choice of loss function. The algorithm to solve the associated problem is called MILES (Maximum likelihood via Iterative Least squares EStimation). It is shown that the sought parameters can be estimated using simple least squares (LS) algorithms in an iterative fashion. The algorithm is based on iterative majorization and extends earlier work for WLS fitting of models with heteroscedastic uncorrelated residual variation. The algorithm is shown to include several current algorithms as special cases. For example, maximum likelihood principal component analysis models with and without offsets can be easily fitted with MILES. The MILES algorithm is simple and can be implemented as an outer loop in any least squares algorithm, e.g. for analysis of variance, regression, response surface modeling, etc. Several examples are provided on the use of MILES. Copyright © 2002 John Wiley & Sons, Ltd. [source]
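
    The central trick, minimizing a weighted least squares loss through a sequence of plain OLS solves via iterative majorization, is compact enough to sketch. The snippet below is a minimal numpy illustration for a linear model with known residual covariance, not the full generality of the published MILES:

```python
import numpy as np

def miles_wls(X, y, Vinv, n_iter=1000, tol=1e-12):
    """Minimize (y - Xb)' Vinv (y - Xb) using only ordinary LS solves,
    via iterative majorization (the idea behind MILES)."""
    c = np.linalg.eigvalsh(Vinv).max()          # majorant: c*I >= Vinv
    b = np.linalg.lstsq(X, y, rcond=None)[0]    # start from the OLS solution
    for _ in range(n_iter):
        z = X @ b + (Vinv @ (y - X @ b)) / c    # working response
        b_new = np.linalg.lstsq(X, z, rcond=None)[0]   # a plain OLS solve
        converged = np.linalg.norm(b_new - b) < tol * (1.0 + np.linalg.norm(b))
        b = b_new
        if converged:
            break
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
V = 0.1 * np.eye(50) + 0.02                     # non-diagonal residual covariance
y = X @ np.array([1.0, -2.0, 0.5]) + rng.multivariate_normal(np.zeros(50), V)
Vinv = np.linalg.inv(V)
b_direct = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # one-shot GLS
print(miles_wls(X, y, Vinv))    # agrees with the direct solution below
print(b_direct)
```

    Each pass decreases the weighted loss monotonically, which is why the scheme can safely wrap any existing least squares routine as an outer loop.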


    Iterative versus direct parallel substructuring methods in semiconductor device modelling

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2005
    L. Giraud
    Abstract The numerical simulation of semiconductor devices is extremely demanding in terms of computational time because it involves complex embedded numerical schemes. At the core of these schemes is the solution of very ill-conditioned large linear systems. In this paper, we present the various ingredients of some hybrid iterative schemes that play a central role in the robustness of these solvers when they are embedded in other numerical procedures. On a set of two-dimensional unstructured mixed finite element problems representative of semiconductor simulation, we perform a fair and detailed comparison between parallel iterative and direct linear solution techniques. We show that iterative solvers can be robust enough to solve the very challenging linear systems that arise in these simulations. Copyright © 2004 John Wiley & Sons, Ltd. [source]
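
    As a toy illustration of the direct-versus-iterative trade-off (not the paper's hybrid substructuring schemes), the sketch below solves an ill-conditioned sparse convection-diffusion system, a rough stand-in for drift-diffusion device matrices, both with a sparse LU factorization and with ILU-preconditioned GMRES:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stiff 1-D convection-diffusion matrix: a small stand-in for the very
# ill-conditioned systems that embedded device-simulation schemes produce.
n, eps = 5000, 1e-4
h = 1.0 / (n + 1)
diff, conv = eps / h**2, 1.0 / h             # diffusion and upwinded convection
A = sp.diags(
    [(-diff - conv) * np.ones(n - 1),        # sub-diagonal
     (2 * diff + conv) * np.ones(n),         # main diagonal
     -diff * np.ones(n - 1)],                # super-diagonal
    offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

x_direct = spla.splu(A).solve(b)             # direct: sparse LU factorization

ilu = spla.spilu(A, drop_tol=1e-5)           # iterative: ILU-preconditioned GMRES
M = spla.LinearOperator((n, n), ilu.solve)
x_iter, info = spla.gmres(A, b, M=M)

print(info, np.linalg.norm(x_iter - x_direct) / np.linalg.norm(x_direct))
```

    The preconditioner quality is the crux: without it, GMRES stagnates on such matrices, which mirrors the paper's point that robustness, not raw speed, decides whether iterative solvers are usable inside an outer simulation loop.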


    A reduced-order modeling technique for tall buildings with active tuned mass damper

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2001
    Zu-Qing Qu
    Abstract It is impractical to install sensors on every floor of a tall building to measure the full state vector because of the large number of degrees of freedom. This makes it necessary to introduce reduced-order control. A kind of system reduction scheme (a dynamic condensation method) is proposed in this paper. The method is iterative, and Guyan condensation is taken as the initial approximation of the iteration. Since the reduced-order system is updated repeatedly until a desired one is obtained, the accuracy of the reduced-order system resulting from the proposed method is much higher than that obtained from the Guyan condensation method. Another advantage of the method is that the reduced-order system is defined in a subspace of the original physical space, which gives the state vectors physical meaning. An eigenvalue shifting technique is applied to accelerate the convergence of the iteration and to make the reduced system retain all the dynamic characteristics of the full system within a given frequency range. Two schemes to establish the reduced-order system by using the proposed method are also presented and discussed in this paper. The results for a tall building with an active tuned mass damper show that the proposed method is efficient for reduced-order modelling, and the accuracy is very close to exact after only two iterations. Copyright © 2001 John Wiley & Sons, Ltd. [source]
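
    The Guyan condensation used as the starting point of the proposed iteration is easy to state in code. A minimal sketch on an invented shear-building model follows; the paper's iterative updating and eigenvalue shifting are not reproduced here:

```python
import numpy as np
import scipy.linalg as sla

def guyan(K, M, masters):
    """Static (Guyan) condensation of K and M onto the master DOFs; the
    paper uses this as the initial approximation of its iteration."""
    n = K.shape[0]
    slaves = np.setdiff1d(np.arange(n), masters)
    T = np.zeros((n, masters.size))
    T[masters, np.arange(masters.size)] = 1.0       # masters map to themselves
    T[np.ix_(slaves, np.arange(masters.size))] = -np.linalg.solve(
        K[np.ix_(slaves, slaves)], K[np.ix_(slaves, masters)])
    return T.T @ K @ T, T.T @ M @ T, T

# 10-storey shear-building model, sensors on floors 4, 7 and 10 (0-based 3, 6, 9)
n = 10
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # normalized stiffness
K[-1, -1] = 1.0                                          # free top floor
M = np.eye(n)
Kr, Mr, T = guyan(K, M, masters=np.array([3, 6, 9]))

w_red = np.sqrt(sla.eigh(Kr, Mr, eigvals_only=True))
w_full = np.sqrt(sla.eigh(K, M, eigvals_only=True)[:3])
print(w_red)    # Guyan approximation of the lowest three frequencies
print(w_full)   # exact values from the full model
```

    The reduced model reproduces the lowest frequencies only approximately; Qu's iteration refines exactly this discrepancy.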


    An Improved Synthesis of Procyanidin Dimers: Regio- and Stereocontrol of the Interflavan Bond

    EUROPEAN JOURNAL OF ORGANIC CHEMISTRY, Issue 23 2006
    Isabelle Tarascou
    Abstract A direct and general synthesis of procyanidin dimers B1, B2, B3 and B4 (10a–d) is presented. The approach is based on the stoichiometric coupling of two protected monomeric units (the nucleophilic 2a,b and electrophilic 4a,b partners) and deals with the regio- and stereocontrol of the C4–C8 interflavan bond as well as the control of the degree of oligomerization. The synthesis involves a five-step pathway starting from the native catechin (1a) or epicatechin (1b) to the fully deprotected dimers 10a–d. Furthermore, the process appears to be iterative as the coupling intermediates 9a–d themselves can be readily used in further selective syntheses of trimers or higher oligomers. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2006) [source]


    Graphical models for coded data transmission over inter-symbol interference channels

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2004
    Michael Tüchler
    We derive graphical models for coded data transmission over channels introducing inter-symbol interference. These models are factor graph descriptions of the transmitter section of the communication system, which serve at the same time as a framework to define the corresponding receiver. The graph structure governs the complexity and nature (e.g. non-iterative, iterative) of the receiver algorithm. A particular graph yields several algorithms optimizing various cost functions depending on the choice of messages communicated along the edges of the graph. We study these different outcomes of message passing and how the corresponding receiver algorithms are related to existing ones. We also devise strategies to find suitable graphs for communication problems of interest. Copyright © 2004 AEI [source]


    Construction, analysis and performance of generalised woven codes

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2004
    Martin Bossert
    Generalised woven codes (WC) are constructed by combining the woven code structure with the idea of generalised concatenated codes, also known as multi-level codes. The required nested inner convolutional code is analysed. The encoder structure of this new class of codes is described and fundamental code parameters are derived. It is shown that generalised WC have a free distance which is superior to that of comparable WC. Several iterative and non-iterative decoding strategies are discussed. It is shown that the decoding complexity of the nested inner code is not larger than the decoding complexity of its mother code. Finally, bit error rates obtained from simulations are discussed and compared with other code structures like WC. Copyright © 2004 AEI [source]


    A reasoning method for a ship design expert system

    EXPERT SYSTEMS, Issue 2 2005
    Sebnem Helvacioglu
    Abstract: The ship design process is a highly data-oriented, dynamic, iterative and multi-stage process. It utilizes multiple abstraction levels and concurrent engineering techniques. Specialized techniques for knowledge acquisition, knowledge representation and reasoning must be developed to solve these problems for a ship design expert system. Consequently, very few attempts have been made to model the ship design process using an expert system approach. The current work investigates a knowledge representation/reasoning technique for such a purpose. A knowledge-based conceptual design was developed by utilizing a prototype approach and hierarchical decomposition. An expert system program called ALDES (accommodation layout design expert system) was developed using the CLIPS expert system shell and an object-oriented user interface. The reasoning and knowledge representation methods of ALDES are explained in the paper. An application of the method is given for the general arrangement design of a containership. [source]


    The application of knowledge discovery in databases to post-marketing drug safety: example of the WHO database

    FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 2 2008
    A. Bate
    Abstract After market launch, new information on adverse effects of medicinal products is almost exclusively first highlighted by spontaneous reporting. As data sets of spontaneous reports have become larger, and computational capability has increased, quantitative methods have been increasingly applied to such data sets. The screening of such data sets is an application of knowledge discovery in databases (KDD). Effective KDD is an iterative and interactive process made up of the following steps: developing an understanding of an application domain, creating a target data set, data cleaning and pre-processing, data reduction and projection, choosing the data mining task, choosing the data mining algorithm, data mining, interpretation of results and consolidating and using acquired knowledge. The process of KDD as it applies to the analysis of spontaneous reports can be exemplified by its routine use on the 3.5 million suspected adverse drug reaction (ADR) reports in the WHO ADR database. Examples of new adverse effects first highlighted by the KDD process on WHO data include topiramate glaucoma, infliximab vasculitis and the association of selective serotonin reuptake inhibitors (SSRIs) and neonatal convulsions. The KDD process has already improved our ability to highlight previously unsuspected ADRs for clinical review in spontaneous reporting, and we anticipate that such techniques will be increasingly used in the successful screening of other healthcare data sets such as patient records in the future. [source]
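
    The data-mining step in this pipeline is disproportionality screening: comparing the observed report count for a drug-ADR pair with the count expected if drug and reaction were independent. The sketch below is a simplified shrinkage point estimate in the spirit of the information component computed on the WHO database (the operational BCPNN method also provides credibility intervals; all counts here are invented):

```python
import numpy as np

def information_component(n11, n_drug, n_adr, n_total):
    """Shrinkage point estimate of the information component:
    log2 of observed vs expected count for a drug-ADR pair."""
    expected = n_drug * n_adr / n_total
    return np.log2((n11 + 0.5) / (expected + 0.5))

n_total = 3_500_000    # all reports in the database
n_drug = 12_000        # reports mentioning the drug
n_adr = 8_000          # reports mentioning the reaction
n11 = 90               # reports mentioning both
ic = information_component(n11, n_drug, n_adr, n_total)
print(f"IC = {ic:.2f}")   # > 0: the pair is reported disproportionately often
```

    Pairs whose estimate (in practice, the lower bound of its credibility interval) exceeds zero are flagged for the clinical review step of the KDD cycle.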


    High-resolution seismic imaging in deep sea from a joint deep-towed/OBH reflection experiment: application to a Mass Transport Complex offshore Nigeria

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
    S. Ker
    SUMMARY We assess the feasibility of high-resolution seismic depth imaging in deep water based on a new geophysical approach involving the joint use of a deep-towed seismic device (SYSIF) and ocean bottom hydrophones (OBHs). Source signature measurement enables signature deconvolution to be used to improve the vertical resolution and signal-to-noise ratio. The source signature was also used to precisely determine direct traveltimes, which were inverted to relocate source and receiver positions. The very high accuracy of the positioning obtained enabled depth imaging and a stack of the OBH data to be performed. The P-wave velocity distribution was determined by adapting an iterative focusing approach to the specific acquisition geometry. This innovative experiment, combined with advanced processing, succeeded in reaching lateral and vertical resolutions (2.5 and 1 m) in accordance with the objectives of imaging fine-scale structures and correlation with in situ measurements. To illustrate the technological and processing advances of the approach, we present a first application performed during the ERIG3D cruise offshore Nigeria with the seismic data acquired over NG1, a buried Mass Transport Complex (MTC) interpreted as a debris flow from conventional data. Evidence for the slide nature of part of the MTC was provided by the high resolution of the OBH depth images. Rigid behaviour may be inferred from movement of coherent material inside the MTC and thrust structures at the base of the MTC. Furthermore, a silt layer that was disrupted during emplacement but has maintained its stratigraphic position supports a short transport distance. [source]


    A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
    Colin G. Farquharson
    SUMMARY Two automatic ways of estimating the regularization parameter in underdetermined, minimum-structure-type solutions to non-linear inverse problems are compared: the generalized cross-validation and L-curve criteria. Both criteria provide a means of estimating the regularization parameter when only the relative sizes of the measurement uncertainties in a set of observations are known. The criteria, which are established components of linear inverse theory, are applied to the linearized inverse problem at each iteration in a typical iterative, linearized solution to the non-linear problem. The particular inverse problem considered here is the simultaneous inversion of electromagnetic loop-loop data for 1-D models of both electrical conductivity and magnetic susceptibility. The performance of each criterion is illustrated with inversions of a variety of synthetic and field data sets. In the great majority of examples tested, both criteria successfully determined suitable values of the regularization parameter, and hence credible models of the subsurface. [source]
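
    At each linearized iteration, the GCV criterion can be evaluated cheaply from the SVD of the current Jacobian. A sketch for a generic linear Tikhonov problem follows (synthetic matrix, not the loop-loop electromagnetic kernel of the paper):

```python
import numpy as np

def gcv(A, y, lambdas):
    """GCV function for Tikhonov regularization min ||Ax - y||^2 + lam^2 ||x||^2,
    evaluated through the SVD filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    y_perp2 = y @ y - beta @ beta          # part of y outside the column space
    out = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + y_perp2
        out.append(resid2 / (A.shape[0] - f.sum()) ** 2)
    return np.array(out)

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40)) * 0.9 ** np.arange(40)   # ill-conditioned columns
y = A @ np.ones(40) + 0.01 * rng.normal(size=60)       # noisy synthetic data
lams = np.logspace(-6, 1, 200)
print(f"GCV picks lambda = {lams[np.argmin(gcv(A, y, lams))]:.2e}")
```

    The L-curve criterion instead picks the corner of the log-log curve of solution norm versus residual norm over the same sweep of lambda values; both need only relative data uncertainties, as the summary notes.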


    Velocity/interface model building in a thrust belt by tomographic inversion of global offset seismic data

    GEOPHYSICAL PROSPECTING, Issue 1 2003
    P. Dell'Aversana
    Between September and November 1999, two test seismic lines were recorded in the southern Apennine region of southern Italy using the global offset technique, which involves the acquisition of a wide offset range using two simultaneously active seismic spreads. One consisted of a symmetrical spread moving along the line, with geophone arrays every 30 m and a maximum offset of 3.6 km. The other one consisted of fixed geophone arrays every 90 m with a maximum offset of 18 km. This experimental acquisition project was carried out as part of the enhanced seismic in thrust belt (ESIT) research project, funded by the European Union, Enterprise Oil and Eni-Agip. An iterative and interactive tomographic inversion of refraction/reflection arrivals was carried out on the data from line ESIT700 to produce a velocity/interface model in depth, which used all the available offsets. The tomographic models allowed the reconstruction of layer interface geometries and interval velocities for the target carbonate platform (Apula) and the overburden sequence. The value of this technique is highlighted by the fact that the standard approach, based on near-vertical reflection seismic and a conventional processing flow, produced poor seismic images in both stack and migrated sections. [source]


    The key-group method

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 6 2003
    A. R. Yarahmadi Bafghi
    Abstract This paper proposes an extension to the key-block method, called the 'key-group method', that considers not only individual key blocks but also groups of collapsable blocks in an iterative and progressive analysis of the stability of discontinuous rock slopes. The basics of the key-block method are recalled herein and then used to show how key groups can be identified. We reveal that a key group must contain at least one basic key block, yet this condition alone is not sufficient. The second block candidate for grouping must be another key block or a block whose movement-preventing faces are common to one or more single key blocks. We also show that the proposed method yields more realistic results than the basic key-block method, and a comparison with results obtained using a distinct element analysis demonstrates the ability of this new method. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Application of radial basis meshless methods to direct and inverse biharmonic boundary value problems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2005
    Jichun Li
    Abstract In this paper, we develop a non-iterative way to solve biharmonic boundary value problems by using a radial basis meshless method. This is an original application of a meshless method to solving inverse problems without any iteration, since traditional numerical methods for inverse boundary value problems are mainly iterative and hence very time-consuming. Numerical examples are presented for inverse biharmonic boundary value problems and the corresponding direct problems, since solving direct problems is a preliminary step for inverse problems. All our examples of direct and inverse problems are solved within seconds of CPU time on a standard PC, which makes the proposed technique a strong candidate for widespread application to other inverse problems. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Cost optimization of composite floors using neural dynamics model

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2001
    Hojjat Adeli
    Abstract The design of composite beams is complicated and highly iterative. Depending on the design parameters a beam can be fully composite or partially composite. In the case of design on the basis of the American Institute of Steel Construction (AISC) Load and Resistance Factor Design (LRFD) one has to consider the plastic deformations. As pointed out by Lorenz, the real advantage of the LRFD code can be realized in the minimum cost design. In this article, we present a general formulation for the cost optimization of composite beams based on the AISC LRFD specifications by including the costs of (a) concrete, (b) steel beam, and (c) shear studs. The problem is formulated as a mixed integer-discrete non-linear programming problem and solved by the recently patented neural dynamics model of Adeli and Park (U.S. patent 5,815,394 issued on September 29, 1998). It is shown that use of the cost optimization algorithm presented in this article results in substantial cost savings. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Numerical aspects of a real-time sub-structuring technique in structural dynamics

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2007
    R. Sajeeb
    Abstract A time domain coupling technique, involving combined computational and experimental modelling, for vibration analysis of structures built-up of linear/non-linear substructures is developed. The study permits, in principle, one or more of the substructures to be modelled experimentally with measurements being made only on the interfacial degrees of freedom. The numerical and experimental substructures are allowed to communicate in real time within the present framework. The proposed strategy involves a two-stage scheme: the first is iterative in nature and is implemented at the initial stages of the solution in a non-real-time format; the second is non-iterative, employs an extrapolation scheme and proceeds in real time. Issues on time delays during communications between different substructures are discussed. An explicit integration procedure is shown to lead to solutions with high accuracy while retaining path sensitivity to initial conditions. The stability of the integration scheme is also discussed and a method for numerically dissipating the temporal growth of high-frequency errors is presented. For systems with non-linear substructures, the integration procedure is based on a multi-step transversal linearization method; and, to account for time delays, we employ a multi-step extrapolation scheme based on the reproducing kernel particle method. Numerical illustrations on a few low-dimensional vibrating structures are presented and these examples are fashioned after problems of seismic qualification testing of engineering structures using real-time substructure testing techniques. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Numerical accuracy of a Padé-type non-reflecting boundary condition for the finite element solution of acoustic scattering problems at high-frequency

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2005
    R. Kechroud
    Abstract The present text deals with the numerical solution of two-dimensional high-frequency acoustic scattering problems using a new high-order, asymptotic Padé-type artificial boundary condition. The Padé-type condition is easy to implement in a Galerkin least-squares (iterative) finite element solver for arbitrarily convex-shaped boundaries. The accuracy of the method is investigated for different model problems and for the scattering problem by a submarine-shaped scatterer. As a result, relatively small computational domains, optimized according to the shape of the scatterer, can be considered while yielding accurate computations at high frequencies. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Forced vibrations in the medium frequency range solved by a partition of unity method with local information

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2005
    E. De Bel
    Abstract A new approach for the computation of forced vibrations up to the medium frequency range is formulated for thin plates. It is based on the partition of unity method (PUM), first proposed by Babuška, and used here to solve the elastodynamic problem. The paper focuses on the introduction of local information, coming from previous approximations, into the basis of the PUM in order to enhance the accuracy of the solution. The method may be iterative and generates a PUM approximation leading to smaller models than the finite element ones required for the same accuracy level. It shows very promising results in terms of frequency range, accuracy and computational time. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Verification testing in computational fluid dynamics: an example using Reynolds-averaged Navier-Stokes methods for two-dimensional flow in the near wake of a circular cylinder

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2003
    Jennifer Richmond-Bryant
    Abstract Verification testing was performed for various Reynolds-averaged Navier-Stokes methods for uniform flow past a circular cylinder at Re = 5232. The standard and renormalized group (RNG) versions of the k-ε method were examined, along with the Boussinesq, Speziale and Launder constitutive relationships. Wind tunnel experiments for flow past a circular cylinder were also performed to obtain a comparative data set. Preliminary studies demonstrate poor convergence for the Speziale relationship. Verification testing with the standard and RNG k-ε models suggests that the simulations exhibit global monotonic convergence for the Boussinesq models. However, the global order of accuracy of the methods was much lower than the expected order of accuracy of 2. For this reason, pointwise convergence ratios and orders of accuracy were computed to show that not all sampling locations had converged (standard k-ε model: 19% failed to converge; RNG k-ε model: 14% failed to converge). When the non-convergent points were removed from consideration, the average orders of accuracy are closer to the expected value (standard k-ε model: 1.41; RNG k-ε model: 1.27). Poor iterative and global grid convergence was found for the RNG k-ε/Launder model. The standard and RNG k-ε models with the Boussinesq relationship were compared with experimental data and yielded results significantly different from the experiments. Copyright © 2003 John Wiley & Sons, Ltd. [source]
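
    The pointwise convergence ratios and observed orders of accuracy used in this kind of verification testing follow directly from solutions on three systematically refined grids. A minimal sketch with invented sampled values:

```python
import numpy as np

def observed_order(f1, f2, f3, r):
    """Convergence ratio and observed order of accuracy from solutions on
    three grids (f1 finest ... f3 coarsest) with constant refinement ratio r."""
    R = (f2 - f1) / (f3 - f2)                        # monotonic if 0 < R < 1
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)    # observed order
    return R, p

# a sampled quantity (e.g., a velocity component) from fine/medium/coarse grids
R, p = observed_order(f1=1.002, f2=1.010, f3=1.041, r=2.0)
print(f"R = {R:.3f} (monotonic if 0 < R < 1), p = {p:.2f}")
```

    Evaluating R and p at every sampling location, rather than globally, is what exposed the non-convergent points reported in the abstract.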


    An exploratory study of the influences that compromise the sun protection of young adults

    INTERNATIONAL JOURNAL OF CONSUMER STUDIES, Issue 6 2008
    Ngaia Calder
    Abstract This paper reports on an exploratory research project designed to gain a deeper understanding of the influences on ultraviolet radiation (UVR) behaviours among high-risk young adults, to determine what compromises the adoption of protection measures for this group. A dual approach using focus groups and the Zaltman Metaphor Elicitation Technique was used to provide personal narratives related to UVR behaviour for tertiary education students. Results from both 'conversations' were content-analysed using an iterative 'bootstrapping' technique to identify key themes and issues. This exploratory research identified a number of key themes, including effect on mood, influence of culture, the value of tans, unrealistic optimism, risk orientation, and the role of experience. This group felt that they had not been targeted effectively by public health campaigns and did not fully understand the dangers of high-risk UVR behaviours. Although a number of previous studies have investigated the relationship between knowledge and behaviour, and largely concluded that increases in knowledge do not lead to increases in the adoption of protection practices, the preliminary findings of this study reveal that knowledge and perceived self-efficacy of protective practices are extremely high; what is lacking is the perceived threat, and thus the motivation to adopt such behaviours. The conclusions drawn from this research indicate that a variety of important influencing factors compromise UVR behaviours, in particular the lack of perceived seriousness of, and severity attached to, long-term consequences such as skin cancer. The recommendation to address the imbalance of 'perceived threat' and 'outcome expectations' is to focus on increasing knowledge of skin cancer, particularly susceptibility to skin cancer and the severity of the condition. [source]


    Monte Carlo modelling of abrupt InP/InGaAs HBTs

    INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 4 2003
    Pau Garcias-Salvá
    Abstract In this paper a Monte Carlo simulator focused on the modelling of abrupt heterojunction bipolar transistors (HBTs) is described. In addition, simulation results for an abrupt InP/InGaAs HBT are analysed in order to describe the behaviour of this kind of device, and are compared with experimental data. A distinctive feature of InP/InGaAs HBTs is the spike-like discontinuity in the Ec level at the emitter-base heterojunction interface. The transport of electrons through this potential barrier can be described by the Schrödinger equation. Therefore, in our simulator we have consistently included the numerical solution of this equation in the iterative Monte Carlo procedure. The simulation results for the transistor include the density of electrons along the device and their velocity, kinetic energy and occupation of the upper conduction sub-bands. It is shown that the electrons in the base region and in the base-collector depletion region are far from thermal equilibrium, and therefore the drift-diffusion transport model is no longer applicable. Finally, the experimental and simulated Gummel plots JC(VBE) and JB(VBE) are compared in the bias range of common operation of these transistors, showing good agreement. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    False Promises: The Tobacco Industry, "Low Tar" Cigarettes, and Older Smokers

    JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 9 2008
    Janine K. Cataldo RN
    To investigate the role of the tobacco industry in marketing to and sustaining tobacco addiction among older smokers and aging baby boomers, we performed archival searches of electronic archives of internal tobacco company documents using a snowball sampling approach. Analysis was done using iterative and comparative review of documents, classification by themes, and a hermeneutic interpretive approach to develop a case study. Based on extensive marketing research, tobacco companies aggressively targeted older smokers and sought to prevent them from quitting. Innovative marketing approaches were used. "Low tar" cigarettes were developed in response to the health concerns of older smokers, despite industry knowledge that such products had no health advantage and did not help smokers quit. Tobacco industry activities influence the context of cessation for older smokers in several ways. Through marketing "low tar" or "light" cigarettes to older smokers "at risk" of quitting, the industry contributes to the illusion that such cigarettes are safer, although "light" cigarettes may make it harder for addicted smokers to quit. Through targeted mailings of coupons and incentives, the industry discourages older smokers from quitting. Through rhetoric aimed at convincing addicted smokers that they alone are responsible for their smoking, the industry contributes to self-blame, a documented barrier to cessation. Educating practitioners, older smokers, and families about the tobacco industry's influence may decrease the tendency to "blame the victim," thereby enhancing the likelihood of older adults receiving tobacco addiction treatment. Comprehensive tobacco control measures must include a focus on older smokers. [source]


    Use of serial pig body weights for genetic evaluation of daily gain

    JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 2 2010
    B. Zumbach
    Summary This study examined the utility of serial weights from FIRE (Feed Intake Recording Equipment, Osborne Industries, Inc., Osborne, KS, USA) stations for an analysis of daily gain. Data included 884 132 body weight records from 3888 purebred Duroc pigs. Pigs entered the feeder station at age 77-149 days and left at age 95-184 days. A substantial number of records were abnormal, showing body weight close to 0 or up to twice the average weight. Plots of body weights for some animals indicated two parallel growth curves. Initial editing used a robust regression, which was a two-step procedure. In the first step, a quadratic growth curve was estimated assuming small or 0 weights for points far away from the curve; the process is iterative. In the second step, weights more than 1.5 SD from the estimated growth curve were treated as outliers. The retained body weight records (607 597) were averaged to create average daily weights (170 443) and then used to calculate daily gains (152 636). Additional editing steps included retaining only animals with ≥50 body weight records and an SD of daily gain ≤2 kg, followed by removing records outside 3 SD from the mean for a given age across all the animals; the resulting data set included 69 068 records of daily gain from 1921 animals. Daily gain based on daily, weekly and bi-weekly intervals was analysed using repeatability models. Heritability estimates were 4, 6 and 9%, respectively. The last two estimates correspond to a heritability of 28% for a 12-week interval. For daily gain averaged weekly, the estimate of heritability obtained with a random regression model varied from 0.07 to 0.10. After extensive editing, body weight records from automatic feeding stations are useful for genetic analyses of daily gain over weekly or bi-weekly, but not daily, intervals. [source]
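
    The two-step editing can be sketched as an iteratively reweighted quadratic growth-curve fit that down-weights points far from the curve, followed by the 1.5 SD outlier cut. The snippet below illustrates the flavor of the procedure on simulated data; the published editing differs in detail:

```python
import numpy as np

def edit_serial_weights(age, wt, n_iter=20, cut=1.5):
    """Two-step robust editing: (1) fit a quadratic growth curve by iteratively
    reweighted LS, giving points far from the curve little weight; (2) flag
    records more than `cut` robust SDs from the fitted curve as outliers."""
    X = np.vander(age, 3)                        # quadratic in age
    w = np.ones_like(wt)
    for _ in range(n_iter):                      # step 1: robust iterative fit
        beta = np.linalg.lstsq(X * w[:, None], wt * w, rcond=None)[0]
        resid = wt - X @ beta
        s = 1.4826 * np.median(np.abs(resid))    # robust scale (MAD)
        w = 1.0 / (1.0 + (resid / (3.0 * s)) ** 2)
    keep = np.abs(wt - X @ beta) <= cut * s      # step 2: outlier cut
    return keep, beta

rng = np.random.default_rng(5)
age = np.arange(80.0, 150.0)                     # days on test
wt = 30.0 + 0.85 * age + rng.normal(0.0, 1.0, age.size)
wt[::9] *= 2.0                                   # double-weight artefacts
keep, _ = edit_serial_weights(age, wt)
print(f"kept {keep.sum()} of {keep.size} records")
```

    The doubled weights mimic the "two parallel growth curves" artefact described in the summary; because the robust fit barely responds to them, the second step removes them cleanly.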


    Systems biology approaches for toxicology,

    JOURNAL OF APPLIED TOXICOLOGY, Issue 3 2007
    William Slikker Jr
    Abstract Systems biology/toxicology involves the iterative and integrative study of perturbations by chemicals and other stressors of gene and protein expression that are linked firmly to toxicological outcome. In this review, the value of systems biology to enhance the understanding of complex biological processes such as neurodegeneration in the developing brain is explored. Exposure of the developing mammal to NMDA (N-methyl-D-aspartate) receptor antagonists perturbs the endogenous NMDA receptor system and results in enhanced neuronal cell death. It is proposed that continuous blockade of NMDA receptors in the developing brain by NMDA antagonists such as ketamine (a dissociative anesthetic) causes a compensatory up-regulation of NMDA receptors, which makes the neurons bearing these receptors subsequently more vulnerable (e.g. after ketamine washout) to the excitotoxic effects of endogenous glutamate: the up-regulation of NMDA receptors allows for the accumulation of toxic levels of intracellular Ca2+ under normal physiological conditions. Systems biology, as applied to toxicology, provides a framework in which information can be arranged in the form of a biological model. In our ketamine model, for example, blockade of NMDA receptor up-regulation by the co-administration of antisense oligonucleotides that specifically target NMDA receptor NR1 subunit mRNA dramatically diminishes ketamine-induced cell death. Preliminary gene expression data support the role of apoptosis as a mode of action of ketamine-induced neurotoxicity. In addition, ketamine-induced cell death is also prevented by the inhibition of NF-κB translocation into the nucleus. This process is known to respond to changes in the redox state of the cytoplasm and has been shown to respond to NMDA-induced cellular stress. Although comprehensive gene expression/proteomic studies and mathematical modeling remain to be carried out, biological models have been established in an iterative manner to allow for the confirmation of biological pathways underlying NMDA antagonist-induced cell death in the developing nonhuman primate and rodent. Published in 2007 John Wiley & Sons, Ltd. [source]


    A Bayesian online inferential model for evaluation of analyzer performance

    JOURNAL OF CHEMOMETRICS, Issue 2 2005
    A. J. Willis
    Abstract An iterative Bayesian approach is developed for the inversion of flow instrumentation condition-monitoring problems. For the case of Gaussian random variables the solution reduces to an iterative weighted least squares approach amenable to online implementation, with a weighting derived from the Bayesian prior. The algorithm is illustrated with reference to a Sulfreen unit in a refinery, where concentrations of H2S and SO2 are measured by a number of input analyzers in parallel, prior to their combination and reaction. This paper discusses approaches to evaluating the performance of each instrument separately by monitoring the inferred bias using output data from the process. Copyright © 2005 John Wiley & Sons, Ltd. [source]
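
    For the Gaussian case described in the abstract, the posterior bias estimate is a weighted least squares solution in which the prior enters as extra weighted information. A stylized batch sketch with invented data follows (the paper's online version would update the same quantities recursively as new process data arrive):

```python
import numpy as np

def map_bias(X, y, R_inv, b0, P_inv):
    """Gaussian MAP estimate of analyzer biases: weighted LS in which the
    prior b ~ N(b0, P) acts as additional weighted observations."""
    return np.linalg.solve(X.T @ R_inv @ X + P_inv,
                           X.T @ R_inv @ y + P_inv @ b0)

rng = np.random.default_rng(2)
true_bias = np.array([0.8, -0.1])        # biases of two parallel analyzers
X = np.tile(np.eye(2), (100, 1))         # each residual isolates one analyzer
y = X @ true_bias + 0.2 * rng.normal(size=200)
b_hat = map_bias(X, y,
                 R_inv=np.eye(200) / 0.2**2,   # measurement precision
                 b0=np.zeros(2),               # prior mean: no bias
                 P_inv=np.eye(2) / 1.0**2)     # prior precision (weighting)
print(b_hat)   # shrunk slightly toward 0, near the true biases
```

    The prior precision P_inv is exactly the "weighting derived from the Bayesian prior" mentioned in the abstract: a tight prior keeps the inferred bias near zero until the process data insist otherwise.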


    Comparative analysis of the conformational profile of substance P using simulated annealing and molecular dynamics

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 16 2004
    Francesc J. Corcho
    Abstract The present study describes an extensive conformational search of substance P using two different computational methods: iterative simulated annealing on the one hand, and molecular dynamics simulations at 300 and 400 K on the other. With the former method the peptide was studied in vacuo with a dielectric constant of 80, whereas with the latter it was studied in a box of TIP3P water molecules. Analysis of the results obtained with both methodologies was carried out using an in-house cluster analysis method based on information theory. The two sampling methodologies and the different environments used in the calculations are also compared. Finally, the conformational motifs that are characteristic of substance P in a hydrophilic environment are presented and compared with the experimental results available in the literature. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1937-1952, 2004 [source]


    Integrated Management of Physician-delivered Alcohol Care for Tuberculosis Patients: Design and Implementation

    ALCOHOLISM, Issue 2 2010
    Shelly F. Greenfield
    Background: While the integration of alcohol screening, treatment, and referral in primary care and other medical settings in the U.S. and worldwide has been recognized as a key health care priority, it is not routinely done. In spite of the high co-occurrence and excess mortality associated with alcohol use disorders (AUDs) among individuals with tuberculosis (TB), there are no studies evaluating effectiveness of integrating alcohol care into routine treatment for this disorder. Methods: We designed and implemented a randomized controlled trial (RCT) to determine the effectiveness of integrating pharmacotherapy and behavioral treatments for AUDs into routine medical care for TB in the Tomsk Oblast Tuberculosis Service (TOTBS) in Tomsk, Russia. Eligible patients are diagnosed with alcohol abuse or dependence, are newly diagnosed with TB, and initiating treatment in the TOTBS with Directly Observed Therapy-Short Course (DOTS) for TB. Utilizing a factorial design, the Integrated Management of Physician-delivered Alcohol Care for Tuberculosis Patients (IMPACT) study randomizes eligible patients who sign informed consent into 1 of 4 study arms: (1) Oral Naltrexone + Brief Behavioral Compliance Enhancement Therapy (BBCET) + treatment as usual (TAU), (2) Brief Counseling Intervention (BCI) + TAU, (3) Naltrexone + BBCET + BCI + TAU, or (4) TAU alone. Results: Utilizing an iterative, collaborative approach, a multi-disciplinary U.S. and Russian team has implemented a model of alcohol management that is culturally appropriate to the patient and TB physician community in Russia. Implementation to date has achieved the integration of routine alcohol screening into TB care in Tomsk; an ethnographic assessment of knowledge, attitudes, and practices of AUD management among TB physicians in Tomsk; translation and cultural adaptation of the BCI to Russia and the TB setting; and training and certification of TB physicians to deliver oral naltrexone and brief counseling interventions for alcohol abuse and dependence as part of routine TB care. The study is successfully enrolling eligible subjects in the RCT to evaluate the relationship of integrating effective pharmacotherapy and brief behavioral intervention on TB and alcohol outcomes, as well as reduction in HIV risk behaviors. Conclusions: The IMPACT study utilizes an innovative approach to adapt 2 effective therapies for treatment of alcohol use disorders to the TB clinical services setting in the Tomsk Oblast, Siberia, Russia, and to train TB physicians to deliver state of the art alcohol pharmacotherapy and behavioral treatments as an integrated part of routine TB care. The proposed treatment strategy could be applied elsewhere in Russia and in other settings where TB control is jeopardized by AUDs. If demonstrated to be effective, this model of integrating alcohol interventions into routine TB care has the potential for expanded applicability to other chronic co-occurring infectious and other medical conditions seen in medical care settings. [source]


    Toward Deterministic Material Removal and Surface Figure During Fused Silica Pad Polishing

    JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 5 2010
    Tayyab I. Suratwala
    The material removal and surface figure after ceria pad polishing of fused silica glass have been measured and analyzed as a function of kinematics, loading conditions, and polishing time. Also, the friction at the workpiece/lap interface, the slope of the workpiece relative to the lap plane, and lap viscoelastic properties have been measured and correlated to material removal. The results show that the relative velocity between the workpiece and the lap (i.e., the kinematics) and the pressure distribution determine the spatial and temporal material removal, and hence the final surface figure of the workpiece. In cases where the applied loading and relative velocity distribution over the workpiece are spatially uniform, a significant nonuniformity in material removal, and thus surface figure, is observed. This is due to a nonuniform pressure distribution resulting from: (1) a moment caused by a pivot point and interface friction forces; (2) viscoelastic relaxation of the polyurethane lap; and (3) a physical workpiece/lap interface mismatch. Both the kinematics and these nonuniformities in the pressure distribution are quantitatively described, and have been combined to develop a spatial and temporal model, based on Preston's equation, called Surface Figure or SurF. The surface figure simulations are consistent with the experiment for a wide variety of polishing conditions. This study is an important step toward deterministic full-aperture polishing, allowing optical glass fabrication to be performed in a more repeatable, less iterative, and hence more economical manner. [source]
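
    Preston's equation, on which SurF is built, says the local removal rate is a material constant times pressure times relative speed: dh/dt = k_p P V. A minimal illustration with assumed nominal values shows how kinematics alone already imprint a figure on the part:

```python
import numpy as np

# Preston's equation: dh/dt = k_p * P * V. For a stationary workpiece held
# against a lap rotating at omega, V = omega * r, so removal grows linearly
# with radius: the kind of kinematic nonuniformity the SurF model captures.
k_p = 1.0e-13                  # Preston coefficient, m^2/N (assumed value)
P = 7.0e3                      # nominal interface pressure, Pa (assumed)
omega = 2.0 * np.pi * 1.0      # lap rotation rate, rad/s
t = 3600.0                     # polishing time, s
r = np.linspace(0.0, 0.05, 101)       # radius across a 100 mm workpiece, m

removal = k_p * P * (omega * r) * t   # removed thickness profile, m
print(f"centre {removal[0] * 1e6:.2f} um, edge {removal[-1] * 1e6:.2f} um")
```

    SurF generalizes this integral by inserting the measured, spatially varying pressure distribution (moment, viscoelastic lap relaxation, interface mismatch) and the full relative-velocity field of the actual kinematics.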


    Automatic identification of seasonal transfer function models by means of iterative stepwise and genetic algorithms

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2008
    Monica Chiogna
    Abstract In this article, we introduce an automatic identification procedure for transfer function models. These models are commonplace in time-series analysis, but their identification can be complex. To tackle this problem, we propose to couple a nonlinear conditional least-squares algorithm with a genetic search over the model space. We illustrate the performance of our proposal with examples on simulated and real data. [source]
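
    A toy version of a genetic search over the model space can be sketched for plain ARMA(p, q) orders scored by AIC; the paper treats full transfer function models and couples the search with a nonlinear conditional least-squares algorithm. The sketch below assumes statsmodels and invented data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(4)
y = ArmaProcess([1.0, -0.7], [1.0, 0.4]).generate_sample(400)  # ARMA(1,1) data

def fitness(order):
    """Negative AIC of an ARMA(p, q) fit; the search maximizes this."""
    try:
        return -ARIMA(y, order=(order[0], 0, order[1])).fit().aic
    except Exception:
        return -np.inf

def ga_search(pop=8, gens=5, omax=3):
    """Tiny genetic search over (p, q) with selection and mutation."""
    popu = rng.integers(0, omax + 1, size=(pop, 2))      # random (p, q) pairs
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in popu])
        parents = popu[np.argsort(scores)[-pop // 2:]]   # keep the fitter half
        kids = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        mutate = rng.random(kids.shape) < 0.3            # mutate some orders
        kids[mutate] = rng.integers(0, omax + 1, mutate.sum())
        popu = np.vstack([parents, kids])
    return tuple(popu[np.argmax([fitness(ind) for ind in popu])])

print("selected (p, q):", ga_search())   # usually recovers orders near (1, 1)
```

    On a model space this small, exhaustive search would do; the genetic search pays off when transfer function terms multiply the number of candidate structures beyond what stepwise enumeration can cover.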


    An efficient gridding reconstruction method for multishot non-Cartesian imaging with correction of off-resonance artifacts

    MAGNETIC RESONANCE IN MEDICINE, Issue 6 2010
    Yuguang Meng
    Abstract An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in the construction of the transformation matrix for gridding (T) as the convolution of two sparse matrices, the former determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in the Cartesian space. The resulting T matrix is also sparse, and the associated system can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the method for different applications. The performance of the proposed method was demonstrated by numerical simulation and by multiple-shot spiral imaging of rat brain at 4.7 T. Magn Reson Med, 2010. © 2010 Wiley-Liss, Inc. [source]
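
    A 1-D toy of the two ingredients, a sparse gridding matrix T and an iterative conjugate-gradient solve of the resulting normal equations, is sketched below. It omits the off-resonance factor that the paper convolves into T:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gridding_matrix(kx, N):
    """Sparse matrix T mapping a length-N Cartesian grid to arbitrary sample
    positions kx, with linear-interpolation weights (a 1-D toy kernel)."""
    i0 = np.floor(kx).astype(int)
    frac = kx - i0
    rows = np.repeat(np.arange(kx.size), 2)
    cols = np.stack([i0, (i0 + 1) % N], axis=1).ravel()
    vals = np.stack([1.0 - frac, frac], axis=1).ravel()
    return sp.csr_matrix((vals, (rows, cols)), shape=(kx.size, N))

N = 256
truth = np.zeros(N)
truth[100:130] = 1.0                              # simple 1-D "object"
kx = np.sort(np.random.default_rng(3).uniform(0, N - 1, 4 * N))  # non-Cartesian
T = gridding_matrix(kx, N)
data = T @ truth                                  # simulated measurements

# iterative conjugate-gradient solve of the (lightly regularized) normal equations
x, info = spla.cg(T.T @ T + 1e-8 * sp.eye(N), T.T @ data)
print(info, float(np.abs(x - truth).max()))
```

    Because T stays sparse, each CG iteration costs only two sparse matrix-vector products, which is the source of the speed/accuracy tradeoff the abstract describes: fewer iterations trade fidelity for reconstruction time.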