Approach Leads (approach + lead)

Selected Abstracts


Toxicity assessment of mono-substituted benzenes and phenols using a Pseudomonas initial oxygen uptake assay

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2005
Ded-Shih Huang
Abstract A methodology is presented for assessing the toxicity of chemical substances through their inhibitory action toward the Pseudomonas initial oxygen uptake (PIOU) rate. The current studies reveal that the PIOU assay is rapid, cost-efficient, and easy to perform. The oxygen uptake rate was found to be associated with a putative benzoate transporter and highly dependent on benzoate concentration. The putative benzoate transporter has been shown to follow Michaelis–Menten kinetics. Most phenols were found to be noncompetitive inhibitors of the benzoate transporter. The inhibition constant (Ki) of these noncompetitive inhibitors can be related to the concentration causing 50% oxygen uptake inhibition in Pseudomonas putida. Modeling these data using the response-surface approach leads to the development of a quantitative structure–activity relationship (QSAR) for the toxicity of phenols: log(1/Ki) = −0.435 (±0.038) ELUMO + 0.517 (±0.027) log KOW − 2.340 (±0.068), where ELUMO is the energy of the lowest unoccupied molecular orbital (n = 49, r2 = 0.930, s = 0.107, adjusted r2 = 0.926, F = 303.1). A comparison of QSAR models derived from the Ki data of the PIOU method and the toxicity data of the 40-h Tetrahymena pyriformis growth inhibition assay (Tetratox) indicated a high correlation between the two approaches (r2 = 0.925). [source]
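
As a concrete illustration, the published regression can be evaluated directly; the minimal Python sketch below does so for a hypothetical phenol whose descriptor values are placeholders, not data from the study.

```python
# Minimal sketch: evaluate the reported phenol QSAR,
#   log(1/Ki) = -0.435 * E_LUMO + 0.517 * log(Kow) - 2.340
# The descriptor values below are illustrative placeholders only.

def log_inv_ki(e_lumo: float, log_kow: float) -> float:
    """Predicted log(1/Ki) from the published regression coefficients."""
    return -0.435 * e_lumo + 0.517 * log_kow - 2.340

e_lumo = -0.40   # hypothetical LUMO energy for an example phenol
log_kow = 2.50   # hypothetical log octanol-water partition coefficient

print(f"predicted log(1/Ki) = {log_inv_ki(e_lumo, log_kow):.3f}")
```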


Iterative channel estimation and data detection in frequency-selective fading MIMO channels

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2004
Maja Lončar
Signals transmitted through multiple-input multiple-output (MIMO) wireless channels suffer from multiple-access interference (MAI), multipath propagation and additive noise. Iterative multiuser receiver algorithms mitigate these signal impairments while offering a good tradeoff between performance and complexity. The receiver presented in this paper performs channel estimation, multiuser detection and decoding in an iterative manner. The estimation of the frequency-selective, block-fading channel is initialized using the pilot symbols. In subsequent iterations, soft decisions on all the data symbols are used to improve the channel estimates. This approach leads to a significant improvement in overall receiver performance compared with other schemes. The bit-error-rate (BER) performance of the receiver is evaluated by simulations for different parameter setups. Copyright © 2004 AEI. [source]
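
A drastically simplified stand-in for the estimate-detect-re-estimate loop is sketched below: single user, flat block-fading, BPSK, and hard decisions in place of the paper's soft decisions, multiuser detection and decoding. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PILOT, N_DATA, SNR_DB = 8, 120, 6
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
pilots = np.ones(N_PILOT)                    # known BPSK pilot symbols
data = rng.choice([-1.0, 1.0], size=N_DATA)  # unknown BPSK data symbols
x = np.concatenate([pilots, data])
sigma = 10 ** (-SNR_DB / 20)
noise = (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size)) / np.sqrt(2)
y = h * x + sigma * noise

# Iteration 0: channel estimate from the pilot symbols only
h_hat = np.mean(y[:N_PILOT] / pilots)
for _ in range(3):
    # Detect data with the current channel estimate
    d_hat = np.sign(np.real(np.conj(h_hat) * y[N_PILOT:]))
    # Re-estimate the channel from pilots AND decided data symbols
    x_hat = np.concatenate([pilots, d_hat])
    h_hat = np.vdot(x_hat, y) / np.vdot(x_hat, x_hat)

print(f"|h - h_hat| = {abs(h - h_hat):.4f}, BER = {np.mean(d_hat != data):.3f}")
```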


Incorporating covariates in mapping heterogeneous traits: a hierarchical model using empirical Bayes estimation

GENETIC EPIDEMIOLOGY, Issue 7 2007
Swati Biswas
Abstract Complex genetic traits are inherently heterogeneous, i.e., they may be caused by different genes, or non-genetic factors, in different individuals. So, for mapping genes responsible for these diseases using linkage analysis, heterogeneity must be accounted for in the model. Heterogeneity across different families can be modeled using a mixture distribution by letting each family have its own heterogeneity parameter denoting the probability that its disease-causing gene is linked to the marker map under consideration. A substantial gain in power is expected if covariates that can discriminate between the families of linked and unlinked types are incorporated in this modeling framework. To this end, we propose a hierarchical Bayesian model, in which the families are grouped according to various (categorized) levels of covariate(s). The heterogeneity parameters of families within each group are assigned a common prior, whose parameters are further assigned hyper-priors. The hyper-parameters are obtained by utilizing the empirical Bayes estimates. We also address related issues such as evaluating whether the covariate(s) under consideration are informative and grouping of families. We compare the proposed approach with one that does not utilize covariates and show that our approach leads to considerable gains in power to detect linkage and in precision of interval estimates through various simulation scenarios. An application to the asthma datasets of Genetic Analysis Workshop 12 also illustrates this gain in a real data analysis. Additionally, we compare the performances of microsatellite markers and single nucleotide polymorphisms for our approach and find that the latter clearly outperforms the former. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]
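
The admixture likelihood at the core of this kind of heterogeneity modelling can be sketched compactly: each family contributes alpha * LR + (1 - alpha), where LR is its linked-versus-unlinked likelihood ratio and alpha is the group-specific probability of linkage. The Python sketch below fits alpha separately per covariate group on simulated likelihood ratios; the paper's hierarchical priors and empirical Bayes hyper-parameter estimation are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def fit_alpha(lr: np.ndarray) -> float:
    """MLE of the proportion of linked families within one covariate group."""
    nll = lambda a: -np.sum(np.log(a * lr + (1.0 - a)))
    return minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x

# Simulated linked/unlinked likelihood ratios for two covariate groups:
lr_high = np.exp(rng.normal(1.0, 0.8, size=60))   # mostly linked families
lr_low = np.exp(rng.normal(-0.5, 0.8, size=60))   # mostly unlinked families

for name, lr in [("high-risk covariate", lr_high), ("low-risk covariate", lr_low)]:
    print(f"{name}: estimated P(linked) = {fit_alpha(lr):.2f}")
```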


Traveltime computation by wavefront-orientated ray tracing

GEOPHYSICAL PROSPECTING, Issue 1 2005
Radu Coman
ABSTRACT For multivalued traveltime computation on dense grids, we propose a wavefront-orientated ray-tracing (WRT) technique. At the source, we start with a few rays which are propagated stepwise through a smooth two-dimensional (2D) velocity model. The ray field is examined at wavefronts and a new ray might be inserted between two adjacent rays if one of the following criteria is satisfied: (1) the distance between the two rays is larger than a predefined threshold; (2) the difference in wavefront curvature between the rays is larger than a predefined threshold; (3) the adjacent rays intersect. The last two criteria may lead to oversampling by rays in caustic regions. To avoid this oversampling, we do not insert a ray if the distance between adjacent rays is smaller than a predefined threshold. We insert the new ray by tracing it from the source. This approach leads to an improved accuracy compared with the insertion of a new ray by interpolation, which is the method usually applied in wavefront construction. The traveltimes computed along the rays are used for the estimation of traveltimes on a rectangular grid. This estimation is carried out within a region bounded by adjacent wavefronts and rays. As for the insertion criterion, we consider the wavefront curvature and extrapolate the traveltimes, up to the second order, from the intersection points between rays and wavefronts to a gridpoint. The extrapolated values are weighted with respect to the distances to wavefronts and rays. Because dynamic ray tracing is not applied, we approximate the wavefront curvature at a given point using the slowness vector at this point and an adjacent point on the same wavefront. The efficiency of the WRT technique is strongly dependent on the input parameters which control the wavefront and ray densities. On the basis of traveltimes computed in a smoothed Marmousi model, we analyse these dependences and suggest some rules for a correct choice of input parameters. With suitable input parameters, the WRT technique allows an accurate traveltime computation using a small number of rays and wavefronts. [source]
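
The insertion test at a wavefront can be written down compactly; the sketch below is schematic, with illustrative thresholds and names that are not taken from the paper.

```python
import numpy as np

D_MAX = 50.0    # criterion 1: maximum allowed distance between adjacent rays
DK_MAX = 1e-3   # criterion 2: maximum allowed wavefront-curvature difference
D_MIN = 5.0     # caustic guard: never insert below this ray spacing

def needs_new_ray(p1, p2, k1, k2, crossed: bool) -> bool:
    """Decide whether a new ray should be traced from the source between two
    adjacent rays, given their wavefront points p1/p2, local wavefront
    curvatures k1/k2, and whether the rays intersected (criterion 3)."""
    d = np.linalg.norm(np.asarray(p2) - np.asarray(p1))
    if d < D_MIN:          # caustic region: avoid oversampling by rays
        return False
    return d > D_MAX or abs(k2 - k1) > DK_MAX or crossed

print(needs_new_ray((0.0, 0.0), (80.0, 0.0), 1.0e-4, 1.2e-4, False))  # True
```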


Quasi optimal finite difference method for Helmholtz problem on unstructured grids

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2010
Daniel T. Fernandes
Abstract A quasi optimal finite difference method (QOFD) is proposed for the Helmholtz problem. The stencils' coefficients are obtained numerically by minimizing a least-squares functional of the local truncation error for plane wave solutions in any direction. In one dimension this approach leads to a nodally exact scheme, with no truncation error, for uniform or non-uniform meshes. In two dimensions, when applied to a uniform cartesian grid, a 9-point sixth-order scheme is derived with the same truncation error as the quasi-stabilized finite element method (QSFEM) introduced by Babuška et al. (Comp. Meth. Appl. Mech. Eng. 1995; 128:325–359). Similarly, a 27-point sixth-order stencil is derived in three dimensions. The QOFD formulation, proposed here, is naturally applied on uniform, non-uniform and unstructured meshes in any dimension. Numerical results are presented showing optimal rates of convergence and reduced pollution effects for large values of the wave number. Copyright © 2009 John Wiley & Sons, Ltd. [source]
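
In one dimension the least-squares fit can be reproduced in a few lines: requiring the discrete operator to annihilate the plane waves exp(±ikx) recovers the nodally exact 3-point stencil [1, −2cos(kh), 1]. The sketch below, with an illustrative wave number and spacing, verifies this numerically.

```python
import numpy as np

k, h = 20.0, 0.05                    # wave number and grid spacing
offsets = np.array([-h, 0.0, h])     # 3-point stencil offsets

# One residual row per 1D plane-wave direction s = +1, -1:
#   r(s) = sum_j c_j * exp(i * k * s * x_j)
A = np.array([np.exp(1j * k * s * offsets) for s in (+1.0, -1.0)])

# Normalize the outer coefficients to 1 and fit the center coefficient
# in the least-squares sense (here the fit is exact).
rhs = -(A[:, 0] + A[:, 2])
c_mid = np.linalg.lstsq(A[:, [1]], rhs, rcond=None)[0][0]

print(f"fitted center coefficient : {c_mid.real:+.6f}")
print(f"nodally exact -2*cos(k*h) : {-2 * np.cos(k * h):+.6f}")
```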


Lexis that rings a bell: on the influence of auditory support in vocabulary acquisition

INTERNATIONAL JOURNAL OF APPLIED LINGUISTICS, Issue 2 2010
Andreas Bürki
This empirical study investigated the effectiveness of auditory support in vocabulary learning by comparing acquisition and retention of lexical items studied using a traditional paired-associates memorisation technique to results achieved using an audio-supported paired-associates technique. The subjects were 88 Korean university students. Results indicated that the audio-supported vocabulary learning approach leads to significantly higher rates of acquisition. This advantage was retained over the two months following treatment. The largest difference was noticed in pronunciation. Furthermore, it was found that success in the audio-supported approach was not significantly dependent on learning preferences, and that the approach enjoyed a higher level of acceptance among subjects than non-auditory paired-associates memorisation. [source]


Multilayer Nanocomplexes of Polymer and DNA Exhibit Enhanced Gene Delivery

ADVANCED MATERIALS, Issue 1 2008
M. Saul
Polymer-DNA complexes (polyplexes) are constructed with multiple layers of counter-polyions as DNA/polyethylenimine/poly(acrylic acid)/polyethylenimine. The increased association of polyethylenimine achieved by the multilayer approach leads to substantial increases in transgene expression for reporter plasmids without the need for the excess free polymer typically required for non-viral gene delivery. This method of polyplex preparation provides an opportunity to improve transgene expression for gene therapy approaches to disease treatment. [source]


Clustering-based scheduling: A new class of scheduling algorithms for single-hop lightwave networks

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2008
Sophia G. Petridou
Abstract In wavelength division multiplexing (WDM) star networks, the construction of the transmission schedule is a key issue that substantially affects network performance. Up to now, classic scheduling techniques have considered the nodes' requests in a sequential service order. However, these approaches are static and do not take into account the individual traffic pattern of each node. Owing to this major drawback, they suffer from low performance, especially when operating under asymmetric traffic. In this paper, a new class of scheduling algorithms for WDM star networks, based on the use of clustering techniques, is introduced. According to the proposed Clustering-Based Scheduling Algorithm (CBSA), the network's nodes are organized into clusters based on the number of their requests per channel. Their transmission priority is then defined beginning with the nodes belonging to clusters with higher demands and ending with the nodes of clusters with fewer requests. The main objective of the proposed scheme is to minimize the length of the schedule by rearranging the nodes' service order. Furthermore, the proposed CBSA scheme adopts a prediction mechanism to minimize the computational complexity of the scheduling algorithm. Extensive simulation results are presented, which clearly indicate that the proposed approach leads to significantly higher throughput-delay performance when compared with conventional scheduling algorithms. We believe that the proposed clustering-based approach can be the basis of a new generation of high-performance scheduling algorithms for WDM star networks. Copyright © 2008 John Wiley & Sons, Ltd. [source]
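
The clustering-and-ordering step can be sketched as below, assuming scikit-learn's KMeans as the clustering technique and simulated request counts; the paper's schedule construction and prediction mechanism are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
N_NODES, N_CHANNELS, N_CLUSTERS = 12, 4, 3

# Simulated per-node, per-channel request counts (asymmetric traffic)
requests = rng.poisson(lam=rng.uniform(1, 10, size=(N_NODES, 1)),
                       size=(N_NODES, N_CHANNELS))

labels = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit_predict(requests)

# Serve clusters in decreasing order of mean total demand
cluster_demand = [requests[labels == c].sum(axis=1).mean() for c in range(N_CLUSTERS)]
cluster_order = np.argsort(cluster_demand)[::-1]

service_order = [int(n) for c in cluster_order for n in np.where(labels == c)[0]]
print("node service order:", service_order)
```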


Synthesis of Sulfoximines and Sulfilimines with Aryl and Pyrazolylmethyl Substituents

ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 2-3 2010
Olga García Mancheño
Abstract Sulfoximines bearing pyrazolylmethyl and aryl substituents, which are relevant to the crop protection industry, and their corresponding sulfilimine intermediates, have been prepared from sulfide precursors by either iron-catalyzed nitrogen transfer reactions or metal-free imination procedures. Whereas the former approach leads to N-nosyl-substituted products, the latter affords N-cyano derivatives. [source]


Repeatability of Dietary Patterns Derived Using A-Priori and A-Posteriori Methods

JOURNAL OF APPLIED BIOBEHAVIORAL RESEARCH, Issue 1 2010
Vassiliki Bountziouka
We aimed to examine the repeatability of dietary patterns derived using a-priori and a-posteriori techniques. During 2008, 500 participants were enrolled and asked to fill in a food frequency questionnaire twice. The Mediterranean dietary pattern was assessed a-priori by the MedDietScore, while principal components analysis (PCA) and cluster analysis (CA) were used as the a-posteriori techniques. Results revealed that the overall MedDietScore was similar between the two recordings (M = 28, SD = 3.7 vs. M = 28, SD = 3.8). Although PCA revealed 13 patterns in each record (60% of explained variability), the food items characterizing each pattern varied between the two recordings. According to CA, three clusters were revealed in both records. The a-posteriori methods should be used with caution, while the a-priori approach leads to more robust results. [source]
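
One simple way to quantify the repeatability of a-posteriori patterns is Tucker's congruence coefficient between matched PCA loading vectors from the two recordings; the sketch below uses simulated data, and the study's actual comparison procedure was richer.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 20                                   # participants, food items
latent = rng.standard_normal((n, 3))             # shared dietary structure
W = rng.standard_normal((3, p))
rec1 = latent @ W + rng.standard_normal((n, p))  # first FFQ recording
rec2 = latent @ W + rng.standard_normal((n, p))  # second recording, new noise

def loadings(X, n_comp=3):
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_comp]

def congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

load1, load2 = loadings(rec1), loadings(rec2)
for i in range(3):
    print(f"component {i + 1}: congruence = {congruence(load1[i], load2[i]):.2f}")
```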


Modelling near-infrared signals for on-line monitoring in cheese manufacture

JOURNAL OF CHEMOMETRICS, Issue 2 2002
B. J. A. Mertens
Abstract This paper considers the analysis of a continuously monitored near-infrared reflectance signal at a single wavelength for the calibration of a process parameter in an application to food engineering for quality control. We describe how the information in the observed signals may be summarized by postulating an explicit statistical model on the signal. An exploratory data analysis may then be carried out on the profile summaries to evaluate whether and how the functional data provide information on the parameter to be calibrated. From a conceptual point of view, such an approach is not dissimilar to principal component regression methods, which use an intermediate decomposition through which information is summarized for calibration of a response. Using a designed experiment in the context of the quality-control application discussed, the paper demonstrates how the approach leads to important insights into the manner in which the functional data provide the required information on the outcome variable. Calculations are implemented through the Gibbs sampler. Calibration of the prediction equation takes place through meta-analysis of the summarized profile data in order to take the uncertainty inherent in the summaries into account. Copyright © 2002 John Wiley & Sons, Ltd. [source]
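
A non-Bayesian sketch of the summarize-then-calibrate idea: fit an explicit parametric curve to each monitored profile and regress the process parameter on the fitted summaries. The sigmoidal model and all data below are illustrative; the paper itself fits a Bayesian model via the Gibbs sampler and propagates summary uncertainty through a meta-analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(t, lo, hi, rate, mid):
    """Sigmoidal stand-in for a monitored NIR reflectance profile."""
    return lo + (hi - lo) / (1.0 + np.exp(-rate * (t - mid)))

rng = np.random.default_rng(4)
t = np.linspace(0, 60, 121)                # monitoring times (minutes)
true_mid = rng.uniform(20, 40, size=8)     # per-batch inflection times
y_target = 2.0 + 0.1 * true_mid            # process parameter to calibrate

summaries = []
for mid in true_mid:
    y = profile(t, 0.2, 1.0, 0.3, mid) + 0.02 * rng.standard_normal(t.size)
    popt, _ = curve_fit(profile, t, y, p0=[0.0, 1.0, 0.2, 30.0])
    summaries.append(popt[3])              # fitted inflection time as summary

slope, intercept = np.polyfit(summaries, y_target, deg=1)
print(f"calibration: parameter = {slope:.3f} * summary + {intercept:.3f}")
```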


A Cooperative Game Theory of Noncontiguous Allies

JOURNAL OF PUBLIC ECONOMIC THEORY, Issue 4 2001
Daniel G. Arce M.
This paper develops a cooperative game-theoretic representation of alliances with noncontiguous members that is based on cost savings from reducing overlapping responsibilities and sequestering borders. For various scenarios, three solutions (the Shapley value, nucleolus, and core's centroid) are found and compared. Even though their underlying ethical norm varies, the solutions are often identical for cases involving contiguous allies and for rectangular arrays of noncontiguous allies. When transaction costs and/or alternative spatial configurations are investigated, they may then differ. In all cases the cooperative approach leads to a distribution of alliance costs that need not necessarily coincide with the traditional emphasis on gross domestic product size as a proxy for deterrence value (the exploitation hypothesis). Instead, burdens can now be defined based upon a country's spatial and strategic location within the alliance. [source]
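
For intuition, the Shapley allocation for a toy three-ally cost game can be computed by averaging marginal costs over arrival orders; the cost function below is invented for illustration and is not from the paper.

```python
from itertools import permutations

# Toy cost game: stand-alone costs with savings when allies pool
# overlapping responsibilities (all numbers invented for illustration).
cost = {
    frozenset(): 0.0,
    frozenset("A"): 10.0, frozenset("B"): 12.0, frozenset("C"): 8.0,
    frozenset("AB"): 18.0, frozenset("AC"): 15.0, frozenset("BC"): 17.0,
    frozenset("ABC"): 22.0,
}

players = "ABC"
shapley = dict.fromkeys(players, 0.0)
for order in permutations(players):
    coalition = frozenset()
    for p in order:
        # Marginal cost of p joining in this arrival order
        shapley[p] += cost[coalition | {p}] - cost[coalition]
        coalition = coalition | {p}

for p in players:  # average over the 3! = 6 arrival orders
    print(f"ally {p}: Shapley cost share = {shapley[p] / 6:.2f}")
```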


Acquisition of Literacy in Bilingual Children: A Framework for Research

LANGUAGE LEARNING, Issue 2007
Ellen Bialystok
Much of the research that contributes to understanding how bilingual children become literate is not able to isolate the contribution of bilingualism to the discussion of literacy acquisition for these children. This article begins by identifying three areas of research that are relevant to examining literacy acquisition in bilinguals, explaining the contribution of each, and associating each type of research with a skill required by monolingual children in becoming literate. Three prerequisite skills for the acquisition of literacy are competence with the oral language, understanding of symbolic concepts of print, and establishment of metalinguistic awareness. A review of the literature explores the extent to which these skills that influence literacy acquisition in monolinguals develop differently for bilingual children. The conclusion is that the relation between bilingualism and the development of each of the three skills is different, sometimes indicating an advantage (concepts of print), sometimes a disadvantage (oral language competence), and sometimes little difference (metalinguistic concepts) for bilingual children. Therefore, bilingualism is clearly a factor in children's development of literacy, but the effect of that factor is neither simple nor unitary. Since the publication of this article, our research has continued to explore the themes set out in this framework and provided more detail for the description of how bilingualism affects the acquisition of literacy. Two important advances in this research are the finding that some aspects of reading ability, notably phonological awareness, are rooted in general cognitive mechanisms and transfer easily across languages, whereas others, such as decoding, are more language dependent and language-specific and need to be relearned with each new writing system (Bialystok, Luk, & Kwan, 2005). Second, writing systems and the differences between them have a greater impact on children's acquisition of literacy than previously believed. Not surprisingly, this relation has been found for emerging ability with phonological awareness (Bialystok, McBride-Chang, & Luk, 2005) but, more surprisingly, has recently been shown to have a subtle influence on children's emerging concepts of print (Bialystok & Luk, in press). The interpretation that bilingualism must be considered in terms of both advantages and disadvantages has also been pursued in studies of cognitive and linguistic processing in adults. Recent research has shown that adult bilinguals display disadvantages on tasks measuring lexical retrieval and fluency (Michael & Gollan, 2005) but advantages on tasks assessing cognitive control of attention (Bialystok, Craik, Klein, & Viswanathan, 2004). This approach leads to a more detailed and, ultimately, more accurate description of how bilingualism affects cognitive performance. [source]


Asymptotic analysis of solutions of a radial Schrödinger equation with oscillating potential

MATHEMATISCHE NACHRICHTEN, Issue 15 2006
Sigrun Bodine
Abstract We are interested in the asymptotic behavior of solutions of a Schrödinger-type equation with oscillating potential which was studied by A. Its. Here we use a different technique, based on Levinson's Fundamental Lemma, to analyze the asymptotic behavior, and our approach leads to a complete asymptotic representation of the solutions. We also discuss formal simplifications for differential equations with what might be called "regular/irregular singular points with periodic coefficients". (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
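
Schematically, Levinson's Fundamental Lemma delivers asymptotics of the following form (notation illustrative; the paper states the precise hypotheses, including a dichotomy condition on the eigenvalues):

```latex
\[
  y' = \bigl[\Lambda(t) + R(t)\bigr] y, \qquad
  \Lambda(t) = \mathrm{diag}\bigl(\lambda_1(t), \dots, \lambda_n(t)\bigr), \qquad
  \int^{\infty} \lVert R(t) \rVert \, dt < \infty,
\]
\[
  y_k(t) = \bigl(e_k + o(1)\bigr)
           \exp\!\Bigl(\int_{t_0}^{t} \lambda_k(s) \, ds\Bigr),
  \qquad t \to \infty .
\]
```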


Polaron signatures in the line shape of semiconductor intersubband transitions: quantum kinetics of the electron–phonon interaction

PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 11 2004
S. Butscher
Abstract We present a theory of the optical line shape of coherent intersubband transitions in a semiconductor quantum well, considering non-Markovian LO-phonon scattering as the major broadening mechanism. We show that a quantum kinetic approach leads to additional polaron resonances and a resonance enhancement for gap energies close to the phonon energy. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Mixed matrix membrane materials with glassy polymers. Part 2

POLYMER ENGINEERING & SCIENCE, Issue 7 2002
Analysis presented in Part 1 of this paper indicated the importance of optimization of the transport properties of the interfacial region to achieve ideal mixed matrix materials. This insight is used in this paper to guide mixed matrix material formation with more conventional gas separation polymers. Conventional gas separation materials are rigid, and, as seen earlier, lead to the formation of an undesirable interphase under conventional casting techniques. We show in this study that if flexibility can be maintained during membrane formation with a polymer that interacts favorably with the sieve, successful mixed matrix materials result, even with rigid polymeric materials. Flexibility during membrane formation can be achieved by formation of films at temperatures close to the glass transition temperature of the polymer. Moreover, combination of chemical coupling and flexibility during membrane formation produces even more significant improvements in membrane performance. This approach leads to the formation of mixed matrix material with transport properties exceeding the upper bound currently achieved by conventional membrane materials. Another approach to form successful mixed matrix materials involves tailoring the interface by use of integral chemical linkages that are intrinsically part of the chain backbone. Such linkages appear to tighten the interface sufficiently to prevent "nonselective leakage" along the interface. This approach is demonstrated by directly bonding a reactive polymer onto the sieve surface under proper processing conditions. [source]


Energy consistent time integration of planar multibody systems

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2006
Stefan Uhlar
The planar motion of rigid bodies and multibody systems can be easily described by coordinates belonging to a linear vector space. This is due to the fact that in the planar case finite rotations commute. Accordingly, using this type of generalized coordinates can be considered as canonical description of planar multibody systems. However, the extension to the three-dimensional case is not straightforward. In contrast to that, employing the elements of the direction cosine matrix as redundant coordinates makes possible a straightforward treatment of both planar and three-dimensional multibody systems. This alternative approach leads in general to differential-algebraic equations (DAEs) governing the dynamics of rigid body systems. The main purpose of the present paper is to present a comparison of the two alternative descriptions. In both cases energy-consistent time integration schemes are applied. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
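
Schematically, the redundant-coordinate description yields differential-algebraic equations of the following generic form (notation ours, not the paper's), where the internal constraints enforce orthonormality of the direction cosines:

```latex
\[
  M \ddot{q} = f(q, \dot{q}) - G^{\mathsf{T}}(q)\, \lambda, \qquad
  g(q) = 0, \qquad G(q) = \frac{\partial g}{\partial q},
\]
\[
  g_{\mathrm{int}}(q) =
  \begin{pmatrix}
    d_1^{\mathsf{T}} d_1 - 1 \\
    d_2^{\mathsf{T}} d_2 - 1 \\
    d_1^{\mathsf{T}} d_2
  \end{pmatrix} = 0,
  \qquad d_1, d_2 \ \text{the direction cosine vectors contained in } q .
\]
```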


Economies of Scale and Scope, Contestability, Windfall Profits and Regulatory Risk

THE MANCHESTER SCHOOL, Issue 6 2000
Michael J. Ryan
In this paper I introduce new results on economies of scale and scope and develop implications of these results for contestability and regulation. This is done using a goal programming approach which endogenizes regulatory frameworks in a multiperiod and multiregion monopolistic and oligopolistic analysis. This explicitly spatial approach leads to useful distinctions between industrial contestability and market contestability and a multiperiod contestability-based regulatory model. That model is then extended to a state preference framework with regulatory risk and windfall gains and losses. [source]


Kinematics, Dynamics, Biomechanics: Evolution of Autonomy in Game Animation

COMPUTER GRAPHICS FORUM, Issue 3 2005
Steve Collins
The believable portrayal of character performances is critical to engaging the immersed player in interactive entertainment. The story, the emotion and the relationship between the player and the world they are interacting within are hugely dependent on how appropriately the world's characters look, move and behave. We're concerned here with the characters' motion; with next-generation game consoles like the Xbox 360™ and PlayStation®3, the graphical representation of characters will take a major step forward, which places even more emphasis on the motion of the character. The behavior of the character is driven by story and design, which are adapted to game context by the game's AI system. The motion of the characters populating the game's world, however, is evolving into an interesting blend of kinematics, dynamics, biomechanics and AI-driven motion planning. Our goal here is to present the technologies involved in creating what are essentially character automata: emotionless and largely brainless character shells that nevertheless exhibit enough "behavior" to move as directed while adapting to the environment through sensing and actuating responses. This abstracts the complexities of low-level motion control, dynamics, collision detection, etc., and allows the game's artificial intelligence system to direct these characters at a higher level. While much research has already been conducted in this area and some great results have been published, we will present the particular issues that face game developers working on current- and next-generation consoles, and how these technologies may be integrated into game production pipelines so as to facilitate the creation of character performances in games. The challenges posed by limited memory and CPU bandwidth (though this is changing somewhat with the next generation) and the challenges of integrating these solutions with current game design approaches lead to some interesting problems, some of which the industry has solutions for and some of which remain largely unsolved. [source]


From the air to beneath the soil – revealing and mapping Great War trenches at Ploegsteert (Comines-Warneton), Belgium

ARCHAEOLOGICAL PROSPECTION, Issue 4 2009
P. Masters
Abstract Recent military battlefield sites are often recorded by accident during geophysical investigations researching earlier archaeological landscapes. The First World War (Great War) perhaps left its traces like no other war before or since in Europe. For the first time, a large area, some 16 ha in extent, has been surveyed over a modern conflict landscape. The authors have attempted to combine two remote sensing techniques: analysis of contemporary Great War aerial photographs and geophysical prospection. The combination of the two different approaches leads to a more comprehensive understanding of the Great War battlefield and of the value of remote sensing in this new area of applied research. Copyright © 2009 John Wiley & Sons, Ltd. [source]