Distribution by Scientific Domains
Distribution within Life Sciences

Terms modified by Mathematical

  • mathematical ability
  • mathematical algorithm
  • mathematical analysis
  • mathematical approach
  • mathematical concept
  • mathematical description
  • mathematical development
  • mathematical economics
  • mathematical equation
  • mathematical expression
  • mathematical formalism
  • mathematical formula
  • mathematical formulation
  • mathematical formulations
  • mathematical foundation
  • mathematical framework
  • mathematical function
  • mathematical knowledge
  • mathematical method
  • mathematical methods
  • mathematical model
  • mathematical modeling
  • mathematical modelling
  • mathematical models
  • mathematical operations
  • mathematical point
  • mathematical procedure
  • mathematical processing
  • mathematical program
  • mathematical programming
  • mathematical programming approach
  • mathematical programming model
  • mathematical proof
  • mathematical property
  • mathematical relationship
  • mathematical representation
  • mathematical simulation
  • mathematical skill
  • mathematical solution
  • mathematical structure
  • mathematical study
  • mathematical techniques
  • mathematical term
  • mathematical theory
  • mathematical tool
  • mathematical treatment

  • Selected Abstracts

    Mathematical and experimental insights into the development of the enteric nervous system and Hirschsprung's Disease

    Kerry A. Landman
    The vertebrate enteric nervous system is formed by a rostro-caudally directed invasion of the embryonic gastrointestinal mesenchyme by neural crest cells. Failure to complete this invasion results in the distal intestine lacking intrinsic neurons. This potentially fatal condition is called Hirschsprung's Disease. A mathematical model of cell invasion incorporating cell motility and proliferation of neural crest cells to a carrying capacity predicted invasion outcomes to imagined manipulations, and these manipulations were tested experimentally. Mathematical and experimental results agreed. The results show that the directional invasion is chiefly driven by neural crest cell proliferation. Moreover, this proliferation occurs in a small region at the wavefront of the invading population. These results provide an understanding of why many genes implicated in Hirschsprung's Disease influence neural crest population size. In addition, during in vivo development the underlying gut tissues are growing simultaneously as the neural crest cell invasion proceeds. The interactions between proliferation, motility and gut growth dictate whether or not complete colonization is successful. Mathematical modeling provides insights into the conditions required for complete colonization or a Hirschsprung's-like deficiency. Experimental evidence supports the hypotheses suggested by the modeling. [source]
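The proliferation-driven invasion described above is commonly abstracted as a Fisher-KPP reaction-diffusion model: cell motility as diffusion, proliferation as logistic growth to a carrying capacity. The sketch below is a generic illustration of that class of model, not the authors' exact formulation; all parameter values are arbitrary.

```python
import numpy as np

def invade(steps=2000, n=200, dx=1.0, dt=0.05, D=1.0, r=1.0, K=1.0):
    """1D Fisher-KPP sketch: diffusion models cell motility, logistic
    growth models proliferation up to the carrying capacity K."""
    u = np.zeros(n)
    u[:10] = K                                # colonized region at the rostral end
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        lap[0] = (u[1] - u[0]) / dx**2        # zero-flux boundaries
        lap[-1] = (u[-2] - u[-1]) / dx**2
        u = u + dt * (D * lap + r * u * (1.0 - u / K))
    return u

def front_position(u, thresh=0.5):
    """Index of the last grid cell whose density exceeds thresh."""
    idx = np.where(u >= thresh)[0]
    return int(idx[-1]) if idx.size else 0
```

With proliferation switched off (r = 0) the front stalls as the initial population spreads and dilutes, echoing the finding that the directional invasion is chiefly driven by proliferation.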

    On MILES based on flux-limiting algorithms

    F. F. Grinstein
    Abstract Non-classical large eddy simulation (LES) approaches based on using the unfiltered flow equations instead of the filtered ones have been the subject of considerable interest during the last decade. In the monotonically integrated LES (MILES) approach, flux-limiting schemes are used to emulate the characteristic turbulent flow features in the high-wave number end of the inertial subrange region. Mathematical and physical aspects of implicit sub grid scale modelling using nonlinear flux-limiters are conveniently addressed using the modified LES-equation formalism. In this study, the performance of MILES is demonstrated as a function of the flux-limiting scheme in selected representative case studies. Published in 2005 by John Wiley & Sons, Ltd. [source]

    Mathematical and Computational Models for Congestion Pricing, edited by Siriphong Lawphongpanich, Donald W. Hearn, and Michael J. Smith

    Louie Nan Liu
    No abstract is available for this article. [source]

    Mathematical and biological models of blood coagulation. A rebuttal

    (631) Chronic Pain Treatment Meta-Analyses: A Mathematical and Qualitative Review

    PAIN MEDICINE, Issue 2 2000
    Article first published online: 25 DEC 200
    Authors: Fishbain DA, University of Miami Comprehensive Pain Center; Rosomoff H, University of Miami Comprehensive Pain Center; Cutler RB, University of Miami Comprehensive Pain Center; Steele-Rosomoff R, University of Miami Comprehensive Pain Center. Aim of Investigation: To critically review chronic pain treatment meta-analyses according to defined criteria. Methods: An extensive literature search yielded 22 meta-analyses dealing with pain. The following inclusion criteria were applied to these studies: (1) nonsurgical pain treatment outcome only, including nerve blocks; (2) chronic pain treatment outcome only; (3) nonmalignant pain only; and (4) study data presenting an effect size which enabled the calculation of a confidence interval (CI). These inclusion criteria selected 16 studies from the original group. The remaining meta-analyses were then divided into 3 categories: (1) general pain facility treatment (n = 4); (2) headache treatment (n = 5); and (3) specific treatment types, e.g., manipulation, psychoeducational, antidepressant (n = 7). Within each meta-analysis the data were subdivided according to type of pain, treatment type and outcome variable. The CI was then calculated for each of these subdivisions within each meta-analysis. The quality of the 16 meta-analyses was also investigated according to 20 meta-analysis criteria previously presented in the literature. Results: (1) Overall, the pain facility treatment meta-analyses were remarkably consistent in demonstrating that pain facility treatment is effective for most treatment outcome variables. (2) Within pain facility treatments, biofeedback, cognitive therapy, operant conditioning, and package treatment were demonstrated to be efficacious. (3) Within the headache treatment meta-analyses, both relaxation/biofeedback and various medications were demonstrated to be efficacious.
    (4) Within the specific isolated treatments group, psychoeducation, antidepressants, capsaicin and spinal manipulation were found to have efficacy for a number of treatment outcome variables. (5) The quality of the meta-analyses was variable but acceptable, according to the meta-analysis criteria utilized. Conclusions: Overall, the results of the reviewed meta-analyses indicate that most treatments are effective for most pain patients but that some treatments appear to be more effective than others. [source]

    Advances in Mathematical and Statistical Modeling edited by ARNOLD, B. C., BALAKRISHNAN, N., SARABIA, J. M. and MÍNGUEZ, R.

    BIOMETRICS, Issue 2 2009
    Article first published online: 28 MAY 200
    No abstract is available for this article. [source]

    Reconstructing Evolution: New Mathematical and Computational Advances edited by Gascuel, O. and Steel, M.

    BIOMETRICS, Issue 2 2008
    Emmanuel Paradis
    No abstract is available for this article. [source]

    Comparison of LiDAR waveform processing methods for very shallow water bathymetry using Raman, near-infrared and green signals

    Tristan Allouis
    Abstract Airborne light detection and ranging (LiDAR) bathymetry appears to be a useful technology for bed topography mapping of non-navigable areas, offering high data density and a high acquisition rate. However, few studies have focused on continental waters, in particular, on very shallow waters (<2 m) where it is difficult to extract the surface and bottom positions that are typically mixed in the green LiDAR signal. This paper proposes two new processing methods for depth extraction based on the use of different LiDAR signals [green, near-infrared (NIR), Raman] of the SHOALS-1000T sensor. They have been tested on a very shallow coastal area (Golfe du Morbihan, France) as an analogy to very shallow rivers. The first method is based on a combination of mathematical and heuristic methods using the green and the NIR LiDAR signals to cross-validate the information delivered by each signal. The second method extracts water depths from the Raman signal using statistical methods such as principal components analysis (PCA) and classification and regression tree (CART) analysis. The obtained results are then compared to the reference depths, and the performances of the different methods, as well as their advantages/disadvantages, are evaluated. The green/NIR method supplies 42% more points compared to the operator process, with an equivalent mean error (-4.2 cm versus -4.5 cm) and a smaller standard deviation (25.3 cm versus 33.5 cm). The Raman processing method provides very scattered results (standard deviation of 40.3 cm) with the lowest mean error (-3.1 cm) and 40% more points. The minimum detectable depth is also improved by the two presented methods, being around 1 m for the green/NIR approach and 0.5 m for the statistical approach, compared to 1.5 m for the data processed by the operator. Despite its ability to measure other parameters like water temperature, the Raman method needed a large amount of reference data to provide reliable depth measurements, as opposed to the green/NIR method. Copyright © 2010 John Wiley & Sons, Ltd. [source]

    Scale-dependence in species-area relationships

    ECOGRAPHY, Issue 6 2005
    Will R. Turner
    Species-area relationships (SARs) are among the most studied phenomena in ecology, and are important both to our basic understanding of biodiversity and to improving our ability to conserve it. But despite many advances to date, our knowledge of how various factors contribute to SARs is limited, searches for single causal factors are often inconclusive, and true predictive power remains elusive. We believe that progress in these areas has been impeded by 1) an emphasis on single-factor approaches and thinking of factors underlying SARs as mutually exclusive hypotheses rather than potentially interacting processes, and 2) failure to place SAR-generating factors in a scale-dependent framework. We here review mathematical, ecological, and evolutionary factors contributing to species-area relationships, synthesizing major hypotheses from the literature in a scale-dependent context. We then highlight new research directions and unanswered questions raised by this scale-dependent synthesis. [source]

    Habitat size and number in multi-habitat landscapes: a model approach based on species-area curves

    ECOGRAPHY, Issue 1 2002
    Even Tjørve
    This paper discusses species diversity in simple multi-habitat environments. Its main purpose is to present simple mathematical and graphical models on how landscape patterns affect species numbers. The idea is to build models of species diversity in multi-habitat landscapes by combining species-area curves for different habitats. Predictions are made about how variables such as species richness and species overlap between habitats influence the proportion of the total landscape each habitat should constitute, and how many habitats it should be divided into in order to be able to sustain the maximal number of species. Habitat size and numbers are the only factors discussed here, not habitat spatial patterns. Among the predictions are: 1) where there are differences in species diversity between habitats, optimal landscape patterns contain larger proportions of species rich habitats. 2) Species overlap between habitats shifts the optimum further towards larger proportions of species rich habitat types. 3) Species overlap also shifts the optimum towards fewer habitat types. 4) Species diversity in landscapes with large species overlap is more resistant to changes in landscape (or reserve) size. This type of model approach can produce theories useful to nature and landscape management in general, and the design of nature reserves and national parks in particular. [source]
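The habitat-allocation trade-off described above can be illustrated with the classical species-area power law S = cA^z. The sketch below combines two such curves for a two-habitat landscape; the parameter values and the one-sided treatment of species overlap are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def richness(area, c, z=0.25):
    """Arrhenius species-area power law: S = c * A**z."""
    return c * area**z

def landscape_richness(p, A=1000.0, c1=40.0, c2=20.0, z=0.25, overlap=0.0):
    """Species total for a two-habitat landscape; habitat 1 (the richer one)
    gets a proportion p of the total area A.  `overlap` is the fraction of
    habitat 2's species already counted in habitat 1 (assumed form)."""
    s1 = richness(p * A, c1, z)
    s2 = richness((1.0 - p) * A, c2, z)
    return s1 + (1.0 - overlap) * s2

ps = np.linspace(0.01, 0.99, 99)
best_no_overlap = ps[np.argmax([landscape_richness(p) for p in ps])]
best_overlap = ps[np.argmax([landscape_richness(p, overlap=0.5) for p in ps])]
```

best_no_overlap > 0.5 mirrors prediction 1 (the species-rich habitat gets the larger share of the landscape), and best_overlap > best_no_overlap mirrors prediction 2 (overlap shifts the optimum further toward the rich habitat).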

    Food web complexity and chaotic population dynamics

    ECOLOGY LETTERS, Issue 3 2002
    Gregor F. Fussmann
    Abstract In mathematical models, very simple communities consisting of three or more species frequently display chaotic dynamics which implies that long-term predictions of the population trajectories in time are impossible. Communities in the wild tend to be more complex, but evidence for chaotic dynamics from such communities is scarce. We used supercomputing power to test the hypothesis that chaotic dynamics become less frequent in model ecosystems when their complexity increases. We determined the dynamical stability of a universe of mathematical, nonlinear food web models with varying degrees of organizational complexity. We found that the frequency of unpredictable, chaotic dynamics increases with the number of trophic levels in a food web but decreases with the degree of complexity. Our results suggest that natural food webs possess architectural properties that may intrinsically lower the likelihood of chaotic community dynamics. [source]

    Gender differences in self-estimated intelligence and their relation to gender-role orientation

    Beatrice Rammstedt
    Previous research has demonstrated that gender differences in self-estimated intelligence are domain specific: Males estimate their mathematical, logical and spatial abilities significantly higher than females. It has been frequently hypothesized that these differences are moderated by the individual's degree of gender-role orientation. However, studies investigating the effect of gender-role orientation on self-estimated intelligence revealed highly inconsistent results. In the present study, 267 participants estimated their own abilities in 11 intelligence domains and completed the Bem Sex Role Inventory (BSRI). Factor analysis of the 11 intelligence domains yielded four interpretable factors. Gender differences were identified for the mathematical-logical and the artistic intelligence factors. Additional analyses revealed a moderating effect of gender-role orientation on gender differences in factor scores. Thus, the present study provided direct evidence for the notion that in male, but not in female individuals, self-estimates of specific aspects of intelligence are markedly influenced by gender-role orientation. Copyright © 2002 John Wiley & Sons, Ltd. [source]

    Application of stereology to dermatological research

    Søren Kamp
    Abstract: Stereology is a set of mathematical and statistical tools to estimate three-dimensional (3-D) characteristics of objects from regular two-dimensional (2-D) sections. In medicine and biology, it can be used to estimate features such as cell volume, cell membrane surface area, total length of blood vessels per volume of tissue and total number of cells. The unbiased quantification of these 3-D features allows for a better understanding of morphology in vivo compared with 2-D methods. This review provides an introduction to the field of stereology with specific emphasis on the application of stereology to dermatological research, supplying a short insight into the theoretical basis behind the technique and presenting previous dermatological studies in which stereology was an integral part. Both the theory supporting stereology and a practical approach in a dermatological setting are reviewed with the aim to provide the reader with the capability to better assess papers employing stereological estimators and to design stereological studies independently. [source]
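One of the best-known stereological estimators mentioned above (volume estimation from sections) is the Cavalieri estimator: total volume is estimated as the section spacing times the summed section areas. A minimal sketch, checked against a sphere of known volume:

```python
import numpy as np

def cavalieri_volume(areas, t):
    """Cavalieri estimator: volume = section spacing t * sum of section
    areas, for systematic parallel sections a distance t apart."""
    return t * float(np.sum(areas))

# Sanity check on a sphere of radius r: a section at height z has
# area pi * (r**2 - z**2).
r, t = 1.0, 0.01
z = np.arange(-r + t / 2.0, r, t)            # systematic sections through the sphere
areas = np.pi * np.clip(r**2 - z**2, 0.0, None)
v_est = cavalieri_volume(areas, t)
v_true = 4.0 / 3.0 * np.pi * r**3
```

The estimate converges to the true volume as the sections become denser, which is the unbiasedness property the abstract refers to.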

    The Hill equation: a review of its capabilities in pharmacological modelling

    Sylvain Goutelle
    Abstract The Hill equation was first introduced by A.V. Hill to describe the equilibrium relationship between oxygen tension and the saturation of haemoglobin. In pharmacology, the Hill equation has been extensively used to analyse quantitative drug-receptor relationships. Many pharmacokinetic-pharmacodynamic models have used the Hill equation to describe nonlinear drug dose-response relationships. Although the Hill equation is widely used, its many properties are not all well known. This article aims at reviewing the various properties of the Hill equation. The descriptive aspects of the Hill equation, in particular its mathematical and graphical properties, are examined and related to Hill's original work. The mechanistic aspect of the Hill equation, involving a strong connection with the Guldberg and Waage law of mass action, is also described. Finally, a probabilistic view of the Hill equation is examined. Here, we provide some new calculation results, such as Fisher information and Shannon entropy, and we introduce multivariate probabilistic Hill equations. The main features and potential applications of this probabilistic approach are also discussed. Thus, within the same formalism, the Hill equation has many different properties which can be of great interest for those interested in mathematical modelling in pharmacology and biosciences. [source]
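For reference, the Hill equation itself is E = Emax * C^n / (EC50^n + C^n), where n is the Hill coefficient. A minimal sketch (the variable names are ours):

```python
def hill(c, emax, ec50, n):
    """Hill equation: effect at concentration c; n is the Hill coefficient.
    n = 1 recovers the simple hyperbolic (Michaelis-Menten-like) curve."""
    return emax * c**n / (ec50**n + c**n)
```

At c = ec50 the effect is exactly emax/2 for every n; larger n gives a steeper sigmoid around EC50, which is why n is often read as a cooperativity index.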

    Effects of operating conditions on infiltration of molten aluminum and heat transfer in a centrifugal force field

    Qinwei Tian
    Abstract This paper presents the results of an analysis aimed at determining the influence of changing operating conditions in centrifugal infiltration casting. It considers the effect of centrifugal force on infiltration and heat transfer. The molten aluminum flow with heat transfer through SiC porous media in a centrifugal force field is described using a mathematical and physical model employing local thermal nonequilibrium between the solid and fluid phases. The calculation results show that the temperature difference between molten aluminum and SiC porous media in the infiltrated region decreases with the contact time. There are two distinctly noticeable stages of infiltration velocity: the onset stage of infiltration, in which the velocity drops sharply, and the following stage of smooth velocity. The operating conditions have important effects on the infiltration velocity and temperature patterns of fluid and solid. A suitable rotational speed and SiC volume fraction should be chosen to ensure the flow of molten metal in the porous preform and diminish the temperature difference between fluid and solid. © 2003 Wiley Periodicals, Inc. Heat Trans Asian Res, 32(6): 501-510, 2003; Published online in Wiley InterScience (DOI: 10.1002/htj.10114). [source]

    An upscaling method and a numerical analysis of swelling/shrinking processes in a compacted bentonite/sand mixture

    M. Xie
    Abstract This paper presents an upscaling concept of swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. Main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructures and the amount of swelling minerals are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or turns to be swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation are described. The proposed model is verified using experimental data of tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data evidences the capability of the proposed model to satisfactorily predict free swelling of the material under investigation. Copyright © 2004 John Wiley & Sons, Ltd. [source]

    Nonparametric probabilistic approach of uncertainties for elliptic boundary value problem

    Christian Soize
    Article first published online: 2 FEB 200
    Abstract The paper is devoted to elliptic boundary value problems with uncertainties. Such a problem has already been analyzed in the context of the parametric probabilistic approach of system parameters uncertainties or for random media. Model uncertainties are induced by the mathematical-physical process, which allows the boundary value problem to be constructed from the design system. If experiments are not available, the Bayesian approach cannot be used to take into account model uncertainties. Recently, a nonparametric probabilistic approach of both the model uncertainties and system parameters uncertainties has been proposed by the author to analyze uncertain linear and non-linear dynamical systems. Nevertheless, this concept, developed for dynamical systems, cannot directly be applied to an elliptic boundary value problem, for instance, a linear elastostatic problem on an elastic bounded domain. We then propose an extension of the nonparametric probabilistic approach in order to take into account model uncertainties for strictly elliptic boundary value problems. The theory and its validation are presented. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Non-local damage model based on displacement averaging

    M. Jirásek
    Abstract Continuum damage models describe the changes of material stiffness and strength, caused by the evolution of defects, in the framework of continuum mechanics. In many materials, a fast evolution of defects leads to stress-strain laws with softening, which creates serious mathematical and numerical problems. To regularize the model behaviour, various generalized continuum theories have been proposed. Integral-type non-local damage models are often based on weighted spatial averaging of a strain-like quantity. This paper explores an alternative formulation with averaging of the displacement field. Damage is assumed to be driven by the symmetric gradient of the non-local displacements. It is demonstrated that an exact equivalence between strain and displacement averaging can be achieved only in an unbounded medium. Around physical boundaries of the analysed body, both formulations differ and the non-local displacement model generates spurious damage in the boundary layers. The paper shows that this undesirable effect can be suppressed by an appropriate adjustment of the non-local weight function. Alternatively, an implicit gradient formulation could be used. Issues of algorithmic implementation, computational efficiency and smoothness of the resolved stress fields are discussed. Copyright © 2005 John Wiley & Sons, Ltd. [source]
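The weighted spatial averaging underlying integral-type nonlocal models can be sketched in a few lines. The Gaussian weight and its renormalization at every point (so that a constant field is reproduced exactly, including near boundaries) follow common practice; this is a generic 1D illustration, not the paper's specific formulation.

```python
import numpy as np

def nonlocal_average(x, f, ell):
    """Integral-type nonlocal average of f sampled at points x: Gaussian
    weight with internal length ell, renormalized pointwise so that a
    constant field is reproduced exactly, even near the boundary."""
    out = np.empty_like(f)
    for i, xi in enumerate(x):
        w = np.exp(-0.5 * ((x - xi) / ell) ** 2)
        out[i] = np.dot(w, f) / w.sum()       # rescaled weights at the boundary
    return out

x = np.linspace(0.0, 1.0, 101)
f = np.sin(20.0 * x)                          # a strain-like oscillatory field
f_bar = nonlocal_average(x, f, ell=0.05)
```

The averaging damps short-wavelength oscillations (the regularizing effect), while the pointwise renormalization is one simple way to handle the boundary-layer issue the abstract discusses.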

    Some results on the accuracy of an edge-based finite volume formulation for the solution of elliptic problems in non-homogeneous and non-isotropic media

    Darlan Karlo Elisiário de Carvalho
    Abstract The numerical simulation of elliptic type problems in strongly heterogeneous and anisotropic media represents a great challenge from the mathematical and numerical points of view. The simulation of flows in non-homogeneous and non-isotropic porous media with full tensor diffusion coefficients, which is a common situation associated with the miscible displacement of contaminants in aquifers and the immiscible and incompressible two-phase flow of oil and water in petroleum reservoirs, involves the numerical solution of an elliptic type equation in which the diffusion coefficient can be discontinuous, varying orders of magnitude within short distances. In the present work, we present a vertex-centered edge-based finite volume method (EBFV) with median dual control volumes built over a primal mesh. This formulation is capable of handling heterogeneous and anisotropic media using structured or unstructured, triangular or quadrilateral meshes. In the EBFV method, the discretization of the diffusion term is performed using a node-centered discretization implemented in two loops over the edges of the primary mesh. This formulation guarantees local conservation for problems with discontinuous coefficients, keeping second-order accuracy for smooth solutions on general triangular and orthogonal quadrilateral meshes. In order to show the convergence behavior of the proposed EBFV procedure, we solve three benchmark problems including full tensor, material heterogeneity and distributed source terms. For these three examples, numerical results compare favorably with others found in the literature. A fourth problem, with a highly non-smooth solution, has been included showing that the EBFV needs further improvement to formally guarantee monotonic solutions in such cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Discussions on driven cavity flow

    Ercan Erturk
    Article first published online: 9 SEP 200
    Abstract The widely studied benchmark problem, the two-dimensional driven cavity flow problem, is discussed in detail in terms of physical, mathematical, and numerical aspects. A very brief literature survey on studies of the driven cavity flow is given. On the basis of several numerical and experimental studies, the fact of the matter is that physically the flow in a driven cavity is not two-dimensional above moderate Reynolds numbers. However, there exist numerical solutions for two-dimensional driven cavity flow at high Reynolds numbers. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    On the effect of the local turbulence scales on the mixing rate of diffusion flames: assessment of two different combustion models

    Jose Lopes
    Abstract A mathematical model for the prediction of the turbulent flow, diffusion combustion process, heat transfer including thermal radiation and pollutants formation inside combustion chambers is described. In order to validate the model the results are compared herein against experimental data available in the open literature. The model comprises differential transport equations governing the above-mentioned phenomena, resulting from the mathematical and physical modelling, which are solved by the control volume formulation technique. The results yielded by the two different turbulent-mixing physical models used for combustion, the simple chemical reacting system (SCRS) and the eddy break-up (EBU), are analysed so that the need to make recourse to local turbulent scales to evaluate the reactants' mixing rate is assessed. Predictions are performed for a gaseous-fuelled combustor fired with two different burners that induce different aerodynamic conditions inside the combustion chamber. One of the burners has the typical geometry of that used in gaseous-fired boilers (fuel firing in the centre surrounded by concentric oxidant firing), while the other burner introduces the air into the combustor through two different swirling concentric streams. Generally, the results exhibit a good agreement with the experimental values. Also, NO predictions are performed by a prompt-NO formation model used as a post-processor together with a thermal-NO formation model, the results being generally in good agreement with the experimental values. The predictions revealed that the mixture between the reactants occurred very close to the burner and almost instantaneously, that is, immediately after the fuel-containing eddies came into contact with the oxidant-containing eddies. As a result, away from the burner, the SCRS model, which assumes an infinitely fast mixing rate, appeared to be as accurate as the EBU model for the present predictions.
Closer to the burner, the EBU model, that establishes the reactants mixing rate as a function of the local turbulent scales, yielded slightly slower rates of mixture, the fuel and oxidant concentrations which are slightly higher than those obtained with the SCRS model. As a consequence, the NO concentration predictions with the EBU combustion model are generally higher than those obtained with the SCRS model. This is due to the existence of higher concentrations of fuel and oxygen closer to the burner when predictions were performed taking into account the local turbulent scales in the mixing process of the reactants. The SCRS, being faster and as accurate as the EBU model in the predictions of combustion properties appears to be more appropriate. However, should NO be a variable that is predicted, then the EBU model becomes more appropriate. This is due to the better results of oxygen concentration yielded by that model, since it solves a transport equation for the oxidant concentration, which plays a dominant role in the prompt-NO formation rate. Copyright © 2002 John Wiley & Sons, Ltd. [source]

    Modeling hippocampal theta oscillation: Applications in neuropharmacology and robot navigation

    Tamás Kiss
    This article introduces a biologically realistic mathematical, computational model of theta (~5 Hz) rhythm generation in the hippocampal CA1 region and some of its possible further applications in drug discovery and in robotic/computational models of navigation. The model shown here uses the conductance-based description of nerve cells: Populations of basket cells, alveus/lacunosum-moleculare interneurons, and pyramidal cells are used to model the hippocampal CA1 and a fast-spiking GABAergic interneuron population for modeling the septal influence. Results of the model show that the septo-hippocampal feedback loop is capable of robust theta rhythm generation due to proper timing of pyramidal cells and synchronization within the basket cell network via recurrent connections. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 903-917, 2006. [source]

    Integral evaluation in semiconductor device modelling using simulated annealing with Bose-Einstein statistics

    E.A.B. Cole
    Abstract Fermi integrals arise in the mathematical and numerical modelling of microwave semiconductor devices. In particular, associated Fermi integrals involving two arguments arise in the modelling of HEMTs, in which quantum wells form at the material interfaces. The numerical evaluation of these associated integrals is time consuming. In this paper, these associated integrals are replaced by simpler functions which depend on a small number of optimal parameters. These parameters are found by optimizing a suitable cost function using a genetic algorithm with simulated annealing. A new method is introduced whereby the transition probabilities of the simulated annealing process are based on the Bose-Einstein distribution function, rather than on the more usual Maxwell-Boltzmann statistics or Tsallis statistics. Results are presented for the simulation of a four-layer HEMT, and show the effect of the approximation for the associated Fermi integrals. A comparison is made of the convergence properties of the three different statistics used in the simulated annealing process. Copyright © 2007 John Wiley & Sons, Ltd. [source]
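The idea of swapping the acceptance statistics in simulated annealing can be sketched as follows. The acceptance probability here uses the Bose-Einstein occupation factor 1/(exp(dE/T) - 1), clipped to [0, 1]; this is an assumed form for illustration only, since the paper's exact transition rule is not given in the abstract, and the cost function and all parameters are arbitrary.

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=2.0, alpha=0.995, iters=4000, seed=1):
    """Simulated annealing in 1D.  Uphill moves are accepted with the
    Bose-Einstein occupation factor 1/(exp(dE/T) - 1), clipped to [0, 1]
    (an assumed acceptance rule, standing in for the paper's)."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = cost(y)
        de = fy - fx
        # Accept downhill always; uphill with the clipped BE factor.
        # The de/t < 50 guard avoids overflow once t is nearly zero.
        if de <= 0 or (de / t < 50 and
                       rng.random() < min(1.0, 1.0 / (math.exp(de / t) - 1.0))):
            x, fx = y, fy
        t *= alpha                     # geometric cooling schedule
    return x, fx

f = lambda x: x**2 + 2.0 * math.sin(5.0 * x)**2   # toy multimodal cost, minimum 0 at x = 0
x_best, f_best = anneal(f, 3.0)
```

For small uphill steps the BE factor exceeds 1 (clipped to certain acceptance), so this rule accepts mild deteriorations more readily than Metropolis at the same temperature, which is one way the choice of statistics changes the search behaviour.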

    A Three-Dimensional Quantitative Structure-Activity Relationship (3D-QSAR) Model for Predicting the Enantioselectivity of Candida antarctica Lipase B

    Paolo Braiuca
    Abstract Computational techniques involving molecular modeling coupled with multivariate statistical analysis were used to evaluate and predict quantitatively the enantioselectivity of lipase B from Candida antarctica (CALB). In order to allow the mathematical and statistical processing of the experimental data largely available in the literature (namely the enantiomeric ratio E), a novel class of GRID-based molecular descriptors was developed (differential molecular interaction fields or DMIFs). These descriptors proved to be efficient in providing the structural information needed for computing the regression model. Multivariate statistical methods based on PLS (partial least squares projection to latent structures) were used for the analysis of data available from the literature and for the construction of the first three-dimensional quantitative structure-activity relationship (3D-QSAR) model able to predict the enantioselectivity of CALB. Our results indicate that the model is statistically robust and predictive. [source]

    Fourier analysis methodology of trabecular orientation measurement in the human tibial epiphysis

    JOURNAL OF ANATOMY, Issue 2 2001
    Methods to quantify trabecular orientation are crucial in order to assess the exact trajectory of trabeculae in anatomical and histological sections. Specific methods for evaluating trabecular orientation include the 'point counting' technique (Whitehouse, 1974), manual tracing of trabecular outlines on a digitising board (Whitehouse, 1980), textural analysis (Veenland et al. 1998), graphic representation of vectors (Shimizu et al. 1993; Kamibayashi et al. 1995) and both mathematical (Geraets, 1998) and fractal analysis (Millard et al. 1998). Optical and computer-assisted methods to detect trabecular orientation of bone using the Fourier transform were introduced by Oxnard (1982) and later refined by Kuo & Carter (1991) (see also Oxnard, 1993, for a review), in the analysis of planar sections of vertebral bodies as well as in planar radiographs of cancellous bone in the distal radius (Wigderowitz et al. 1997). At present no studies have applied this technique to 2-D images or to the study of dried bones. We report a universal computer-automated technique, based on Fourier analysis, for assessing the preferential orientation of the tibial subarticular trabeculae, with emphasis placed on improving accuracy over previous methods, applied to large stereoscopic (2-D) fields of anatomical sections of dried human tibiae. Previous studies on the trajectorial architecture of the tibial epiphysis (Takechi, 1977; Maquet, 1984) and research data about trabecular orientation (Kamibayashi et al. 1995) have not employed Fourier analysis. [source]
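The core of a Fourier orientation measurement can be sketched in a few lines: the 2-D power spectrum of a striped, trabecular-like texture concentrates along the direction perpendicular to the stripes, so the angle of the strongest spectral peak, rotated by 90°, estimates the preferred orientation. The test pattern and all parameters below are illustrative, not the authors' protocol.

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the preferred orientation (degrees from the x-axis, in
    [0, 180)) of linear structures in a 2-D image. The power spectrum of a
    striped texture peaks along the direction perpendicular to the stripes,
    so the strongest spectral peak is rotated by 90 degrees."""
    spectrum = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spectrum) ** 2
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    power[cy, cx] = 0.0                               # suppress residual DC term
    iy, ix = np.unravel_index(np.argmax(power), power.shape)
    peak_angle = np.degrees(np.arctan2(iy - cy, ix - cx)) % 180.0
    return (peak_angle + 90.0) % 180.0

# Synthetic trabecular-like test pattern: stripes running at 30 degrees,
# with a wavelength of 8 pixels.
y, x = np.mgrid[0:128, 0:128]
theta = np.radians(30.0)
img = np.sin(2.0 * np.pi * (x * np.sin(theta) - y * np.cos(theta)) / 8.0)
est = dominant_orientation(img)
```

Real trabecular fields contain a spread of orientations rather than a single peak, so a production version would integrate the power spectrum over angular sectors instead of taking a single `argmax`.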

    Large-scale ecology and hydrology: an introductory perspective from the editors of the Journal of Applied Ecology

    S.J. Ormerod
    1. Five key features characterize large-scale factors in ecology: (a) they incorporate some of the most major of all ecological phenomena – the ranges of organisms, patterns of diversity, variations in ecosystem character and environmental processes such as climate, biogeochemical cycles, dispersal and migration; (b) they involve interactions across scales through both top-down and bottom-up processes; (c) they are multifaceted, and hence require an interdisciplinary perspective; (d) they reflect the cumulative effects of anthropogenic change across all scales, and so have direct relevance to environmental management; (e) they invariably exceed the range of classical ecological experiments, and so require alternative approaches to hypothesis testing. 2. Against this background, a recent research initiative on large-scale ecology and hydrology was funded jointly by the Natural Environment Research Council (NERC) and the Scottish Executive Rural Affairs Department (SERAD). Outputs from this programme are reported in this special issue of the Journal of Applied Ecology, and they illustrate some of the ecological research that is currently in progress in the UK at large spatio-temporal scales. 3. The spatial scales investigated in the papers range from hectares to whole continents, and much of the work reported here involves modelling. Although the model outputs are intrinsically valuable, several authors express the need for improved validation and testing. We suggest that this is an area requiring much development, and will need considerable innovation due to the difficulties at the scales involved (see 1d). Possible methods include: model applications to new circumstances; large-scale environmental manipulations; large-scale surveys that mimic experimental protocols; support from process studies at smaller scales. These alternatives are not mutually exclusive, and all can allow robust hypothesis testing. 4.
Much of the work reported here is interdisciplinary, linking, for example, geographical, mathematical, hydrological, hydrochemical and ecological concepts (see 1c). We suggest that even stronger links between environmental disciplines will further aid large-scale ecological research. 5. Most important in the context of the Journal of Applied Ecology, the work reported in this issue reveals that large-scale ecology already has applied value. Sectors benefiting include the conservation of biodiversity, the control of invasive species, and the management of land and water resources. 6. Large-scale issues continue to affect many applied ecologists, with roughly 30–40% of papers published in the Journal of Applied Ecology typically confronting such problems. This special issue adds to the growing body of seminal contributions that will add impetus to further large-scale work. Moreover, occurring in a period when other areas of biology are increasingly reductionist, this collection illustrates that, at least with respect to large-scale environmental problems, ecology still holds centre ground. [source]


    ABSTRACT The present study reports a simple method, both mathematical and experimental, to determine variable effective diffusion coefficients for sodium through the skins of olives. Skins removed from green olives, variety Arauco (also known as Criolla), were studied using a lye concentration of 2.25% (w/w) NaOH at 20°C. The diffusion of sodium was evaluated through fresh skins and previously alkali-treated skins. The measured values of effective diffusion coefficients for untreated (fresh) olive skins increased by two orders of magnitude during the processing time, from 10⁻¹² to 10⁻¹⁰ m²/s. In contrast, the effective diffusion coefficients determined for previously treated olive skins were of the order of 10⁻¹⁰ m²/s and increased very little with treatment time. [source]
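The reported magnitudes can be illustrated with a quasi-steady form of Fick's first law, J = D_eff·ΔC/L, rearranged for D_eff; the paper's full time-dependent analysis is more involved. Every value below (skin thickness, concentration difference, fluxes, times) is an illustrative assumption, chosen only so that the resulting coefficients span the reported 10⁻¹² to 10⁻¹⁰ m²/s range.

```python
import numpy as np

L = 50e-6    # assumed skin thickness, m
dC = 500.0   # assumed Na+ concentration difference across the skin, mol/m^3

# Assumed sampling times and measured sodium fluxes through a fresh skin.
t = np.array([600.0, 1800.0, 3600.0, 7200.0])   # s
J = np.array([1e-5, 1e-4, 5e-4, 1e-3])          # mol m^-2 s^-1

# Quasi-steady Fick's first law, J = D_eff * dC / L, solved for D_eff.
D_eff = J * L / dC                               # m^2/s at each sampling time
```

The upward drift of `D_eff` over `t` in this toy dataset mirrors the two-orders-of-magnitude increase the abstract reports for fresh skins during processing.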

    Deterministic global optimization of nonlinear dynamic systems

    AICHE JOURNAL, Issue 4 2007
    Youdong Lin
    Abstract A new approach is described for the deterministic global optimization of dynamic systems, including optimal control problems. The method is based on interval analysis and Taylor models and employs a type of sequential approach. A key feature of the method is the use of a new validated solver for parametric ODEs, which is used to produce guaranteed bounds on the solutions of dynamic systems with interval-valued parameters. This is combined with a new technique for domain reduction based on the use of Taylor models in an efficient constraint propagation scheme. The result is that an ,-global optimum can be found with both mathematical and computational certainty. Computational studies on benchmark problems are presented showing that this new approach provides significant improvements in computational efficiency, well over an order of magnitude in most cases, relative to other recently described methods. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
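The certainty claimed in the abstract rests on interval bounds: an interval evaluation of the objective encloses its true range over a box, so any box whose lower bound exceeds the best value found so far provably cannot contain the global minimum. The 1-D sketch below (plain interval arithmetic and branch-and-bound, without the paper's Taylor models, validated ODE solver, or outward rounding) illustrates that mechanism on a toy objective.

```python
import math

class Interval:
    """Tiny interval-arithmetic class. A rigorous implementation would
    round outward at every operation; that is omitted here for brevity."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def _coerce(self, o):
        return o if isinstance(o, Interval) else Interval(o, o)
    def __add__(self, o):
        o = self._coerce(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = self._coerce(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = self._coerce(o)
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    @property
    def width(self):
        return self.hi - self.lo
    @property
    def mid(self):
        return 0.5 * (self.lo + self.hi)

def global_min(f, box, tol=1e-4):
    """Interval branch-and-bound: returns (lb, ub) with lb <= min f <= ub.
    Boxes whose interval lower bound exceeds the incumbent ub are discarded
    with certainty; the rest are bisected until narrower than tol."""
    work = [box]
    ub = f(Interval(box.mid, box.mid)).hi   # objective at a sample point
    lb = math.inf
    while work:
        b = work.pop()
        fb = f(b)
        if fb.lo > ub:                      # cannot contain the minimum
            continue
        ub = min(ub, f(Interval(b.mid, b.mid)).hi)
        if b.width <= tol:
            lb = min(lb, fb.lo)
        else:
            work.append(Interval(b.lo, b.mid))
            work.append(Interval(b.mid, b.hi))
    return lb, ub

# Toy objective with its minimum (-0.25) at z = 0.5.
lb, ub = global_min(lambda z: z * z - z, Interval(-1.0, 2.0))
```

The gap `ub - lb` shrinks with `tol`; in the paper this gap is the ε in "ε-global optimum", and Taylor models are used to get much tighter enclosures than the naive interval evaluation above.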

    Reliable computation of mixture critical points

    AICHE JOURNAL, Issue 1 2001
    Benito A. Stradi
    The determination of critical points of mixtures is important for both practical and theoretical reasons in modeling phase behavior, especially at high pressure. This article presents the first completely reliable method for locating all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. The methodology used is based on interval analysis, in particular an interval Newton/generalized bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The procedure is initialization-independent and thus requires no a priori knowledge of the number of mixture critical points or their approximate locations. The technique is illustrated using several example problems involving cubic equation-of-state models; however, the technique is general purpose and can be applied in connection with other thermodynamic models. [source]

    Beauty and the Economist: The Role of Aesthetics in Economic Theory

    Cassey Lee
    SUMMARY The importance of aesthetic considerations is widely acknowledged in mathematics and the natural sciences. Beauty motivates mathematical and scientific discoveries and serves as a criterion for their acceptance by the scientific community. In contrast, little attention is paid to beauty in the models, theorems and other objects of economic theory, even though mathematics is an important tool of economic analysis. The pure theory of international trade provides useful examples with which to discuss the role of aesthetics in economic theory. The central feature of economics that distinguishes it from the natural sciences, and that appears to explain the paucity of beauty in economics, is that economic models lack generality. ZUSAMMENFASSUNG The importance of aesthetic considerations is acknowledged in mathematics and the natural sciences. Beauty motivates mathematical and scientific discoveries and serves as a criterion for their acceptance within the scientific community. In the models, theorems and other questions of economic theory, by contrast, beauty receives scarcely any attention, even though mathematics is an important tool of economic analysis. The pure theory of international trade offers useful examples for discussing the role of aesthetics in economic theory. What most clearly distinguishes economics from the natural sciences, and appears to explain the lack of beauty, is the fact that economic models are not universally valid. [source]