Mathematical Framework


Selected Abstracts

Mathematical framework towards the analysis of a generic traffic marker

Nasser-Eddine Rikli
Abstract DiffServ architecture has been widely adopted for the provision of QoS over the Internet. This makes a full understanding of its operation imperative. We believe that only mathematical analysis can achieve this goal. As the heart of a DiffServ router is the token bucket algorithm, a generic token bucket with two-colour marking is considered here. A mathematical framework is first developed for its analysis. Then, assuming input traffic with Poisson arrivals and exponential packet lengths, and a memoryless token bucket system, the two types of generated streams are statistically characterized through their distributions and averages. This analysis is carried out for two types of buckets, one with infinite size and a second with finite size. It is shown how the derived equations allow the prediction of the output traffic streams for given bucket and input traffic stream parameters. The paper closes with conclusions and suggestions. Copyright © 2006 John Wiley & Sons, Ltd. [source]
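The marker analysed above can be sketched in a few lines. The following toy two-colour token bucket (all names and parameter values are illustrative assumptions, not taken from the paper) shows the conforming/non-conforming split that the framework characterizes statistically:

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Toy two-colour token-bucket marker (illustrative sketch).

    rate: token replenishment rate (tokens per unit time)
    size: bucket capacity; use float('inf') for the infinite-bucket case
    """
    rate: float
    size: float
    tokens: float = 0.0
    last_t: float = 0.0

    def mark(self, t: float, pkt_len: float) -> str:
        # Replenish tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + self.rate * (t - self.last_t))
        self.last_t = t
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "green"   # conforming stream
        return "red"         # non-conforming stream

bucket = TokenBucket(rate=1.0, size=5.0, tokens=5.0)
print(bucket.mark(0.0, 4.0), bucket.mark(0.1, 4.0))  # green red
```

The paper's infinite-bucket case corresponds to `size=float('inf')`; the statistical characterization then concerns the green and red streams this marker emits under Poisson arrivals.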

Robust active vibration suppression control with constraint on the control signal: application to flexible structures

A. Forrai
Abstract A unified mathematical framework, supported by experimental results, is presented for robust controller design taking into account the constraint on the control signal. The design procedure is exemplified for an active vibration suppression control problem with applications to flexible structures. The considered experimental set-up is a three-storey flexible structure with an active mass driver placed on the top storey. First, the considered flexible structure is identified and the model's parametric uncertainties are deduced. Next, control constraints are presented for the robust control design problem, taking into account the restriction imposed on the control signal. Finally, the effectiveness of the control system is tested through experiments, with input disturbances comprising a sinusoid as well as a historical earthquake record (the 1940 El Centro record). Copyright © 2003 John Wiley & Sons, Ltd. [source]

Ecological control analysis: being(s) in control of mass flux and metabolite concentrations in anaerobic degradation processes

Wilfred F. M. Röling
Summary Identification of the functional groups of microorganisms that are predominantly in control of fluxes through, and concentrations in, microbial networks would benefit microbial ecology and environmental biotechnology: the properties of those controlling microorganisms could be studied or monitored specifically or their activity could be modulated in attempts to manipulate the behaviour of such networks. Herein we present ecological control analysis (ECA) as a versatile mathematical framework that allows for the quantification of the control of each functional group in a microbial network on its process rates and concentrations of intermediates. In contrast to current views, we show that rates of flow of matter are not always limited by a single functional group; rather flux control can be distributed over several groups. Also, control over intermediate concentrations is always shared. Because of indirect interactions, through other functional groups, the concentration of an intermediate can also be controlled by functional groups not producing or consuming it. Ecological control analysis is illustrated by a case study on the anaerobic degradation of organic matter, using experimental data obtained from the literature. During anaerobic degradation, fermenting microorganisms interact with terminal electron-accepting microorganisms (e.g. halorespirers, methanogens). The analysis indicates that flux control mainly resides with fermenting microorganisms, but can shift to the terminal electron-accepting microorganisms under less favourable redox conditions. Paradoxically, halorespiring microorganisms do not control the rate of perchloroethylene and trichloroethylene degradation even though they catalyse those processes themselves. [source]
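The claim that flux control can be distributed over several functional groups can be illustrated with a two-group toy network (the rate laws and constants below are invented for illustration, not taken from the case study). Flux control coefficients are computed by finite differences and, by the summation theorem, add to one:

```python
def steady_state(X1, X2, k1=1.0, Ki=1.0, k2=2.0, Km=1.0):
    """Two-group toy network: fermenters (X1) produce an intermediate H
    under product inhibition; terminal electron acceptors (X2) remove it.
    Returns (H, J) at steady state, found by bisection on v1(H) = v2(H)."""
    v1 = lambda H: X1 * k1 / (1 + H / Ki)      # production rate
    v2 = lambda H: X2 * k2 * H / (Km + H)      # consumption rate
    lo, hi = 1e-12, 1e6                        # v1 > v2 at lo, v1 < v2 at hi
    for _ in range(200):
        mid = (lo + hi) / 2
        if v1(mid) > v2(mid):
            lo = mid
        else:
            hi = mid
    H = (lo + hi) / 2
    return H, v2(H)

def control_coefficients(X1=1.0, X2=1.0, eps=1e-6):
    """Scaled flux-control coefficients C_i = dlnJ/dlnX_i (finite differences)."""
    _, J = steady_state(X1, X2)
    _, J1 = steady_state(X1 * (1 + eps), X2)
    _, J2 = steady_state(X1, X2 * (1 + eps))
    return (J1 - J) / (J * eps), (J2 - J) / (J * eps)

c1, c2 = control_coefficients()
print(c1, c2)  # control is shared between the two groups; c1 + c2 ≈ 1
```

In this toy case the fermenters hold about two-thirds of the flux control, mirroring the abstract's observation that control mainly resides with fermenters but is never held exclusively by one group.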

Phase-coupled oscillator models can predict hippocampal inhibitory synaptic connections

F. K. Skinner
Abstract What factors are responsible for propagating electrical activity in the hippocampus? Using an intact, isolated hippocampus preparation, it is possible to observe spontaneous delta (≈4 Hz) waves of rhythmic field potentials. These rhythmic potentials are inhibitory in nature, mediated by GABAergic inhibitory potentials originating from a population of principal neurons. They start in the ventro-temporal region and move longitudinally towards the dorso-septal region with a phase lag of ≈10% between the extracellular recordings. We use the mathematical framework of phase-coupled oscillators (PCO) to gain some insight into the underlying network system. A chain of 15 nearest-neighbour bidirectionally coupled PCOs is used, where each oscillator represents a segment of the CA1 region of the hippocampus that can generate these slow field potentials. We find that ventro-dorsal delta waves exist if there is a dominance in coupling strength in one direction. Without a one-way coupling dominance, ventro-dorsal waves can still exist, but then the coupling strengths need to be much larger. The relationship between entrained and intrinsic frequencies and the variation of propagation speeds along the longitudinal axis can be used to determine which case applies. Currently available experimental data support one of the cases, predicting that there is a stronger ventral-to-dorsal inhibitory effect. [source]
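A minimal simulation in the spirit of the PCO chain is sketched below. For brevity it induces the travelling wave with a small intrinsic-frequency gradient rather than the coupling-strength dominance analysed in the paper; all parameters are illustrative assumptions:

```python
import math

def chain_lags(n=15, k=8.0, grad=0.2, dt=1e-3, steps=20000):
    """Chain of n bidirectionally coupled phase oscillators:

        dtheta_i/dt = omega_i + k*sin(theta_{i-1}-theta_i)
                              + k*sin(theta_{i+1}-theta_i)

    A small gradient in intrinsic frequency makes the fast ('ventral',
    i = 0) end lead, producing a phase-locked travelling wave."""
    omega = [4.0 + grad * (n - 1 - i) for i in range(n)]  # i = 0 is fastest
    theta = [0.0] * n
    for _ in range(steps):
        new = list(theta)
        for i in range(n):
            d = omega[i]
            if i > 0:
                d += k * math.sin(theta[i - 1] - theta[i])
            if i < n - 1:
                d += k * math.sin(theta[i + 1] - theta[i])
            new[i] = theta[i] + dt * d
        theta = new
    # nearest-neighbour phase lags in the entrained state
    return [theta[i] - theta[i + 1] for i in range(n - 1)]

lags = chain_lags()
print(min(lags))  # positive: every segment leads its dorsal neighbour
```

In the locked state the entrained frequency is the mean of the intrinsic frequencies, and the lag profile along the chain plays the role of the propagation-speed variation the abstract mentions.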


EVOLUTION, Issue 2 2010
Arne Traulsen
Evolutionary game theory is a general mathematical framework that describes the evolution of social traits. This framework forms the basis of many multilevel selection models and is also frequently used to model evolutionary dynamics on networks. Kin selection, which was initially restricted to describing social interactions between relatives, has also led to a broader mathematical approach, inclusive fitness, that can describe social evolution not only among relatives but also in group-structured populations or on social networks. It turns out that the underlying mathematics of game theory is fundamentally different from the approach of inclusive fitness. Thus, both approaches, evolutionary game theory and inclusive fitness, can be helpful for understanding the evolution of social traits in group-structured or spatially extended populations. [source]
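As a concrete instance of such a framework, here is a minimal replicator-dynamics sketch; the payoff matrix is an illustrative Prisoner's Dilemma, not taken from the article:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i (f_i - fbar)."""
    n = len(x)
    f = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    fbar = sum(xi * fi for xi, fi in zip(x, f))   # mean fitness
    return [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]

# Illustrative Prisoner's Dilemma payoffs: defection strictly dominates.
payoff = [[3.0, 0.0],   # cooperator vs (C, D)
          [5.0, 1.0]]   # defector vs (C, D)
x = [0.9, 0.1]          # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x, payoff)
print(x)  # defectors take over in a well-mixed population
```

Group structure or network reciprocity, which the abstract discusses, would modify the dynamics above; in the well-mixed baseline, cooperation cannot persist.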

Hyporheic Exchange in Mountain Rivers I: Mechanics and Environmental Effects

Daniele Tonina
Hyporheic exchange is the mixing of surface and shallow subsurface water through porous sediment surrounding a river and is driven by spatial and temporal variations in channel characteristics (streambed pressure, bed mobility, alluvial volume and hydraulic conductivity). The significance of hyporheic exchange in linking fluvial geomorphology, groundwater, and riverine habitat for aquatic and terrestrial organisms has emerged in recent decades as an important component of conserving, managing, and restoring riverine ecosystems. Here, we review the causes and environmental effects of hyporheic exchange, and provide a simple mathematical framework for examining the mechanics of exchange. A companion paper explores the potential effects of channel morphology on exchange processes and the hyporheic environments that may result in mountain basins (Buffington and Tonina 2009). [source]

Bearing capacity of shallow foundations in transversely isotropic granular media

A. Azami
Abstract The main focus in this work is on the assessment of bearing capacity of a shallow foundation in an inherently anisotropic particulate medium. Both the experimental and numerical investigations are carried out using a crushed limestone with elongated angular-shaped aggregates. The experimental study involves small-scale model tests aimed at examining the variation of bearing capacity as a function of the angle of deposition of the material. In addition, the results of a series of triaxial and direct shear tests are presented and later employed to identify the material functions/parameters. The numerical part of this work is associated with the development and implementation of a constitutive framework that describes the mechanical response of transversely isotropic frictional materials. The framework is based on elastoplasticity and accounts for the effects of strain localization and inherent anisotropy of both the deformation and strength characteristics. The results of numerical simulations are compared with the experimental data. A parametric study is also carried out aimed at examining the influence of various simplifications in the mathematical framework on its predictive abilities. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Design, analysis, and synthesis of generalized single step single solve and optimal algorithms for structural dynamics

X. Zhou
Abstract The primary objectives of the present exposition are to: (i) provide a generalized unified mathematical framework and setting leading to the unique design of computational algorithms for structural dynamic problems encompassing the broad scope of linear multi-step (LMS) methods and within the limitation of the Dahlquist barrier theorem (Reference [3], G. Dahlquist, BIT 1963; 3: 27), and also leading to new designs of numerically dissipative methods with optimal algorithmic attributes that cannot be obtained employing existing frameworks in the literature, (ii) provide a meaningful characterization of various numerical dissipative/non-dissipative time integration algorithms both new and existing in the literature based on the overshoot behavior of algorithms leading to the notion of algorithms by design, (iii) provide design guidelines on selection of algorithms for structural dynamic analysis within the scope of LMS methods. For structural dynamics problems, first the so-called linear multi-step methods (LMS) are proven to be spectrally identical to a newly developed family of generalized single step single solve (GSSSS) algorithms. The design, synthesis and analysis of the unified framework of computational algorithms based on the overshooting behavior, and additional algorithmic properties such as second-order accuracy, and unconditional stability with numerical dissipative features yields three sub-classes of practical computational algorithms: (i) zero-order displacement and velocity overshoot (U0-V0) algorithms; (ii) zero-order displacement and first-order velocity overshoot (U0-V1) algorithms; and (iii) first-order displacement and zero-order velocity overshoot (U1-V0) algorithms (the remainder involving high-orders of overshooting behavior are not considered to be competitive from practical considerations). 
Within each sub-class of algorithms, further distinction is made between the design leading to optimal numerical dissipative and dispersive algorithms, the continuous acceleration algorithms and the discontinuous acceleration algorithms that are subsets, and correspond to the designed placement of the spurious root at the low-frequency limit or the high-frequency limit, respectively. The conclusion and design guidelines demonstrating that the U0-V1 algorithms are only suitable for given initial velocity problems, the U1-V0 algorithms are only suitable for given initial displacement problems, and the U0-V0 algorithms are ideal for either or both cases of given initial displacement and initial velocity problems are finally drawn. For the first time, the design leading to optimal algorithms in the context of a generalized single step single solve framework and within the limitation of the Dahlquist barrier that maintains second-order accuracy and unconditional stability with/without numerically dissipative features is described for structural dynamics computations; thereby, providing closure to the class of LMS methods. Copyright © 2003 John Wiley & Sons, Ltd. [source]
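The GSSSS family itself is too broad for a short example, but the flavour of a single step single solve scheme can be seen in the classical Newmark average-acceleration method, a second-order accurate, unconditionally stable, non-dissipative member of the LMS class. This familiar special case is offered only as an illustration, not as the paper's generalized framework:

```python
import math

def newmark_sdof(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time stepping for m*u'' + c*u' + k*u = f(t).

    beta=1/4, gamma=1/2 gives the average-acceleration (trapezoidal)
    scheme: one linear solve per step, no numerical dissipation."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m
    for n in range(steps):
        t1 = (n + 1) * dt
        # effective stiffness and load: the single solve of each step
        keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
        reff = (f(t1)
                + m * (u / (beta * dt**2) + v / (beta * dt)
                       + (1 / (2 * beta) - 1) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u_new = reff / keff
        a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
    return u, v

# Undamped oscillator u'' + u = 0, u(0) = 1: exact solution is cos(t).
u, v = newmark_sdof(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.01, steps=628)
print(u)  # close to cos(6.28), with no amplitude decay
```

The "single solve" character is visible in `u_new = reff / keff`: for a multi-degree-of-freedom system this becomes exactly one factorized linear solve per time step.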

On open-set lattices and some of their applications in semantics

Mouw-Ching Tjiok
In this article, we present the theory of Kripke semantics, along with the mathematical framework and applications of Kripke semantics. We take the Kripke-Sato approach to define the knowledge operator in relation to Hintikka's possible worlds model, which is an application of the semantics of intuitionistic logic and modal logic. The applications are interesting from the viewpoint of agent interaction and process interaction. We propose (i) an application of possible worlds semantics, which enables the evaluation of the truth value of a conditional sentence without explicitly defining the implication operator "→", through clustering on the space of events (worlds) using the notion of neighborhood; and (ii) a semantical approach to treat discrete dynamic processes using Kripke-Beth semantics. Starting from the topological approach, we define the measure-theoretical machinery; in particular, we adopt the methods developed in stochastic processes (mainly martingales) for our semantics; this involves some Boolean algebraic (BA) manipulations. The clustering on the space of events (worlds), using the notion of neighborhood, enables us to define an accessibility relation that is necessary for the evaluation of the conditional sentence. Our approach is to take the neighborhood as an open set and look at topological properties using a metric space, in particular the so-called ε-ball; then we can perform the implication by computing Euclidean distance, whenever we introduce a certain enumerative scheme to transform the semantic objects into mathematical objects. Thus, this method provides an approach to quantify semantic notions. Combined with modal operators Ki operating on the set E, it provides a more computable way to recognize "indistinguishability" in some applications, e.g., an electronic catalogue. Because semantics used in this context is a local matter, we also propose the application of sheaf theory for passing local information to global information. 
By looking at a Kripke interpretation as a function with values in an open-set lattice 𝒪(U), which is formed by a stepwise verification process, we obtain a topological space structure. Now, using the measure-theoretical approach, taking Borel sets and Borel functions in defining measurable functions, this can be extended to treat the dynamical aspect of processes; from the stochastic process, considered as a family of random variables over a measure space (the probability space triple), we draw two strong parallels between Kripke semantics and stochastic processes (mainly martingales): first, the strong affinity between Kripke-Beth path semantics and the time path of the process; and second, the treatment of time as parametrization of the dynamic process using the techniques of filtration, adapted processes, and progressive processes. The technique provides very effective manipulation of BA in the form of random variables and σ-subalgebras under the cover of measurable functions. This enables us to adapt the computational algorithms obtained for stochastic processes to path semantics. Besides, using the technique of measurable functions, we obtain an intrinsic way to introduce the notion of a time sequence. © 2003 Wiley Periodicals, Inc. [source]
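A toy instance of Kripke forcing on an open-set lattice can make the idea concrete (the three-world frame below is invented for illustration). Up-sets of a preorder play the role of opens, and implication is evaluated without an explicit "→" connective, as the largest open W with W ∩ U ⊆ V:

```python
def up_set(W, worlds, leq):
    """True if W is upward closed: an 'open' in the Alexandrov topology."""
    return all(y in W for x in W for (a, y) in leq if a == x)

def implies(U, V, worlds, leq):
    """Kripke forcing of intuitionistic implication: w forces U -> V iff
    every w' with w <= w' that lies in U also lies in V. The result is
    the Heyting implication in the open-set lattice of up-sets."""
    return {w for w in worlds
            if all(y in V for (a, y) in leq if a == w and y in U)}

# A three-world frame: w0 sits below the two incomparable worlds w1, w2.
worlds = ["w0", "w1", "w2"]
leq = {(w, w) for w in worlds} | {("w0", "w1"), ("w0", "w2")}

print(implies({"w1"}, set(), worlds, leq))  # worlds that can never reach w1
```

Note that the result of `implies` is automatically an up-set, i.e. an element of the same open-set lattice, which is the point of interpreting formulas as opens.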

Hybrid modeling of inulinase bio-production process

Marcio A. Mazutti
Abstract BACKGROUND: A potential application of inulinase in the food industry is the production of fructooligosaccharides (FOS) through transfructosilation of sucrose. Besides their ability to increase the shelf-life and flavor of many products, FOS have many interesting functional properties. The use of an industrial medium may represent a good, cost-effective alternative to produce inulinase, since the activity of the enzyme produced may be improved or at least remain the same compared with that obtained using a synthetic medium. Thus, inulinase production for use in FOS synthesis is of considerable scientific and technological appeal, as is the development of a reliable mathematical model of the process. This paper describes a hybrid neural network approach to model inulinase production in a batch bioreactor using agroindustrial residues as substrate. The hybrid modeling makes use of a series artificial neural network to estimate the kinetic parameters of the process and the mass balance as constitutive equations. RESULTS: The proposed model was shown to be capable of describing the complex behavior of inulinase production employing agroindustrial residues as substrate, so that the mathematical framework developed is a useful tool for simulation of this process. CONCLUSION: The hybrid neural network model developed was shown to be an interesting alternative to estimate model parameters since complete elucidation of the phenomena and mechanisms involved in the fermentation is not required owing to the black-box nature of the ANN used as parameter estimator. Copyright © 2010 Society of Chemical Industry [source]
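The serial hybrid structure, a black-box kinetic estimator feeding first-principles mass balances, can be sketched as follows. The Monod-like surrogate merely stands in for the trained ANN, and all constants are invented for illustration:

```python
def ann_mu(S):
    """Stand-in for the trained ANN: maps measured substrate S to the
    specific growth rate. A smooth Monod-like surrogate keeps the sketch
    self-contained (a real hybrid model would call the trained network)."""
    return 0.3 * S / (0.5 + S)

def hybrid_batch(X0=0.1, S0=10.0, Yxs=0.4, dt=0.01, t_end=24.0):
    """Serial hybrid model: black-box kinetics (ann_mu) plugged into the
    first-principles batch balances dX/dt = mu*X, dS/dt = -mu*X/Yxs."""
    X, S, t = X0, S0, 0.0
    while t < t_end:
        mu = ann_mu(S)       # data-driven part
        dX = mu * X * dt     # mechanistic part: balances close exactly
        X += dX
        S = max(0.0, S - dX / Yxs)
        t += dt
    return X, S

X, S = hybrid_batch()
print(X, S)  # biomass plateaus once the substrate is exhausted
```

The division of labour is the point of the hybrid approach: the ANN absorbs the poorly understood kinetics, while conservation of mass is guaranteed by the balance equations rather than learned from data.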

On quantum statistical inference

Ole E. Barndorff-Nielsen
Summary. Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and interrelates some new concepts for an extension of classical statistical inference to the quantum context. [source]

In situ Pressure Fluctuations of Polymer Melt Flow Instabilities: Experimental Evidence about their Origin and Dynamics

Humberto Palza
Abstract Despite the practical importance of polymer melt instabilities, there is still a lack of experiments able to characterize in situ the origin and behavior of these phenomena. In this context, a new set-up consisting of highly sensitive pressure transducers located inside a slit die, together with an advanced mathematical framework to process in situ measurements of polymer melt instabilities, is developed and applied. Our results show for the first time that pressure oscillations can actually be detected inside the die under sharkskin conditions. This capability stems from improvements by factors of 10³ and 10² in time and pressure resolution, respectively. Furthermore, new evidence for the propagation of the slip phenomena along the die in spurt instabilities is found. [source]

Single-step nonlinear diffusion tensor estimation in the presence of microscopic and macroscopic motion,

Murat Aksoy
Abstract Patient motion can cause serious artifacts in diffusion tensor imaging (DTI), diminishing the reliability of the estimated diffusion tensor information. Studies in this field have so far been limited mainly to the correction of minuscule physiological motion. In order to correct for gross patient motion it is not sufficient to correct for misregistration between successive shots; the change in the diffusion-encoding direction must also be accounted for. This becomes particularly important for multishot sequences, whereby, in the presence of motion, each shot is encoded with a different diffusion weighting. In this study a general mathematical framework to correct for gross patient motion present in a multishot and multicoil DTI scan is presented. A signal model is presented that includes the effect of rotational and translational motion in the patient frame of reference. This model was used to create a nonlinear least-squares formulation, from which the diffusion tensors were obtained using a nonlinear conjugate gradient algorithm. Applications to both phantom simulations and in vivo studies showed that in the case of gross motion the proposed algorithm performs superiorly compared to conventional methods used for tensor estimation. Magn Reson Med 59:1138-1150, 2008. © 2008 Wiley-Liss, Inc. [source]
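The key point, that gross rotation changes the effective diffusion encoding, can be checked with a small sketch of the standard single-tensor signal model (the tensor values and the rotation below are illustrative):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def signal(D, g, b=1000.0, S0=1.0):
    """Single-tensor DTI signal model: S = S0 * exp(-b * g^T D g),
    with b in s/mm^2 and D in mm^2/s."""
    quad = sum(g[i] * D[i][j] * g[j] for i in range(3) for j in range(3))
    return S0 * math.exp(-b * quad)

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Anisotropic tensor (mm^2/s) with its principal axis along x.
D = [[1.7e-3, 0, 0], [0, 0.3e-3, 0], [0, 0, 0.3e-3]]
g = [1.0, 0.0, 0.0]

# If the head rotates by R between shots, the lab-frame tensor becomes
# R D R^T; the same signal results from keeping D fixed and encoding
# along the counter-rotated gradient R^T g.
R = rot_z(math.radians(30))
D_rot = matmul(matmul(R, D), transpose(R))
Rt = transpose(R)
g_rot = [sum(Rt[i][j] * g[j] for j in range(3)) for i in range(3)]
print(abs(signal(D_rot, g) - signal(D, g_rot)))  # ~0: identical measurement
```

This identity, g^T (R D R^T) g = (R^T g)^T D (R^T g), is why per-shot rotations must be folded into the encoding directions before tensor fitting, as the paper's least-squares formulation does.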

Analytical results for 2-D non-rectilinear waveguides based on a Green's function

Giulio Ciraolo
Abstract We consider the problem of wave propagation for a 2-D rectilinear optical waveguide which presents some perturbation. We construct a mathematical framework to study such a problem and prove the existence of a solution for the case of small imperfections. Our results are based on the knowledge of a Green's function for the rectilinear case. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Molecular ecology of social behaviour: analyses of breeding systems and genetic structure

Kenneth G. Ross
Abstract Molecular genetic studies of group kin composition and local genetic structure in social organisms are becoming increasingly common. A conceptual and mathematical framework that links attributes of the breeding system to group composition and genetic structure is presented here, and recent empirical studies are reviewed in the context of this framework. Breeding system properties, including the number of breeders in a social group, their genetic relatedness, and skew in their parentage, determine group composition and the distribution of genetic variation within and between social units. This group genetic structure in turn influences the opportunities for conflict and cooperation to evolve within groups and for selection to occur among groups or clusters of groups. Thus, molecular studies of social groups provide the starting point for analyses of the selective forces involved in social evolution, as well as for analyses of other fundamental evolutionary problems related to sex allocation, reproductive skew, life history evolution, and the nature of selection in hierarchically structured populations. The framework presented here provides a standard system for interpreting and integrating genetic and natural history data from social organisms for application to a broad range of evolutionary questions. [source]

A unified mathematical framework for the measurement of richness and evenness within and among multiple communities

OIKOS, Issue 2 2004
Thomas D. Olszewski
Biodiversity can be divided into two aspects: richness (the number of species or other taxa in a community or sample) and evenness (a measure of the distribution of relative abundances of different taxa in a community or sample). Sample richness is typically evaluated using rarefaction, which normalizes for sample size. Evenness is typically summarized in a single value. It is shown here that Hurlbert's probability of interspecific encounter (PIE), a commonly used sample-size-independent measure of evenness, equals the slope of the steepest part of the rising limb of a rarefaction curve. This means that rarefaction curves provide information on both aspects of diversity. In addition, regional diversity (gamma) can be broken down into the diversity within local communities (alpha) and differences in taxonomic composition among local communities (beta). Beta richness is expressed by the difference between the composite rarefaction curve of all samples in a region and the collector's curve for the same samples. The difference in the initial slopes of these two curves reflects beta evenness, owing to the relationship between rarefaction and PIE. This relationship can be further extended to help interpret species-area curves (SACs). As previous authors have described, rarefaction provides the null hypothesis of passive sampling for SACs, which can be interpreted as regional collector's curves. This allows evaluation of richness and evenness at local and regional scales using a single family of well-established, mathematically related techniques. [source]
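The stated link between evenness and the rarefaction curve can be verified numerically: with Hurlbert's formulas, the rise of the rarefaction curve from n = 1 to n = 2 equals PIE exactly (the abundance vector below is illustrative):

```python
from math import comb

def rarefaction(counts, n):
    """Hurlbert's expected species richness in a subsample of n individuals,
    E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)]."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

def pie(counts):
    """Hurlbert's probability of interspecific encounter: the chance that
    two individuals drawn without replacement belong to different species."""
    N = sum(counts)
    return 1 - sum(Ni * (Ni - 1) for Ni in counts) / (N * (N - 1))

counts = [50, 30, 15, 4, 1]  # illustrative abundance distribution
# Initial slope of the rarefaction curve vs PIE:
print(rarefaction(counts, 2) - rarefaction(counts, 1), pie(counts))
```

Since E[S₁] = 1 and E[S₂] = 1 + P(two draws differ), the identity E[S₂] - E[S₁] = PIE holds for any abundance vector, which is the slope relationship the abstract describes.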

Bound and unextractable pesticidal plant residues: chemical characterization and consumer exposure

Heinrich Sandermann Jr
Abstract Plants are well known to incorporate pesticides into bound and unextractable residues that resist solubilization in common laboratory solvents and are therefore not accessible to standard residue analysis. A characterization of such residues has been proposed for incorporation rates above trigger values of 0.05 mg kg⁻¹ parent pesticide equivalents, or percentage values of 10% (United States Environmental Protection Agency, 1995) or 25% (Commission of the European Communities, 1997) of the total radioactive residue. These trigger values are often exceeded. The present review describes the current status of the chemical characterization and animal bioavailability of bound and unextractable residues that may be xenobiotic in nature or result from natural recycling of simple degradation products. The latter case represents a mechanism of detoxification. Bound residues have been shown to be covalent or non-covalent in nature. With regard to the plant matrix molecules involved, incorporation into proteins, lignins, pectins, hemicelluloses and cutins has been demonstrated, and four covalent linkage types are known. Animal feeding experiments have revealed cases of low as well as high bioavailability. Many of the studies are limited by experimental uncertainties and by results only being reported as relative percentage values rather than absolute exposure. A preliminary value of absolute exposure from bound and unextractable residues is derived here for the first time from eight case studies. The mean exposure (ca 1.5 mg kg⁻¹ pesticidal equivalents) exceeds some of the existing maximum residue levels (MRLs) of residual free pesticides that are typically in the range of 0.05-1 mg kg⁻¹. A mathematical framework for the correction of current maximum residue levels is presented for cases of highly bioavailable bound residues. 
As bound pesticidal residues in food plants could represent a source of significant consumer exposure, an experimental test scheme is proposed here. It consists of basic chemical characterization, model digestibility tests and, in exceptional cases, animal bioavailability and additional toxicological studies. Copyright © 2004 Society of Chemical Industry [source]

Differential geometry: a natural tool for describing symmetry operations

Philippe Kocian
Differential geometry provides a useful mathematical framework for describing the fundamental concepts in crystallography. The notions of point and associated vector spaces correspond to those of manifold and tangent space at a given point. A space-group operation is a one-to-one map acting on the manifold, whereas a point-group operation is a linear map acting between two tangent spaces of the manifold. Manifold theory proves particularly powerful as a unified formalism describing symmetry operations of conventional as well as modulated crystals without requiring a higher-dimensional space. We show, in particular, that a modulated structure recovers a three-dimensional periodicity in any tangent space and that its point group consists of linear applications. [source]