Mathematical Functions (mathematical + function)

Selected Abstracts


Computational form-finding of tension membrane structures - Non-finite element approaches: Part 1.

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2003
Use of cubic splines in finding minimal surface membranes
Abstract This paper, presented in three parts, discusses a computational methodology for form-finding of tension membrane structures (TMS), or fabric structures, used as roofing forms. The term 'form-finding' describes a process of finding the shape of a TMS under its initial tension. Such a shape is neither known a priori, nor can it be described by a simple mathematical function. The work is motivated by the need to provide an efficient numerical tool, which will allow a better integration of the design/analysis/manufacture of TMS. A particular category of structural forms is considered, known as minimal surface membranes (such as can be reproduced by soap films). The numerical method adopted throughout is dynamic relaxation (DR) with kinetic damping. Part 1 describes a new form-finding approach, based on the Laplace-Young equation and cubic spline fitting to give a full, piecewise, analytical description of a minimal surface. The advantages arising from the approach, particularly with regard to manufacture of cutting patterns for a membrane, are highlighted. Part 2 describes an alternative and novel form-finding approach, based on a constant tension field and faceted (triangular mesh) representation of the minimal surface. It presents techniques for controlling mesh distortion and discusses effects of mesh control on the accuracy and computational efficiency of the solution, as well as on the subsequent stages in design. Part 3 gives a comparison of the performance of the initial method (Part 1) and the faceted approximations (Part 2). Functional relations, which encapsulate the numerical efficiency of each method, are presented. Copyright © 2002 John Wiley & Sons, Ltd. [source]
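As an illustration of the cubic-spline step described above, the following minimal sketch (Python, not the authors' code) fits a natural cubic spline through hypothetical relaxed nodal heights along one generator curve of a soap-film-like surface, giving the kind of piecewise analytical description from which cutting-pattern quantities such as developed lengths can be computed.

```python
# Minimal sketch (not the authors' code): natural cubic spline through
# hypothetical relaxed nodal heights of a membrane generator curve,
# giving a piecewise analytical description of the surface section.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical radial positions and relaxed heights (e.g. from a
# dynamic-relaxation run on a catenoid-like soap film).
r = np.linspace(0.5, 2.0, 8)
z = 0.5 * np.cosh((r - 1.25) / 0.5)              # stand-in for relaxed nodal heights

spline = CubicSpline(r, z, bc_type="natural")    # full piecewise-cubic description

# The analytical form gives slopes anywhere on the curve, which is what
# makes cutting-pattern quantities such as developed lengths convenient.
r_fine = np.linspace(r[0], r[-1], 200)
dz_dr = spline(r_fine, 1)                        # first derivative of the spline
seg = np.sqrt(1.0 + dz_dr ** 2)
arc_length = np.sum(0.5 * (seg[1:] + seg[:-1]) * np.diff(r_fine))
print(f"developed length of the generator curve: {arc_length:.4f}")
```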


Computational form-finding of tension membrane structures - Non-finite element approaches: Part 2.

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2003
Triangular mesh discretization, control of mesh distortion in modelling minimal surface membranes
Abstract This paper, presented in three parts, discusses a computational methodology for form-finding of tension membrane structures (TMS), or fabric structures, used as roofing forms. The term 'form-finding' describes a process of finding the shape of a TMS under its initial tension. Such a shape is neither known a priori, nor can it be described by a simple mathematical function. The work is motivated by the need to provide an efficient numerical tool, which will allow a better integration of the design/analysis/manufacture of TMS. A particular category of structural forms is considered, known as minimal surface membranes (such as can be reproduced by soap films). The numerical method adopted throughout is dynamic relaxation (DR) with kinetic damping. Part 1 gave a background to the problem of TMS design, described the DR method, and presented a new form-finding methodology, based on the Laplace-Young equation and cubic spline fitting to give a full, piecewise, analytical description of the surface. Part 2 describes an alternative and novel form-finding method, based on a constant tension field and faceted (triangular mesh) representation of the minimal surface. Techniques for controlling mesh distortion are presented, and their effects on the accuracy and computational efficiency of the solution, as well as on the subsequent stages in design, are examined. Part 3 gives a comparison of the performance of the initial method (Part 1) and the faceted approximations (Part 2). Functional relations, which encapsulate the numerical efficiency of each method, are presented. Copyright © 2002 John Wiley & Sons, Ltd. [source]


gm-Extraction for rail-to-rail input stage linearization

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 6 2005
F. Palma
Abstract The transconductance of rail-to-rail input stages in low-voltage operational amplifiers varies in the presence of a large common-mode input signal, and corrections must be implemented to compensate for this variation. Nevertheless, the techniques currently used, based on switching or feedforward, still show significant deviation from the constant-transconductance condition. In this paper we present a new architecture in which the transconductance of the amplifier to be controlled is extracted directly and fed back to the gain control. This quantity does not contain the signal to be amplified, and thus, once fed back, it does not affect the overall stage gain. A 'reciprocal' circuit, which performs the 1/x mathematical function, is introduced in order to achieve this extraction. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Plant traits as predictors of woody species dominance in climax forest communities

JOURNAL OF VEGETATION SCIENCE, Issue 3 2001
Fumito Koike
Nomenclature: Satake et al. (1989). Abstract. The dominance of a given tree or shrub species in a particular forest community may be determined by many ecological traits of the target species, as well as those of the surrounding species as its potential competitors. The present study was conducted to evaluate the possibility of predicting community status (species composition and dominance) on the basis of traits of the local flora using statistical methods, and to visualize the mathematical function which determines species dominance. A general linear model and logistic regression were used for the statistical analysis. Dependent variables were dominance and presence/absence of species in climax forest; independent variables were vegetative and reproductive traits. Subalpine, cool-temperate, warm-temperate and subtropical climax rain forests in East Asia were studied. Quantitative prediction of climax community status could readily be made based on easily measured traits of the local flora. Species composition and 74.6% of the total variance of species dominance were predicted based on two traits: maximum height and shade tolerance. Through application of this method, the capacity of an alien species to invade a climax forest community could possibly be predicted prior to introduction of the alien species. [source]
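The statistical step described above (a linear model for dominance and logistic regression for presence/absence, driven by maximum height and shade tolerance) can be sketched as follows; the trait data and coefficients are entirely hypothetical and serve only to show the shape of the analysis.

```python
# Illustrative sketch only: predicting species dominance and presence/absence
# from two traits (maximum height, shade tolerance), in the spirit of the
# study's linear model / logistic regression. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 40
max_height = rng.uniform(1, 35, n)          # m, hypothetical local flora
shade_tol = rng.uniform(0, 1, n)            # 0 = intolerant, 1 = very tolerant

# Hypothetical "true" relationships used to simulate observations.
dominance = 0.02 * max_height + 0.5 * shade_tol + rng.normal(0, 0.1, n)
present = (dominance + rng.normal(0, 0.1, n) > 0.45).astype(int)

X = np.column_stack([max_height, shade_tol])

glm = LinearRegression().fit(X, dominance)                       # dominance model
logit = LogisticRegression(max_iter=1000).fit(X, present)        # presence/absence model

print("variance in dominance explained (R^2):", round(glm.score(X, dominance), 3))
print("presence classification accuracy:", round(logit.score(X, present), 3))
```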


Design and creation of an experimental program of advanced training in reconstructive microsurgery

MICROSURGERY, Issue 6 2006
Andrés R. Lorenzo M.D.
In this study, we designed an experimental protocol for enhancing performance in microsurgery training. It is based on five free tissue transfer exercises in the rat (epigastric cutaneous flap, saphenous fasciocutaneous flap, epigastric neurovascular flap, saphenous muscular flap, and hindlimb replantation), which simulate the principal clinical procedures of reconstructive microsurgery. The first part of the study consists of an anatomical review of the flaps in 5 rats; in the second part, we carried out the free transfer of flaps in 25 rats divided into 5 groups. To differentiate between them, we created a mathematical function, referred to as the difficulty of a microsurgical exercise, which enabled us to establish a scale of progression for training, ranging from the easiest to the most difficult. In conclusion, we believe that this protocol is a useful instrument, as it allows for a more precise assessment of microsurgical capacity due to enhanced accuracy in the reproduction of global procedures and the fact that the quantification of progress in training is based on clinical monitoring after 7 days. © 2006 Wiley-Liss, Inc. Microsurgery, 2006. [source]


Technical note: Forearm pronation efficiency analysis in skeletal remains

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 3 2009
Ignasi Galtés
Abstract This work presents an original methodology for analyzing forearm-pronation efficiency from skeletal remains and its variation with changes in elbow position. The methodology is based on a biomechanical model that defines rotational efficiency as a mathematical function expressing a geometrical relationship between the origin and insertion of the pronator teres. The methodology uses photography of the humeral distal epiphysis, from which the geometrical parameters for the efficiency calculation can be obtained. Rotational efficiency is analyzed in a human specimen and in a living nonhuman hominoid (Symphalangus syndactylus) for full elbow extension (180°) and an intermediate elbow position (90°). In both specimens, the results show that this rotational-efficiency parameter varies throughout the entire rotational range and depends on the elbow joint position. The rotational efficiency of the siamang's pronator teres is less affected by flexion of the forearm than that of the human. The fact that forearm-pronation efficiency can be inferred, and even quantified, allows us to interpret more precisely the functional and evolutionary significance of upper-limb skeletal design in extant and fossil primate taxa. Am J Phys Anthropol 2009. © 2009 Wiley-Liss, Inc. [source]
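The paper's exact efficiency function is not reproduced here, but a generic geometric stand-in conveys the idea: the fraction of the pronator teres force that acts tangentially about the forearm rotation axis, evaluated from origin and insertion coordinates. The landmark coordinates below are hypothetical.

```python
# Hedged sketch: rotational efficiency taken as the fraction of the muscle
# force that acts tangentially about the forearm's pronation axis, computed
# from origin and insertion coordinates. This is a generic biomechanical
# formulation, not necessarily the paper's exact function; the landmark
# coordinates below are hypothetical.
import numpy as np

def rotational_efficiency(origin, insertion, axis_point, axis_dir):
    """|tangential component| / |total force| for a pull from insertion to origin."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    force_dir = origin - insertion
    force_dir = force_dir / np.linalg.norm(force_dir)
    r = insertion - axis_point
    r_perp = r - np.dot(r, axis_dir) * axis_dir        # radius vector off the axis
    tangent = np.cross(axis_dir, r_perp)
    tangent = tangent / np.linalg.norm(tangent)        # direction of rotation at insertion
    return abs(np.dot(force_dir, tangent))

origin = np.array([0.0, 3.0, 0.0])                     # pronator teres origin (hypothetical, cm)
axis_point = np.array([0.0, 0.0, 0.0])                 # a point on the rotation axis
axis_dir = np.array([0.0, 0.0, 1.0])                   # axis direction

# Hypothetical insertion positions in the same frame for two elbow configurations.
for elbow_deg, insertion in [(180, np.array([2.0, 0.5, -12.0])),
                             (90,  np.array([2.0, 0.5, -8.0]))]:
    eff = rotational_efficiency(origin, insertion, axis_point, axis_dir)
    print(f"elbow at {elbow_deg} deg: rotational efficiency = {eff:.3f}")
```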


An Automatic Building Approach To Special Takagi-Sugeno Fuzzy Network For Unknown Plant Modeling And Stable Control

ASIAN JOURNAL OF CONTROL, Issue 2 2003
Chia-Feng Juang
ABSTRACT In previous studies, several stable controller design methods for plants represented by a special Takagi-Sugeno fuzzy network (STSFN) have been proposed. In these studies, however, the STSFN is derived directly from the mathematical function of the controlled plant. For an unknown plant, this poses a problem: the STSFN may not model the plant successfully. In order to address this problem, we have derived a learning algorithm for constructing an STSFN from input-output training data. Based upon the constructed STSFN, existing stable controller design methods can then be applied to an unknown plant. To verify this, stable fuzzy controller design by the parallel distributed compensation (PDC) method is adopted. In the PDC method, the precondition parts of the designed fuzzy controllers share the same fuzzy rule numbers and fuzzy sets as the STSFN. To reduce the number of controller rules, the precondition part of the constructed STSFN is partitioned in a flexible way. Also, a similarity measure together with a merging operation between neighboring fuzzy sets is applied in each input dimension to eliminate redundant fuzzy sets. The consequent parts of the STSFN are designed by a correlation measure, so that only the significant input terms participate in each rule's consequence, reducing the number of network parameters. Simulation results for the cart-pole balancing system have shown that, with the proposed STSFN building approach, we are able to model the controlled plant with high accuracy and, in addition, can design a stable fuzzy controller with a small number of parameters. [source]
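The fuzzy-set reduction step can be sketched generically: neighbouring Gaussian membership functions in one input dimension are merged when an overlap-based similarity exceeds a threshold. The measure and threshold below are illustrative assumptions, not necessarily those used in the paper.

```python
# Sketch of fuzzy-set merging by similarity (generic version, not
# necessarily the paper's exact measure). Two Gaussian membership
# functions are merged when their overlap-based similarity exceeds a
# threshold; the merged set averages centres and widths.
import numpy as np

def similarity(c1, s1, c2, s2, grid=np.linspace(-5, 5, 1001)):
    """Jaccard-like similarity: |A intersect B| / |A union B| on a discretised universe."""
    a = np.exp(-0.5 * ((grid - c1) / s1) ** 2)
    b = np.exp(-0.5 * ((grid - c2) / s2) ** 2)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def merge_sets(sets, threshold=0.6):
    """Greedily merge neighbouring (centre-sorted) fuzzy sets."""
    sets = sorted(sets)                      # list of (centre, sigma)
    merged = [sets[0]]
    for c, s in sets[1:]:
        c0, s0 = merged[-1]
        if similarity(c0, s0, c, s) > threshold:
            merged[-1] = ((c0 + c) / 2, (s0 + s) / 2)   # merge redundant pair
        else:
            merged.append((c, s))
    return merged

fuzzy_sets = [(-2.0, 0.8), (-1.7, 0.7), (0.0, 1.0), (1.9, 0.9), (2.1, 0.8)]
print("before:", len(fuzzy_sets), "sets ->", "after:", len(merge_sets(fuzzy_sets)), "sets")
```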


Teaching image processing: A two-step process

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2008
Clarence Han-Wei Yapp
Abstract An interactive program for teaching digital image processing techniques is presented in this article. Instead of heavy programming tasks and mathematical functions, students are led step by step through the exercises and then allowed to experiment. This article evaluates the proposed program and compares it with existing techniques. © 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 211-222, 2008; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20149 [source]


Combined neural network model to compute wavelet coefficients

EXPERT SYSTEMS, Issue 3 2006
İnan Güler
Abstract: In recent years a novel model based on artificial neural network technology has been introduced in the signal processing community for modelling the signals under study. The wavelet coefficients characterize the behaviour of the signal, and computation of the wavelet coefficients is particularly important for recognition and diagnostic purposes. Therefore, we dealt with wavelet decomposition of time-varying biomedical signals. In the present study, we propose a new approach that takes advantage of combined neural network (CNN) models to compute the wavelet coefficients. The computation was demonstrated by applying the CNNs to ophthalmic arterial and internal carotid arterial Doppler signals. The results were consistent with theoretical analysis and showed good promise for discrete wavelet transform of time-varying biomedical signals. Since the proposed CNNs have high performance and require no complicated mathematical functions of the discrete wavelet transform, they were found to be effective for the computation of wavelet coefficients. [source]
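A minimal sketch of the idea, assuming a Daubechies-4 wavelet, three decomposition levels and synthetic Doppler-like signals (none of which are taken from the paper): a small neural network is trained to reproduce the discrete wavelet coefficients directly from the raw samples.

```python
# Hedged sketch: approximate discrete-wavelet-transform coefficients of
# short signals with a small neural network, as a stand-in for the paper's
# combined neural network (CNN) model. Wavelet family, level and the
# synthetic "Doppler-like" signals are all assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64, endpoint=False)

def doppler_like(f):
    return np.sin(2 * np.pi * f * t) * np.exp(-3 * t) + 0.05 * rng.normal(size=t.size)

signals = np.array([doppler_like(f) for f in rng.uniform(2, 8, 300)])
targets = np.array([np.concatenate(pywt.wavedec(s, "db4", level=3)) for s in signals])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(signals[:250], targets[:250])

# Compare network output with the true DWT on held-out signals.
pred = net.predict(signals[250:])
rmse = np.sqrt(np.mean((pred - targets[250:]) ** 2))
print(f"held-out RMSE on wavelet coefficients: {rmse:.4f}")
```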


High-dimensional model representation for structural reliability analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2009
Rajib Chowdhury
Abstract This paper presents a new computational tool for predicting failure probability of structural/mechanical systems subject to random loads, material properties, and geometry. The method involves high-dimensional model representation (HDMR) that facilitates lower-dimensional approximation of the original high-dimensional implicit limit state/performance function, response surface generation of HDMR component functions, and Monte Carlo simulation. HDMR is a general set of quantitative model assessment and analysis tools for capturing the high-dimensional relationships between sets of input and output model variables. It is a very efficient formulation of the system response, if higher-order variable correlations are weak, allowing the physical model to be captured by the first few lower-order terms. Once the approximate form of the original implicit limit state/performance function is defined, the failure probability can be obtained by statistical simulation. Results of nine numerical examples involving mathematical functions and structural mechanics problems indicate that the proposed method provides accurate and computationally efficient estimates of the probability of failure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
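A toy sketch of the workflow described above, with a made-up three-variable limit-state function: first-order (cut-)HDMR component functions are built from one-dimensional sweeps about the mean point, and Monte Carlo simulation on the resulting surrogate yields the failure probability.

```python
# Toy sketch of first-order (cut-)HDMR followed by Monte Carlo simulation.
# The limit-state function g and the input distributions are made up.
import numpy as np
from scipy.interpolate import interp1d

def g(x):                                       # hypothetical performance function
    return x[0] * x[1] - x[2]                   # failure when g < 0

mu = np.array([2.0, 1.0, 1.0])                  # means of independent normal inputs
sigma = np.array([0.3, 0.2, 0.3])               # standard deviations

# Build univariate component functions g_i(x_i) = g(mu with x_i varied) - g(mu).
g0 = g(mu)
components = []
for i in range(3):
    xi = np.linspace(mu[i] - 4 * sigma[i], mu[i] + 4 * sigma[i], 9)
    vals = []
    for v in xi:
        x = mu.copy()
        x[i] = v
        vals.append(g(x) - g0)
    components.append(interp1d(xi, vals, kind="cubic", fill_value="extrapolate"))

def g_hdmr(x):                                  # first-order HDMR surrogate
    return g0 + sum(components[i](x[:, i]) for i in range(3))

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(50_000, 3))
pf_surrogate = np.mean(g_hdmr(samples) < 0.0)
pf_direct = np.mean(np.apply_along_axis(g, 1, samples) < 0.0)
print(f"P_f (HDMR surrogate) = {pf_surrogate:.4f}   P_f (direct MC) = {pf_direct:.4f}")
```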


A generalized dimension-reduction method for multidimensional integration in stochastic mechanics

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2004
H. Xu
Abstract A new, generalized, multivariate dimension-reduction method is presented for calculating statistical moments of the response of mechanical systems subject to uncertainties in loads, material properties, and geometry. The method involves an additive decomposition of an N-dimensional response function into at most S-dimensional functions, where S ≪ N; an approximation of response moments by moments of input random variables; and a moment-based quadrature rule for numerical integration. A new theorem is presented, which provides a convenient means to represent the Taylor series up to a specific dimension without involving any partial derivatives. A complete proof of the theorem is given using two lemmas, also proved in this paper. The proposed method requires neither the calculation of partial derivatives of response, as in commonly used Taylor expansion/perturbation methods, nor the inversion of random matrices, as in the Neumann expansion method. Eight numerical examples involving elementary mathematical functions and solid-mechanics problems illustrate the proposed method. Results indicate that the multivariate dimension-reduction method generates convergent solutions and provides more accurate estimates of statistical moments or multidimensional integration than existing methods, such as first- and second-order Taylor expansion methods, statistically equivalent solutions, quasi-Monte Carlo simulation, and the fully symmetric interpolatory rule. While the accuracy of the dimension-reduction method is comparable to that of the fourth-order Neumann expansion method, a comparison of CPU time suggests that the former is computationally far more efficient than the latter. Copyright © 2004 John Wiley & Sons, Ltd. [source]
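A univariate (S = 1) version of the idea can be sketched in a few lines: response moments of a hypothetical function of independent normal inputs are assembled from one-dimensional Gauss-Hermite quadratures, with no partial derivatives required.

```python
# Sketch of the univariate (S = 1) dimension-reduction idea: mean and
# variance of y = f(X1, X2, X3) with independent normal inputs estimated
# from one-dimensional Gauss-Hermite quadratures. The response function
# and input statistics are made up for illustration.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def f(x):                                     # hypothetical response function
    return x[0] ** 2 + np.sin(x[1]) + 0.5 * x[0] * x[2]

mu = np.array([1.0, 0.5, 2.0])                # means of independent normal inputs
sigma = np.array([0.2, 0.3, 0.4])             # standard deviations
N = len(mu)

z, w = hermgauss(8)                           # 8-point Gauss-Hermite rule
w = w / np.sqrt(np.pi)                        # normalise for a standard normal
z = z * np.sqrt(2.0)

f0 = f(mu)
mean, var = 0.0, 0.0
for i in range(N):
    # One-dimensional quadrature along direction i, other inputs at their means.
    vals = []
    for zk in z:
        x = mu.copy()
        x[i] = mu[i] + sigma[i] * zk
        vals.append(f(x))
    vals = np.array(vals)
    e1 = np.sum(w * vals)                     # E[f(.., X_i, ..)]
    e2 = np.sum(w * vals ** 2)
    mean += e1
    var += e2 - e1 ** 2                       # independent inputs: variances add
mean -= (N - 1) * f0                          # additive-decomposition correction

print(f"approximate mean = {mean:.4f}, approximate std = {np.sqrt(var):.4f}")
```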


Mathematical modeling of 13C label incorporation of the TCA cycle: The concept of composite precursor function

JOURNAL OF NEUROSCIENCE RESEARCH, Issue 15 2007
Kai Uffmann
Abstract A novel approach for the mathematical modeling of 13C label incorporation into amino acids via the TCA cycle that eliminates the explicit calculation of the labeling of the TCA cycle intermediates is described, resulting in one differential equation per measurable time course of labeled amino acid. The equations demonstrate that both glutamate C4 and C3 labeling depend in a predictable manner on both the transmitochondrial exchange rate, VX, and the TCA cycle rate, VTCA. For example, glutamate C4 labeling alone does not provide any information on either VX or VTCA but rather a composite "flux". Interestingly, glutamate C3 simultaneously receives label not only from pyruvate C3 but also from glutamate C4, described by composite precursor functions that depend in a probabilistic way on the ratio of VX to VTCA: an initial rate of labeling of glutamate C3 (or C2) being close to zero is indicative of a high VX/VTCA. The derived analytical solution of these equations shows that, when the labeling of the precursor pool pyruvate reaches steady state quickly compared with the turnover rate of the measured amino acids, instantaneous labeling can be assumed for pyruvate. The derived analytical solution has acceptable errors compared with experimental uncertainty, thus obviating precise knowledge of the labeling kinetics of the precursor. In conclusion, a substantial reformulation of the modeling of label flow via the TCA cycle turnover into the amino acids is presented in the current study. This approach allows one to determine metabolic rates by fitting explicit mathematical functions to measured time courses. © 2007 Wiley-Liss, Inc. [source]
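A heavily simplified sketch of such a model is given below; the rate expressions, pool size and precursor function are stand-ins chosen only to reproduce the qualitative behaviour described above (C4 driven by a composite flux, C3 fed by a composite precursor whose weights depend on VX/VTCA), not the paper's actual equations.

```python
# Heavily simplified sketch (not the paper's exact equations): one ODE per
# measured labeling time course, with a composite apparent flux
# Vgt = VX*VTCA/(VX+VTCA) driving glutamate C4, and a composite precursor
# for C3 mixing pyruvate C3 and glutamate C4 labeling with weights set by
# VX/(VX+VTCA). Pool size and rates are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

VX, VTCA = 1.0, 0.6            # umol/g/min, hypothetical
GLU = 10.0                     # glutamate pool size, umol/g, hypothetical
Vgt = VX * VTCA / (VX + VTCA)  # composite "flux" seen by glutamate C4

def pyr3(t):                   # precursor labeling, assumed to rise quickly
    return 0.4 * (1.0 - np.exp(-t / 2.0))

def rhs(t, y):
    g4, g3 = y
    dg4 = (Vgt / GLU) * (pyr3(t) - g4)
    # Composite precursor for C3: label arrives both from pyruvate C3 and,
    # via backflux, from already-labeled glutamate C4. High VX/VTCA pushes
    # the weight toward g4, giving a near-zero initial C3 labeling rate.
    w = VX / (VX + VTCA)
    precursor_c3 = (1.0 - w) * pyr3(t) + w * g4
    dg3 = (Vgt / GLU) * (precursor_c3 - g3)
    return [dg4, dg3]

sol = solve_ivp(rhs, (0.0, 120.0), [0.0, 0.0], t_eval=np.linspace(0, 120, 7))
for t, g4, g3 in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.1f} min  GluC4 = {g4:.3f}  GluC3 = {g3:.3f}")
```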


Statistical evaluation of time-dependent metabolite concentrations: estimation of post-mortem intervals based on in situ 1H-MRS of the brain

NMR IN BIOMEDICINE, Issue 3 2005
Eva Scheurer
Abstract Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks, without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. Copyright © 2004 John Wiley & Sons, Ltd. [source]
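The pooling step, weighting individual metabolite-based PMI estimates by their inverse variances, can be sketched as follows; the estimates and variances are hypothetical.

```python
# Sketch of the pooling step described in the abstract: individual PMI
# estimates from several metabolites combined, weighted by their inverse
# variances. The per-metabolite estimates and variances below are
# hypothetical, not values from the study.
import numpy as np

pmi_estimates = np.array([62.0, 55.0, 70.0, 58.0, 66.0])    # hours, per metabolite
variances     = np.array([36.0, 64.0, 100.0, 49.0, 81.0])   # hours^2

weights = 1.0 / variances
pmi_pooled = np.sum(weights * pmi_estimates) / np.sum(weights)
pooled_var = 1.0 / np.sum(weights)            # variance of the weighted mean
ci = 1.96 * np.sqrt(pooled_var)               # approximate 95% confidence half-width

print(f"pooled PMI = {pmi_pooled:.1f} h  (95% CI +/- {ci:.1f} h)")
```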


Reduction of bias in static closed chamber measurement of δ13C in soil CO2 efflux

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 2 2010
K. E. Anders Ohlsson
The 13C/12C ratio of soil CO2 efflux (δe) is an important parameter in studies of ecosystem C dynamics, where the accuracy of estimated C flux rates depends on the measurement uncertainty of δe. The static closed chamber method is frequently used in the determination of δe, where the soil CO2 efflux is accumulated in the headspace of a chamber placed on top of the soil surface. However, it has recently been shown that the estimate of δe obtained by using this method can be significantly biased, which potentially diminishes the usefulness of δe for field applications. Here, analytical and numerical models were used to express the bias in δe as mathematical functions of three system parameters: chamber height (H), chamber radius (Rc), and soil air-filled porosity (ε). These expressions allow optimization of chamber size to yield a bias at a level suitable for each particular application of the method. The numerical model was further used to quantify the effects on the δe bias of (i) various designs for sealing the chamber to the ground, and (ii) inclusion of the commonly used purging step for reduction of the initial headspace CO2 concentration. The present modeling work provided insights into the effects on the δe bias of retardation and partial chamber bypass of the soil CO2 efflux. The results presented here support the continued use of the static closed chamber method for the determination of δe, with improved control of the bias component of its measurement uncertainty. Copyright © 2009 John Wiley & Sons, Ltd. [source]
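The paper's diffusion and bias models are not reproduced here, but the conventional estimation step that the bias affects can be sketched with a Keeling plot: headspace δ13C regressed against 1/[CO2], with the intercept taken as δe. All numbers below are synthetic.

```python
# Sketch of the conventional Keeling-plot estimate of the efflux isotope
# signature from static-chamber headspace data: delta13C of headspace CO2
# regressed against 1/[CO2]; the intercept estimates delta_e. The numbers
# below are synthetic; this is not the paper's diffusion/bias model.
import numpy as np

delta_atm, c_atm = -8.5, 400.0          # ambient air (per mil, ppm), assumed
delta_e = -26.0                         # "true" source signature, assumed
co2 = np.array([420.0, 480.0, 560.0, 650.0, 750.0])   # headspace build-up, ppm

# Two-component mixing gives the headspace delta13C for each concentration.
delta_mix = (delta_atm * c_atm + delta_e * (co2 - c_atm)) / co2
delta_mix += np.random.default_rng(3).normal(0.0, 0.05, co2.size)   # analytical noise

slope, intercept = np.polyfit(1.0 / co2, delta_mix, 1)
print(f"Keeling-plot intercept (estimated delta_e): {intercept:.2f} per mil")
```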


Usefulness of Nonlinear Analysis of ECG Signals for Prediction of Inducibility of Sustained Ventricular Tachycardia by Programmed Ventricular Stimulation in Patients with Complex Spontaneous Ventricular Arrhythmias

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 3 2008
Ornella Durin M.D.
Introduction: The aim of our study was to assess the effectiveness of the nonlinear analysis (NLA) of ECG in predicting the results of invasive electrophysiologic study (EPS) in patients with ventricular arrhythmias. Methods: We evaluated 25 patients with history of cardiac arrest, syncope, sustained, or nonsustained ventricular tachycardia (VT). All patients underwent electrophysiologic study (EPS) and nonlinear analysis (NLA) of ECG. The study group was compared with a control group of 25 healthy subjects, in order to define the normal range of NLA. ECG was processed in order to obtain numerical values, which were analyzed by nonlinear mathematical functions. Patients were classified through the application of a clustering procedure to the whole set of functions, and the correlation between the results of nonlinear analysis of ECG and EPS was tested. Results: NLA assigned all patients with negative EPS to the same class of healthy subjects, whereas the patients in whom VT was inducible had been correctly and clearly isolated into a separate cluster. In our study, the result of NLA with application of the clustering technique was significantly correlated to that of EPS (P < 0.001), and was able to predict the result of EPS, with a negative predictive value of 100% and a positive predictive value of 100%. Conclusions: NLA can predict the results of EPS with good negative and positive predictive value. However, further studies are needed in order to verify the usefulness of this noninvasive tool for sudden death risk stratification in patients with ventricular arrhythmias. [source]
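The paper does not specify its nonlinear functions, so the sketch below uses generic stand-ins: a few nonlinear descriptors are computed from synthetic RR-interval series and the subjects are grouped with a clustering procedure, mirroring the pipeline described above.

```python
# Generic stand-in for the pipeline sketched in the abstract: derive a few
# nonlinear descriptors from each (synthetic) ECG-derived series and group
# subjects with a clustering procedure. The features and data here are
# illustrative; the paper's actual nonlinear functions are not specified.
import numpy as np
from scipy.stats import kurtosis
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

def synthetic_rr(irregular):
    """Toy RR-interval series: regular vs. irregular ('arrhythmic') rhythm."""
    base = 0.8 + 0.02 * np.sin(np.linspace(0, 20, 300))
    noise = 0.12 if irregular else 0.01
    return base + rng.normal(0.0, noise, 300)

def nonlinear_features(rr):
    d = np.diff(rr)
    return [kurtosis(rr),                         # heavy-tailedness of the series
            np.sqrt(np.mean(d ** 2)),             # RMSSD, sensitive to beat-to-beat dynamics
            np.mean(np.abs(d[1:] * d[:-1]))]      # crude second-order (nonlinear) term

subjects = [synthetic_rr(irregular=(i >= 25)) for i in range(50)]   # 25 "normal", 25 "abnormal"
X = np.array([nonlinear_features(rr) for rr in subjects])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```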


Theory & Methods: On the importance of being smooth

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2002
B. M. Brown
This paper makes the proposition that the only statistical analyses to achieve widespread popular use in statistical practice are those whose formulations are based on very smooth mathematical functions. The argument is made on an empirical basis, through examples. Given the truth of the proposition, the question 'why should it be so?' is intriguing, and any discussion has to be speculative. To aid that discussion, the paper starts with a list of statistical desiderata, with the view of seeing what properties are provided by underlying smoothness. This provides some rationale for the proposition. After that, the examples are considered. Methods that are widely used are listed, along with other methods which, despite impressive properties and possible early promise, have languished in the arena of practical application. Whatever the underlying causes may be, the proposition carries a worthwhile message for the formulation of new statistical methods, and for the adaptation of some of the old ones. [source]


Reference values for change in body mass index from birth to 18 years of age

ACTA PAEDIATRICA, Issue 6 2003
J Karlberg
Body mass index (BMI) has become the measure of choice for determining nutritional status during the paediatric years, as in adults. Recently, several cross-sectional childhood BMI reference standards have been published. In order to evaluate childhood nutritional interventions precisely, reference values allowing for the evaluation of changes in BMI are also needed. For the first time, such reference values can be presented, based on 3650 longitudinally followed healthy Swedish children born at full term. The reference values for the change in BMI are given as the change in BMI standard deviation scores. They are given by means of mathematical functions adjusting for gender, age of the child and the length of the interval between two measurements, for interval lengths of 0.25 to 1.0 y before 2 y of age and of 1 to 5 y between birth and 18 y. The usefulness of the reference values is demonstrated by a graph that forms part of a clinical computer program; the -2 to +2 standard deviation range of the predicted change in BMI can be computed for an individual child and drawn in the graph as extended support for clinical decision-making. Conclusion: For the first time, this communication gives access to BMI growth rate values that can be used both in research and in the clinic to evaluate various interventions, be they nutritional, surgical or therapeutic. [source]
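A sketch of the calculation these reference values support is given below: two BMI measurements are converted to standard deviation scores (here via the LMS method) and the change is compared with a reference band. The LMS parameters and the band are invented for illustration and are not the Swedish reference values.

```python
# Sketch of the computation the reference values support: convert two BMI
# measurements to standard deviation scores (LMS method) and evaluate the
# change against a reference band. The LMS parameters and the +/-2 SD band
# for the change are invented for illustration only.
def bmi_sds(bmi, L, M, S):
    """LMS transformation to a standard deviation score."""
    return ((bmi / M) ** L - 1.0) / (L * S)

# Hypothetical LMS reference values for a boy at 4 y and at 7 y of age.
sds_4y = bmi_sds(bmi=17.2, L=-1.4, M=15.7, S=0.08)
sds_7y = bmi_sds(bmi=19.0, L=-2.0, M=15.6, S=0.10)
change = sds_7y - sds_4y

# Hypothetical reference band for the change over a 3-year interval.
lower, upper = -1.1, 1.1      # stand-in for the -2 to +2 SD range of the change
flag = "within" if lower <= change <= upper else "outside"
print(f"BMI SDS change over 3 y: {change:+.2f} ({flag} the reference band)")
```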