Calculation Methods

Selected Abstracts


    Numerical derivation of contact mechanics interface laws using a finite element approach for large 3D deformation

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2004
    Alex Alves Bandeira
    Abstract In this work a homogenization method is presented to obtain, by numerical simulation, interface laws for normal contact pressure based on statistical surface models. For this purpose, and assuming elastic behaviour of the asperities, the interface law of Kragelsky et al. (Friction and Wear: Calculation Methods, Pergamon, 1982) is chosen for comparison. The non-penetration condition and interface models for contact that take into account the surface micro-structure are investigated in detail. A theoretical basis for the three-dimensional contact problem with finite deformations is briefly presented. The augmented Lagrangian method is then used to solve the contact problem with friction. The algorithms for frictional contact are derived based on a slip rule using backward Euler integration, as in plasticity. Special attention was dedicated to the consistent derivation of the contact equations between finite element surfaces. A matrix formulation for a node-to-surface contact element is derived, consisting of a master surface segment with four nodes and a contacting slave node. It was also necessary to consider the special cases of node-to-edge and node-to-node contact in order to achieve the desired asymptotic quadratic convergence of the Newton method. A numerical example is selected to show the ability of the contact formulation and the algorithm to represent the interface law for rough surfaces. Copyright © 2003 John Wiley & Sons, Ltd. [source]
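
    The augmented Lagrangian treatment of the non-penetration condition can be illustrated on a one-dimensional toy problem. The sketch below uses illustrative parameters and a simple Uzawa multiplier update; it is not the paper's node-to-surface formulation:

```python
# Minimal sketch of the augmented Lagrangian (Uzawa) treatment of the
# non-penetration condition, on a 1-DOF toy problem: a spring of stiffness
# k, loaded by force f, pressed against a rigid wall at u = g. All names
# and values (k, f, g, eps) are illustrative, not taken from the paper.

def uzawa_contact(k=100.0, f=50.0, g=0.2, eps=1.0e3, tol=1e-10, n_max=200):
    """Return (displacement u, contact pressure lam) at convergence."""
    lam = 0.0
    u = f / k
    for _ in range(n_max):
        # inner equilibrium solve of the augmented functional
        u = (f - lam + eps * g) / (k + eps)        # trial: contact active
        if lam + eps * (u - g) <= 0.0:             # augmented pressure <= 0:
            u = f / k                              # contact inactive
        lam_new = max(0.0, lam + eps * (u - g))    # multiplier update
        if abs(lam_new - lam) < tol:
            return u, lam_new
        lam = lam_new
    return u, lam

print(uzawa_contact())   # u -> g = 0.2, lam -> f - k*g = 30.0
```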


    Confidence Interval Calculation Methods Are Infrequently Reported in Emergency-medicine Literature

    ACADEMIC EMERGENCY MEDICINE, Issue 1 2007
    Amy Marr MD
    Abstract Background There are many different confidence interval calculation methods, each providing different, and in some cases inadequate, interval estimates. Readers who know which method was used are better able to understand potentially significant limitations in study reports. Objectives To quantify how often confidence interval calculation methods are disclosed by authors in four peer-reviewed North American emergency-medicine journals. Methods The authors independently performed searches of four journals for all studies in which comparisons were made between means, medians, proportions, odds ratios, or relative risks. Case reports, editorials, subject reviews, and letters were excluded. Using a standardized abstraction form developed on a spreadsheet, the authors evaluated each article for the reporting of confidence intervals and evaluated the description of the methodology used to calculate the confidence intervals. Results A total of 212 articles met the inclusion criteria. Confidence intervals were reported in 123 articles (58%; 95% CI = 51% to 64%); of these, a description of methodology was reported in 12 (9.8%; 95% CI = 5.7% to 16%). Conclusions Confidence interval calculation methods are disclosed infrequently in the emergency-medicine literature. [source]
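
    To see why the method matters, compare two standard intervals for a proportion on the study's own headline figure; this sketch assumes nothing about which methods the surveyed articles actually used:

```python
# Two common CI methods for a proportion can give noticeably different
# intervals, which is why knowing the method matters. A sketch comparing
# the Wald and Wilson score intervals (z = 1.96 for 95%).
from math import sqrt

def wald_ci(x, n, z=1.96):
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, z=1.96):
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g. 123 of 212 articles reported CIs (the study's own headline figure):
print(wald_ci(123, 212))    # ~ (0.514, 0.647)
print(wilson_ci(123, 212))  # ~ (0.513, 0.645)
```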


    Restoration of degraded moving image for predicting a moving object

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 2 2009
    Kei Akiyama
    Abstract Iterative optimal calculation methods have been proposed for the restoration of degraded static images based on multiresolution wavelet decomposition. However, it is quite difficult to apply these methods to moving images due to the high computational cost. In this paper, we propose an effective restoration method for degraded moving images that models the motion of the moving object and predicts its future position. We verified our method by computer simulations and experiments, which show that it achieves favorable results. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 92(2): 38–48, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10013 [source]
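
    The computational saving comes from predicting where the object will be rather than re-estimating everything per frame. A minimal constant-velocity predictor, as a generic illustration (the paper's motion model may be richer):

```python
# A minimal constant-velocity predictor, to illustrate the idea of reusing
# past object positions instead of re-estimating every frame. This is our
# illustration; the paper's motion model may be more elaborate.

def predict_next(positions):
    """positions: list of (x, y) object centroids from previous frames."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    return (2 * x2 - x1, 2 * y2 - y1)      # linear extrapolation

print(predict_next([(10, 5), (14, 6)]))    # (18, 7)
```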


    Development and application of a fatty acid based microbial community structure similarity index

    ENVIRONMETRICS, Issue 4 2002
    Alan Werker
    Abstract This article presents an index of similarity that has application in monitoring relative changes of complex microbial communities for the purpose of understanding the impact of community instability in biological wastewater treatment systems. Gas chromatographic data quantifying microbial fatty acid esters extracted from biosolids samples can be used to infer the occurrence of changes in mixed-culture community structure. One approach to rapidly assess the relative dissimilarity between samples is to calculate a similarity index scaled between 0 and 1. The many arbitrary scales associated with the available calculation methods for similarity indices limit the extent of their application. Therefore, a specialized index of similarity was derived from consideration of the measurement errors associated with the chromatographic data. The resultant calculation method provides a clear mechanism for calibrating the sensitivity of the similarity index, such that inherent measurement variability is accommodated and standardization of scaling is achieved. The similarity index sensitivity was calibrated with respect to an effective gas chromatographic peak coefficient of variation, and this calibration was particularly important for facilitating comparisons made between different systems or experiments. The proposed index of similarity was tested with data acquired from a recently completed study of contaminant removal from pulp mill wastewater. The results suggest that this index can be used as a screening tool to rapidly process microbial fatty acid (MFA) compositional data, with the objective of making preliminary identification of underlying trends in MFA community structure over time or between experimental conditions. Copyright © 2002 John Wiley & Sons, Ltd. [source]
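
    For orientation, a generic 0-to-1 index on fatty acid profiles can be as simple as a cosine measure; the sketch below is not the article's derived index, which is additionally calibrated to chromatographic coefficients of variation:

```python
import numpy as np

# A generic cosine-type similarity on relative peak areas, scaled 0 to 1.
# This is only an illustration of the kind of index discussed above, not
# the article's error-calibrated derivation.

def profile_similarity(a, b):
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a /= a.sum()                          # normalize to relative composition
    b /= b.sum()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = [40.0, 25.0, 20.0, 15.0]             # % composition, sample 1 (made up)
s2 = [38.0, 27.0, 19.0, 16.0]             # % composition, sample 2 (made up)
print(round(profile_similarity(s1, s2), 3))   # ~0.998
```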


    Fast analytical short-circuit current calculation of rectifier-fed auxiliary subsystems

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 3 2003
    M. Kunz
    The time behaviour of a three-phase rectifier system, with its alternating valve participation, can be interpreted as a continuous sequence of switching states. To allow a more convenient calculation, the equivalent circuit with the converter is transformed into state-space coordinates. Each operational mode of the rectifier can then be represented by two linearly independent space-phasor component networks. In state-space, an analytical solution of this boundary value problem can be carried out. After retransformation into the time domain, the time functions can be derived. In contrast to other calculation methods, no assumptions or simplifications have to be made, such as ideally smooth DC currents. Furthermore, all states of operation of the rectifier bridge can be easily calculated, from DC-side idle running to DC short-circuit. [source]
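
    Between switching instants the system is linear, so each switching state admits a closed-form state-space response. A generic sketch follows; the matrices here are placeholders, not a rectifier model:

```python
import numpy as np
from scipy.linalg import expm

# Closed-form response of x' = A x + B u for constant input u over one
# switching state: x(t) = e^{At} x0 + A^{-1} (e^{At} - I) B u.
# A and B below are arbitrary placeholders, not a rectifier model.

A = np.array([[0.0, 1.0],
              [-100.0, -2.0]])
B = np.array([0.0, 1.0])
u = 10.0                       # constant input during this switching state
x0 = np.zeros(2)

def propagate(A, B, u, x0, t):
    eAt = expm(A * t)
    return eAt @ x0 + np.linalg.solve(A, (eAt - np.eye(len(A))) @ B) * u

print(propagate(A, B, u, x0, 0.01))
```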


    Structural fire design according to Eurocode 5 – design rules and their background

    FIRE AND MATERIALS, Issue 3 2005
    Jürgen König
    Abstract This paper gives a review of the design rules of EN 1995-1-2, the future common code of practice for the fire design of timber structures in the Member States of the EU and EFTA, and makes reference to relevant research background. Compared with the European pre-standard ENV 1995-1-2, the new EN 1995-1-2 has undergone considerable changes. Charring is dealt with in a more systematic way, and different stages of protection and charring rates are applied. For the determination of cross-sectional strength and stiffness properties, two alternative rules are given: either their reduction due to elevated temperature is taken into account implicitly by reducing the residual cross-section by a zero-strength zone, or modification factors for strength and stiffness parameters are calculated. Design rules for charring and modification factors are also given for timber frame members of wall and floor assemblies with cavities filled with insulation. A modified components additive method has been included for the verification of the separating function. The design rules for connections have been systematized by introducing simple relationships between the load-bearing capacity (mechanical resistance) and time. The code provides for advanced calculation methods for thermal and structural analysis by giving thermal and thermo-mechanical properties for FE analyses. The code also gives some limited design rules for natural fire scenarios using parametric fire curves. Copyright © 2004 John Wiley & Sons, Ltd. [source]
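
    The reduced cross-section rule lends itself to a small worked sketch. The notional charring rate and zero-strength-zone values below are the commonly quoted EN 1995-1-2 figures for solid softwood, reproduced from memory as assumptions; verify against the standard before any real use:

```python
# Hedged sketch of the reduced cross-section (zero-strength-zone) idea:
# char depth plus a zero-strength layer is stripped from each exposed face.
# beta_n = 0.8 mm/min and d0 = 7 mm are the usually quoted EN 1995-1-2
# values for solid softwood; treat them as illustrative assumptions here.

def effective_section(b, h, t_min, beta_n=0.8, d0=7.0):
    """Width b and depth h in mm, fire duration t_min in minutes.
    Three-sided exposure: both vertical faces and the soffit char."""
    k0 = min(t_min / 20.0, 1.0)          # zero-strength zone builds up over 20 min
    d_ef = beta_n * t_min + k0 * d0      # effective charring depth per face
    return b - 2.0 * d_ef, h - d_ef

print(effective_section(140.0, 270.0, 30.0))   # R30: ~ (78.0, 239.0) mm
```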


    Linearization of second-order calibration curves in stable isotope dilution–mass spectrometry

    FLAVOUR AND FRAGRANCE JOURNAL, Issue 3 2001
    Laurent B. Fay
    Abstract The quantification of compounds using isotope dilution mass spectrometry requires the establishment of calibration curves prior to the determination of any unknown sample. When calibration over a wide concentration range is required and/or when an overlap exists between internal standard and analyte ions (as when mono- or di-isotopically labelled internal standards are used), second-order calibration curves are obtained. In this paper we have compared several calculation methods to linearize such calibration curves. We found that the method published by Bush and Trager gives a satisfactory linear relationship between the corrected amount ratio y = Ql/(Qu + tQl) (where Qu is the amount of unlabelled analyte, Ql the amount of labelled internal standard, and t the fixed fraction of the internal standard that is identical to the unlabelled analyte) and the ratio of unlabelled to labelled ion intensities. All the other calculation methods published so far fail to linearize a second-order calibration curve built up over a wide concentration range. Copyright © 2001 John Wiley & Sons, Ltd. [source]
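
    The curvature is easy to reproduce with a toy overlap model. The spillover fractions and amounts below are our assumptions, not the paper's data; they illustrate how a corrected amount ratio can restore linearity while the raw intensity ratio remains second-order:

```python
import numpy as np

# Hypothetical two-way overlap model (ours, not the paper's): a fraction t
# of the labelled standard shows up at the analyte m/z and a fraction s of
# the analyte at the standard m/z.
t, s = 0.08, 0.04
Ql = 10.0                                 # fixed amount of internal standard
Qu = np.linspace(0.5, 100.0, 50)          # analyte amounts, wide range

I_u = Qu + t * Ql                         # signal in the "unlabelled" channel
I_l = (1.0 - t) * Ql + s * Qu             # signal in the "labelled" channel

R = I_u / I_l                             # raw intensity ratio
y = Ql / (Qu + t * Ql)                    # corrected amount ratio (abstract)

# Quadratic fits: the raw curve needs a second-order term, whereas under
# this model I_l/I_u is exactly affine in y (quadratic coefficient ~ 0).
print(np.polyfit(Qu / Ql, R, 2))          # noticeable x^2 coefficient
print(np.polyfit(y, I_l / I_u, 2))        # x^2 coefficient ~ 0
```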


    Pharmacokinetic predictions in children by using the physiologically based pharmacokinetic modelling

    FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 6 2008
    F. Bouzom
    Abstract Nowadays, 50–90% of drugs used in children have never actually been studied in this population. Consequently, children are often exposed to the risk of adverse drug events or to a lack of efficacy, or are unable to benefit from a number of therapeutic advances offered to adults, because no clinical study has been properly performed in children. At present, the main methods used to calculate the dose for a child are allometric, taking into account the category of age, the body weight and/or the body surface area. Unfortunately, these calculation methods treat children as small adults, which they are not. Physiologically based pharmacokinetics is one way to integrate the physiological changes occurring in childhood and to anticipate their impact on the pharmacokinetic processes: absorption, distribution, metabolism and excretion/elimination. Through different examples, the application of this modelling approach is discussed as a possible and valuable method to minimize the ethical and technical difficulties of conducting research in children. [source]
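
    The allometric methods the abstract refers to are, in their textbook form, one-line formulas. A hedged sketch follows: the 0.75 exponent, the 70 kg and 1.73 m² adult references, and the Du Bois BSA formula are the usual defaults, and the numbers are purely illustrative, not dosing advice:

```python
# Textbook allometric dose scaling, for illustration only.

def dose_by_weight(adult_dose_mg, child_wt_kg, adult_wt_kg=70.0, exponent=0.75):
    return adult_dose_mg * (child_wt_kg / adult_wt_kg) ** exponent

def bsa_du_bois(wt_kg, ht_cm):
    # Du Bois body surface area formula, m^2
    return 0.007184 * wt_kg ** 0.425 * ht_cm ** 0.725

def dose_by_bsa(adult_dose_mg, child_bsa_m2, adult_bsa_m2=1.73):
    return adult_dose_mg * child_bsa_m2 / adult_bsa_m2

print(round(dose_by_weight(500.0, 20.0), 1))                   # ~195.4 mg
print(round(dose_by_bsa(500.0, bsa_du_bois(20.0, 110.0)), 1))  # ~224 mg
```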


    Linkage analysis with sequential imputation

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    Zachary Skrivanek
    Abstract Multilocus calculations, using all available information on all pedigree members, are important for linkage analysis. Exact calculation methods in linkage analysis are limited in either the number of loci or the number of pedigree members they can handle. In this article, we propose a Monte Carlo method for linkage analysis based on sequential imputation. Unlike exact methods, sequential imputation can handle large pedigrees with a moderate number of loci in its current implementation. This Monte Carlo method is an application of importance sampling, in which we sequentially impute ordered genotypes locus by locus, and then impute inheritance vectors conditioned on these genotypes. The resulting inheritance vectors, together with the importance sampling weights, are used to derive a consistent estimator of any linkage statistic of interest. The linkage statistic can be parametric or nonparametric; we focus on nonparametric linkage statistics. We demonstrate that accurate estimates can be achieved within a reasonable computing time. A simulation study illustrates the potential gain in power using our method for multilocus linkage analysis with large pedigrees. We simulated data at six markers under three models. We analyzed them using both sequential imputation and GENEHUNTER. GENEHUNTER had to drop between 38% and 54% of pedigree members, whereas our method was able to use all pedigree members. The power gains of using all pedigree members were substantial under two of the three models. We implemented sequential imputation for multilocus linkage analysis in a user-friendly software package called SIMPLE. Genet Epidemiol 25:25–35, 2003. © 2003 Wiley-Liss, Inc. [source]
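
    Sequential imputation is an application of importance sampling; the core of any such scheme is the self-normalized weighted estimator, sketched here on a toy Gaussian problem rather than a genetics model:

```python
import numpy as np

# Importance sampling in miniature: samples drawn from a tractable proposal
# are reweighted to estimate expectations under an intractable target.
# Toy setup (ours, not the paper's): target p = N(2, 1), proposal q = N(0, 2),
# estimating E_p[X] with the self-normalized estimator.
rng = np.random.default_rng(1)

x = rng.normal(0.0, 2.0, 100_000)                  # draws from the proposal
log_w = -0.5 * (x - 2.0) ** 2 \
        - (-0.5 * (x / 2.0) ** 2 - np.log(2.0))    # log p - log q (up to consts)
w = np.exp(log_w - log_w.max())                    # stabilized weights
print((w * x).sum() / w.sum())                     # ~2.0
```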


    A review of studies on the electric field and the current induced in a human body exposed to electromagnetic fields

    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 2 2006
    Tadasu Takuma Member
    Abstract How high an electric field or current is induced inside a human body exposed to an electromagnetic field has recently attracted much attention. The background for this is twofold: concern about the possible health effects of electromagnetic fields (usually called 'EMF issues'), and their positive application to medical treatment or new research subjects. This paper reviews various aspects of this topic in terms of the following items: basic formulas for field calculation, effects of electromagnetic fields, calculation methods, an Investigation Committee in the IEEJ, and future research subjects. © 2006 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]


    Analysis of microwave components and circuits using the iterative method

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2004
    A. Mami
    Abstract This article presents an efficient implementation of an iterative method that includes a fast-mode transformation (FMT). The method has the advantages of simplicity and of not requiring the basis functions and matrix inversions used in other calculation methods. Therefore, this approach has the potential to analyse larger bodies than other classical techniques. An implementation of the iterative calculation is shown for the extraction of S parameters of microwave components and antennas. The good agreement between the simulation results and published experimental data justifies the design procedure and validates the present analysis approach. © 2004 Wiley Periodicals, Inc. Int J RF and Microwave CAE 14, 404–414, 2004. [source]


    Irrigation and drainage systems research and development in the 21st century

    IRRIGATION AND DRAINAGE, Issue 4 2002
    Bart Schultz
    irrigation; drainage; sustainable development; network systems Abstract One critical problem confronting mankind today is how to manage the intensifying competition for water between expanding urban centres, traditional agricultural activities and in-stream water uses dictated by environmental concerns. In the agricultural sector, the dwindling number of economically attractive sites for large-scale irrigation and drainage projects limits the prospects of increasing the gross cultivated area. Therefore, the required increase in agricultural production will necessarily rely largely on a more accurate estimation of crop water requirements on the one hand, and on major improvements in the construction, operation, management and performance of existing irrigation and drainage systems, on the other. The failings of present systems and the inability to sustainably exploit surface and groundwater resources can be attributed essentially to poor planning, design, system management and development. This is partly due to the inability of engineers, planners and managers to adequately quantify the effects of irrigation and drainage projects on water resources and to use these effects as guidelines for improving technology, design and management. To take full advantage of investments in agriculture, a major effort is required to modernize irrigation and drainage systems and to further develop appropriate management strategies compatible with the financial and socio-economic trends, and the environment. This calls for a holistic approach to irrigation and drainage management and monitoring so as to increase food production, conserve water, prevent soil salinization and waterlogging, and to protect the environment. All this requires, among others, enhanced research and a variety of tools such as water control and regulation equipment, remote sensing, geographic information systems, decision support systems and models, as well as field survey and evaluation techniques. To tackle this challenge, we need to focus on the following issues: affordability with respect to the application of new technologies; procedures for integrated planning and management of irrigation and drainage systems; analysis to identify causes and effects constraining irrigation and drainage system performance; evapotranspiration and related calculation methods; estimation of crop water requirements; technologies for the design, construction and modernization of irrigation and drainage systems; strategies to improve irrigation and drainage system efficiency; environmental impacts of irrigation and drainage and measures for creating and maintaining sustainability; institutional strengthening, proper financial assessment, capacity building, training and education. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Generalization of rank reduction problems with Wedderburn's formula

    JOURNAL OF CHEMOMETRICS, Issue 11 2003
    Joan Ferré
    Abstract In first- and second-order calibration methods based on spectroscopic data, the calculation of the space spanned by the spectra of the interferences has been an important research subject for, among many other applications, calculating the net analyte signal and obtaining figures of merit. Recently, many different calculation methods have been introduced. We show that the calculation of this space can be interpreted from a unified point of view, namely from the rank-one downdating Wedderburn formula. This formula enables one to better understand the properties of the calculation methods currently available. A number of recently introduced signal-preprocessing methods also fit into the proposed framework. Copyright © 2004 John Wiley & Sons, Ltd. [source]
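
    A minimal numerical sketch of the Wedderburn rank-one downdating step the abstract refers to; here x and y are arbitrary vectors, whereas in calibration they would be tied to the interference spectra:

```python
import numpy as np

def wedderburn_deflate(A, x, y):
    """One Wedderburn rank-one downdating step:
    A' = A - (A x)(y^T A) / (y^T A x); rank(A') = rank(A) - 1
    whenever the scalar y^T A x is nonzero."""
    Ax = A @ x
    yA = y @ A
    s = y @ A @ x
    if abs(s) < 1e-12:
        raise ValueError("y^T A x is (numerically) zero; rank does not drop")
    return A - np.outer(Ax, yA) / s

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))   # rank 4
x = rng.standard_normal(5)
y = rng.standard_normal(6)
A1 = wedderburn_deflate(A, x, y)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A1))      # 4 3
```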


    Theoretical studies on the role of π-electron delocalization in determining the conformation of N-benzylideneaniline with three types of LMO basis sets

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 7 2006
    Peng Bao
    Abstract To understand the role of π-electron delocalization in determining the conformation of the NBA (Ph–N=CH–Ph) molecule, the following three LMO (localized molecular orbital) basis sets are constructed: an LFMO (highly localized fragment molecular orbital) basis set, an NBO (natural bond orbital) basis set, and a special NBO (NBO-II) basis set; their localization degrees are evaluated with our suggested index DL. The vertical resonance energy ΔEV is then obtained from Morokuma's energy partition over each of the three LMO basis sets: ΔEV = ΔEH (one-electron energy) + ΔEtwo (two-electron energy), with ΔEtwo = ΔECou (Coulomb) + ΔEex (exchange) + ΔEec (electron correction). ΔEH is always stabilizing, and ΔECou is always destabilizing. In the case of the LFMO basis set, ΔECou is so great that ΔEtwo > |ΔEH|; therefore ΔEV is always destabilizing, and is least destabilizing at about the θ = 90° geometry. Of the three calculation methods HF, DFT, and MPn (n = 2, 3, and 4), the MPn method yields the greatest ΔEV. In the case of the NBO basis set, on the contrary, ΔEV is stabilizing, because ΔECou is less destabilizing, and it is most stabilizing at a planar geometry. The LFMO basis set has the highest localization degree, and it is most appropriate for the energy partition. In the NBA molecule, π-electron delocalization is destabilizing, and it tends to distort the NBA molecule away from its planar geometry as far as possible. © 2006 Wiley Periodicals, Inc. J Comput Chem 27: 809–824, 2006 [source]


    Basics and applications of solid-state kinetics: A pharmaceutical perspective

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 3 2006
    Ammar Khawam
    Abstract Most solid-state kinetic principles were derived in the past century from those for homogeneous phases. Rate laws describing solid-state degradation are more complex than those for homogeneous phases. Solid-state kinetic reactions can be mechanistically classified into nucleation, geometrical contraction, diffusion, and reaction-order models. Experimentally, solid-state kinetics is studied either isothermally or nonisothermally. Many mathematical methods have been developed to interpret experimental data for both heating protocols. These methods generally fall into one of two categories: model-fitting and model-free. Controversies have arisen with regard to interpreting solid-state kinetic results, including variable activation energy, calculation methods, and kinetic compensation effects. Solid-state kinetic studies have appeared in the pharmaceutical literature over many years; some of the more recent ones are discussed in this review. © 2006 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 95:472–498, 2006 [source]
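
    As one concrete instance of a model-free method, the sketch below applies the Kissinger relation ln(β/Tp²) = −Ea/(R·Tp) + const to made-up peak temperatures; both the data and the choice of this particular method are ours, not the review's:

```python
import numpy as np

# Kissinger method: the activation energy follows from the slope of
# ln(beta / T_p^2) versus 1/T_p across several heating rates beta.
R = 8.314                                          # J/(mol K)

beta = np.array([2.0, 5.0, 10.0, 20.0])            # heating rates, K/min
T_p  = np.array([455.0, 466.0, 474.0, 483.0])      # peak temperatures, K (made up)

slope, intercept = np.polyfit(1.0 / T_p, np.log(beta / T_p**2), 1)
Ea = -slope * R
print(f"Ea = {Ea / 1000:.0f} kJ/mol")              # ~143 kJ/mol for this fake data
```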


    Lebensdauerermittlung bei mehrachsigen wechselnden Beanspruchungen im niedrigen und hohen Temperaturbereich

    MATERIALWISSENSCHAFT UND WERKSTOFFTECHNIK, Issue 9 2003
    E. Roos
    multiaxial fatigue; creep fatigue; stress theories; material laws Abstract Life-time assessment under multiaxial cyclic loading at low and high temperatures. Several methods are available for calculating the fatigue strength of components made of ductile materials under complex cyclic loading; they divide essentially into strength hypotheses of the integral approach and those of the critical plane approach. As typical representatives, the shear stress intensity hypothesis (SIH) and the critical plane method (MKS) are selected and compared for body-fixed and non-body-fixed principal stress directions. For synchronous loading, these calculation methods are additionally compared with the method based on Bach's strain ratio ('Anstrengungsverhältnis'). The calculation methodology becomes considerably more complex when time-dependent material properties at correspondingly high temperatures have to be taken into account. In this case the application of viscoplastic constitutive laws is required, which allows creep and fatigue to be described in combination. The calculation of multiaxial creep-fatigue tests is presented using a modified material model after Chaboche/Nouailhas as an example. [source]


    Analytical evaluation of the Voigt function using binomial coefficients and incomplete gamma functions

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2008
    B. A. Mamedov
    ABSTRACT Using the binomial expansion theorem, simple general analytical expressions are obtained for the Voigt function arising in various fields of physical research. As will be seen, the present formulation yields compact closed-form expressions which enable the ready analytical calculation of the Voigt function. The validity of this approximation is tested against other calculation methods. The series expansion relations established in this work are sufficiently accurate over the whole range of parameters. The convergence rate of the series is estimated and discussed. Some examples of this methodology are presented. [source]
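
    A brute-force reference for such validity checks is direct quadrature of the defining integral K(x, y) = (y/π) ∫ exp(−t²) / (y² + (x − t)²) dt; the sketch below is such a reference implementation, not the paper's series:

```python
import numpy as np

# Direct trapezoidal quadrature of the Voigt integral, as a slow but
# simple reference against which fast approximations can be checked.

def voigt_K(x, y, t_max=30.0, n=200001):
    t = np.linspace(-t_max, t_max, n)
    f = np.exp(-t**2) / (y**2 + (x - t)**2)
    dt = t[1] - t[0]
    return (y / np.pi) * dt * (f.sum() - 0.5 * (f[0] + f[-1]))

print(voigt_K(0.0, 1.0))   # ~0.42758, i.e. exp(1)*erfc(1)
```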


    Two methods for calculating the amount of refrigerant required for cyclic temperature testing of insulated packages

    PACKAGING TECHNOLOGY AND SCIENCE, Issue 2 2007
    Kazuhisa Matsunaga
    Abstract This paper describes two calculation methods for estimating the amount of refrigerant required to maintain the temperature inside a small insulating container within a desired range under cyclic temperature conditions. The first calculation method is for a phase change material (PCM) that absorbs and releases heat by melting and solidifying. The PCM used in this study had a phase change temperature of 23.5°C (74.3°F). An equation for estimating the amount of the PCM required under cyclic conditions is shown. Test packages were constructed to meet USP Controlled Room Temperature (CRT) requirements. Several cyclic tests were conducted with the calculated amount of PCM in the test packages. The results showed that the calculated amount of PCM did maintain the inside temperature within the range 21.4–25.8°C (70.5–78.4°F) throughout the tests. This range met the USP CRT requirement. The second calculation method is for unfrozen gel packs that absorb and release heat by changing temperature. The amount of unfrozen gel pack required to maintain temperature within the USP range was calculated. Several cyclic tests were conducted with the calculated amount of gel packs. The calculated amount was enough to meet the USP requirement. Good agreement between the experimental and calculated temperature profiles was also found. Copyright © 2006 John Wiley & Sons, Ltd. [source]
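
    The first method rests on a heat balance: over one ambient cycle, the melting PCM must absorb the net heat conducted into the container. A sketch with illustrative numbers; the conductance, cycle and latent heat below are our assumptions, not the paper's values:

```python
# Heat-balance sketch: PCM mass = heat conducted in while ambient is above
# the phase change temperature, divided by the latent heat of fusion.

def pcm_mass_kg(ua_w_per_k, t_amb_profile, t_pcm=23.5, dt_s=60.0,
                latent_heat_j_per_kg=200e3):
    """ua_w_per_k: overall container conductance U*A in W/K.
    t_amb_profile: ambient temperatures (deg C) sampled every dt_s seconds."""
    q_in = sum(max(0.0, ua_w_per_k * (t - t_pcm)) * dt_s
               for t in t_amb_profile)        # J absorbed during hot phases
    return q_in / latent_heat_j_per_kg

# 24 h cycle: 12 h at 35 C, 12 h at 15 C, sampled each minute (made up)
profile = [35.0] * 720 + [15.0] * 720
print(round(pcm_mass_kg(0.05, profile), 3), "kg")   # ~0.124 kg
```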


    Critical buckling load of paper honeycomb under out-of-plane pressure

    PACKAGING TECHNOLOGY AND SCIENCE, Issue 3 2005
    Li-Xin Lu
    Abstract Two out-of-plane buckling criteria for paper honeycomb are proposed by analysing the structural properties and the collapse mechanism of paper honeycomb; these are based on the peeling strength and the ring crush strength of the chipboard wall. Taking into account the orthotropy, initial deflection and large-deflection behaviour of the chipboard wall, two new mechanical models and the corresponding calculation methods are developed to represent the out-of-plane critical load of paper honeycomb. Theoretical calculations and test results show that the models are suitable for describing the collapse mechanism of paper honeycomb. The peeling strength and the ring crush strength determine the critical buckling load of paper honeycomb in different stretch phases. The out-of-plane critical buckling load can be predicted when the two models are integrated. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Reducing Infant Mortality Rates Using the Perinatal Periods of Risk Model

    PUBLIC HEALTH NURSING, Issue 1 2005
    Paulette G. Burns
    Abstract Despite decreases in the last 50 years, infant mortality rates in the United States remain higher than in other industrialized countries. Using overall infant mortality rates to determine the effectiveness of interventions does not help communities focus on the particular underlying factors contributing to static, and sometimes increasing, community rates. This study was designed to determine and rank contributing factors to fetal-infant mortality in a specific community using the Perinatal Periods of Risk (PPOR) model. The PPOR model was used to map fetal-infant mortality for 1995 to 1998 in the Tulsa, Oklahoma, Healthy Start Program, as compared to traditional calculation methods. The overall fetal-infant mortality rate using the PPOR model was 12.7, compared to 7.11 calculated using the traditional method. The maternal health cell rate was 5.4 and the maternal care cell rate 2.9; the newborn care cell rate was 1.9, compared to a neonatal death rate of 4.1 calculated using the traditional method, and the infant health cell rate was 2.4, compared to a postneonatal rate of 2.9 calculated using the traditional method. Because the highest infant mortality was in the maternal health cell, intervention strategies were designed to promote the health of women prior to and between pregnancies. The PPOR model was helpful in targeting interventions to reduce fetal-infant mortality based on the prioritization of contributing factors. [source]
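
    The PPOR "map" assigns each death to one of four cells by birthweight and period of death. A sketch using the customary 500 g / 1500 g and 28-day cut points, quoted from general descriptions of the model rather than from this study:

```python
# Sketch of the PPOR cell assignment; thresholds are the commonly described
# ones for the model and should be checked against the study's own mapping.

def ppor_cell(birthweight_g, period):
    """period: 'fetal', 'neonatal' (0-27 d) or 'postneonatal' (28-364 d)."""
    if birthweight_g < 500:
        return None                       # below the usual PPOR threshold
    if birthweight_g < 1500:
        return "maternal health / prematurity"
    return {"fetal": "maternal care",
            "neonatal": "newborn care",
            "postneonatal": "infant health"}[period]

print(ppor_cell(1200, "neonatal"))   # maternal health / prematurity
print(ppor_cell(3100, "fetal"))      # maternal care
```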


    Technical note: Standardized and semiautomated Harris lines detection

    AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 3 2008
    S. Suter
    Abstract Arrest in long bone growth and the subsequent resumption of growth may be visible as radiopaque transverse lines in radiographs (Harris lines, HL; Harris, HA. 1933. Bone growth in health and disease. London: Oxford University Press). The assessment of individual age at the occurrence of such lines, as part of paleopathological skeletal studies, is time-consuming and shows large intra- and interobserver variability. Thus, a standardized, automated detection algorithm would help to increase the validity of such paleopathological research. We present an image analysis application facilitating automatic detection of HL. On the basis of established age calculation methods, the individual age-at-formation can be automatically assessed with the tool presented. Additional user input to confirm the automatic result is possible via an intuitive graphical user interface. Automated detection of HL from digital radiographs of a sample of late Medieval Swiss tibiae was compared to the consensus of manual assessment by two blinded expert observers. The intra- and interobserver variability was high. The quality of the observer result improved when standardized detection criteria were defined and applied. The newly developed algorithm detected two-thirds of the HL that were identified as consensus lines between the observers; it was, however, necessary to validate the remaining one-third by manual editing. The lack of a large test series must be noted. The application is freely available for further testing by any interested researcher. Am J Phys Anthropol, 2008. © 2008 Wiley-Liss, Inc. [source]
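
    Because Harris lines run roughly transverse to the shaft, a first-pass detector can look for peaks in the row-wise mean intensity of a cropped diaphysis. The sketch below is a generic illustration of that idea, not the paper's algorithm:

```python
import numpy as np

# Generic transverse-line candidate detector: high-pass the row-wise mean
# intensity profile and keep prominent local maxima. Illustrative only.

def candidate_lines(img, min_prominence=5.0):
    """img: 2-D array of a cropped diaphysis, brighter = more radiopaque."""
    profile = img.mean(axis=1)                            # mean per row
    baseline = np.convolve(profile, np.ones(25) / 25.0, mode="same")
    residual = profile - baseline                         # sharp lines remain
    return [i for i in range(1, len(residual) - 1)
            if residual[i] > min_prominence
            and residual[i] >= residual[i - 1]
            and residual[i] >= residual[i + 1]]           # local maxima
```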


    Effect of D-mannitol on feed digestion and cecotrophic system in rabbits

    ANIMAL SCIENCE JOURNAL, Issue 2 2009
    Hamza HANIEH
    ABSTRACT This study aimed to evaluate the effect of a sugar alcohol, as an energy source for cecal microbes, on digestibility, cecotrophy (i.e. reingestion of the microbial products of the cecum, cecotrophs) and performance in rabbits. We fed rabbits an experimental diet that included 5% D-mannitol, and collected hard feces and cecotrophs for analysis of crude protein (CP), acid detergent fiber (ADF), ether extract (EE), crude ash (CA) and dry matter (DM). Cecotrophic behavior of the rabbits was also observed. Feeding D-mannitol increased (P < 0.01) the digestibility of ADF, resulting in a decrease (P < 0.05) in its concentration in hard feces. The increase (P < 0.05) in CP concentration was attributed to lower (P < 0.05) digestibility. D-mannitol had a similar modulatory effect on CP and ADF concentrations in hard feces and cecotrophs. Accordingly, estimates of the proportion of nutrients recycled by cecotrophy relative to dietary intake (PR), obtained by the two calculation methods, showed an increase (P < 0.01) in the PR of CP and a decrease (P < 0.05) in that of ADF. Daily weight gain and feed efficiency increased (P < 0.05) for D-mannitol-fed rabbits, while daily feed intake decreased (P < 0.05). These results suggest the possibility of using D-mannitol as a stimulator of cecal microbial growth and cellulolytic activity and, therefore, of improving rabbit performance. [source]
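
    The digestibility and recycling figures rest on simple mass balances. The abstract does not spell out its two PR calculation methods, so the sketch below shows only the generic form, with made-up numbers:

```python
# Generic nutrient mass balances behind digestibility and recycling
# proportions; values in g/day are illustrative, not the study's data.

def apparent_digestibility(intake_g, hard_feces_g):
    return (intake_g - hard_feces_g) / intake_g

def proportion_recycled(cecotroph_g, intake_g):
    # nutrient reingested via cecotrophs relative to dietary intake
    return cecotroph_g / intake_g

print(apparent_digestibility(100.0, 18.0))   # 0.82
print(proportion_recycled(9.0, 100.0))       # 0.09
```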


    Schallfeldsimulation mit Spiegelquellen – Eine Planungshilfe für reflexionsarme Räume

    BAUPHYSIK, Issue 4 2009
    Xueqin Zha Prof.
    sound; sound protection and acoustics; calculation methods Abstract Sound field simulation by image sources – a design tool for anechoic rooms. Conventional designs of anechoic (free-field) rooms according to ISO 3745 often carry risks, because the usual assumption of a 99% absorption coefficient at normal incidence for traditional fibrous or porous claddings may be unnecessary in one case and insufficient in another. A simulation program is presented which works with the phase-correct superposition of the sound waves from a real point source and a series of image sources that replace the imperfectly absorbing bounding surfaces of the room. It reveals various effects which can influence the free-field characteristics as strongly as the absorption coefficient of the cladding. In this way, more confidence can be gained at an early stage of the planning process in the quality of an acoustic test facility for determining the sound power, spectrum and directivity of technical sound sources. [source]


    Anwendung von massiv paralleler Berechnung mit Grafikkarten (GPGPU) für CFD-Methoden im Brandschutz

    BAUPHYSIK, Issue 4 2009
    Hendrik C. Belaschk Dipl.-Ing.
    calculation methods; fire protection engineering Abstract Application of general-purpose computing on graphics processing units (GPGPU) in CFD techniques for fire safety simulations. The use of fire simulation programs based on computational fluid dynamics (CFD) techniques is becoming more and more widespread in practice. The increase in available computing power enables the effects of possible fire scenarios to be modelled in order to derive useful information for practical applications (e.g. verification of the reliability of fire protection concepts). However, despite the progress achieved, the performance of currently available computers falls far short of what would be needed to simulate a building fire, including all relevant physical and chemical processes, with the highest possible accuracy. The models for calculating the spread of fire and smoke implemented in the computer programs therefore always represent a compromise between practical computing efficiency and the level of modelling detail. This paper illustrates the reasons for the high computational demand of CFD techniques and describes the potential problems and sources of error resulting from the simplifications applied in the models. In addition, a new technology approach is presented that massively increases the computing power of a personal computer using special software and commodity 3D graphics cards. Using the Fire Dynamics Simulator (FDS) as an example, it is demonstrated that the calculation time required for a fire simulation on a personal computer can be reduced by a factor of 20 and more. [source]


    Cerebral palsy and intrauterine growth in single births: European collaborative study

    CHILD: CARE, HEALTH AND DEVELOPMENT, Issue 2 2004
    Richard Reading
    Background Cerebral palsy seems to be more common in term babies whose birthweight is low for their gestational age at delivery, but past analyses have been hampered by small datasets and Z-score calculation methods. Methods We compared data from 10 European registers for 4503 singleton children with cerebral palsy born between 1976 and 1990 with the number of births in each study population. The weight and gestation of these children were compared with reference standards for the normal spread of gestation and weight-for-gestational age at birth. Findings Babies of 32–42 weeks' gestation with a birthweight for gestational age below the 10th percentile (using fetal growth standards) were 4–6 times more likely to have cerebral palsy than children in a reference band between the 25th and 75th percentiles. In children with a weight above the 97th percentile, the increased risk was smaller (from 1.6 to 3.1), but still significant. Those with a birthweight about 1 SD above average always had the lowest risk of cerebral palsy. A similar pattern was seen in those with unilateral or bilateral spasticity, and in those with a dyskinetic or ataxic disability. In babies of less than 32 weeks' gestation, the relation between weight and risk was less clear. Interpretation The risk of cerebral palsy, like the risk of perinatal death, is lowest in babies who are of above-average weight-for-gestation at birth, but risk rises when weight is well above normal as well as when it is well below normal. Whether deviant growth is the cause or a consequence of the disability remains to be determined. [source]
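
    The weight-for-gestational-age comparison comes down to a Z-score against a reference distribution. A sketch assuming a normal reference with hypothetical means and SDs per gestational week:

```python
from math import erf, sqrt

# Hypothetical reference values (g): {gestational week: (mean, SD)}.
# These numbers are illustrative placeholders, not a published standard.
REF = {40: (3500.0, 450.0), 38: (3200.0, 430.0)}

def z_score(weight_g, ga_weeks):
    mean, sd = REF[ga_weeks]
    return (weight_g - mean) / sd

def percentile(z):
    # standard normal CDF expressed via erf, as a percentage
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

z = z_score(2450.0, 40)
print(round(z, 2), round(percentile(z), 1))   # -2.33 -> ~1.0th percentile
```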